http://arxiv.org/abs/2306.03132v1
20230605180003
Benchmarking Gaia DR3 Apsis with the Hyades and Pleiades open clusters
[ "Wolfgang Brandner", "Per Calissendorff", "Taisiya Kopytova" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany [email protected] Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA The Gaia astrophysical parameters inference system (Apsis) provides astrophysical parameter estimates for several to 100s of millions of stars. We aim to benchmark Gaia DR3 Apsis. We have compiled ≈1500 bona fide single stars in the Hyades and Pleiades open clusters for validation of PARSEC isochrones, and for comparison with Apsis estimates. PARSEC stellar isochrones in the Gaia photometric system enable us to assign average ages and metallicities to the clusters, and mass, effective temperature, luminosity, and surface gravity to the individual stars. Apsis does not recover the single-age, single-metallicity characteristic of the cluster populations. Ages assigned to cluster members seemingly follow the input template for Galactic populations, with earlier-type stars systematically being assigned younger ages than later-type stars. Cluster metallicities are underestimated by 0.1 to 0.2 dex. Effective temperature estimates are in general reliable. Surface gravity estimates reveal strong systematics for specific ranges of Gaia BP-RP colours. We caution that Gaia DR3 Apsis estimates can be subject to significant systematics. Some of the Apsis estimates, like metallicity, might only be meaningful for statistical studies of the time-averaged Galactic stellar population, but are not recommended to be used for individual stars. Benchmarking Gaia DR3 Apsis with the Hyades and Pleiades open clusters Wolfgang Brandner1 Per Calissendorff2 Taisiya Kopytova1 Received 2 May 2023 / Accepted 5 June 2023 ======================================================================== § INTRODUCTION Precisely determined stellar parameters are the key to many astrophysical investigations. The Gaia astrophysical parameters inference system <cit.> was devised to extend the scientific content and potential of Gaia beyond its core astrometric mission of providing precise measurements of positions, parallaxes, and proper motions of more than 10^9 sources. Gaia Data Release 3 (DR3) includes astrophysical parameters of up to 470 million sources. The Apsis estimates are based on averaged low-resolution spectra in the Gaia BP and RP bands with a resolution of R=20 to 60 combined with Gaia photometry and astrometry, and on data from the Radial Velocity Spectrograph (RVS), which provides R≈11500 spectra in the wavelength range 846 to 870 nm for stars with G≲15.2 mag. From this stellar astrophysical parameters like effective temperature, age, mass, surface gravity log g, global metallicity [M/H] and elemental abundances, etc. are inferred <cit.>. Gaia Apsis promises to unlock vastly improved insights into a variety of research topics, like the Solar neighbourhood and young nearby moving groups, stellar structure and evolution, open and globular clusters, and the overall structure of the Milky Way with its distinct streams, bar, and spiral arms, and more diffuse disk and halo components. Precise stellar astrophysical parameters are also essential for the study of exoplanets, whose properties are in general determined relative to the properties of the stellar host <cit.>. As pointed out by <cit.>, the veracity of the Apsis methods can only be assessed on real data. 
This provides our motivation to use independently determined properties of bona fide single stars in the Hyades and Pleiades open clusters as external validations of — and benchmarks for — Gaia DR3 Apsis estimates. The structure of the paper is as follows. In Sect. <ref> we review the compilation of the single star sequence of the Hyades open cluster, and apply the same methodology to the Pleiades to validate PARSEC isochrones. In Sect. <ref> we compare cluster ensemble and individual stellar astrophysical properties with Apsis estimates. In Sect. <ref> we discuss the findings and potential explanations for some of the issues identified. We conclude in Sect. <ref> with recommendations on the utilisation of DR3 Apsis estimates and an outlook. § SINGLE STAR SEQUENCES IN THE HYADES AND PLEIADES At average distances of ≈45 pc <cit.> and ≈135 pc <cit.>, respectively, the Hyades and the Pleiades are among the nearest open clusters. Their proximity ensures high-quality Gaia astrometric, photometric, and spectroscopic measurements over a wide range of spectral types and stellar masses. Age estimates for the Hyades derived from isochrone fitting are in the range 500 to 800 Myr <cit.>. For the Pleiades the age estimates derived from isochrone fitting centre around 100 to 125 Myr (Table <ref>), which is in good agreement with age estimates based on the lithium depletion boundary <cit.>. The stellar samples of candidate members of the open clusters are based on <cit.> for the Hyades, and on <cit.> for the Pleiades. In <cit.> we processed the Hyades sample by identifying and removing stars with dubious photometry in at least one of the Gaia DR3 G, BP, or RP bands. Next we used the Gaia Renormalised Unit Weight Error (RUWE) parameter to identify likely astrometric binaries. In a final step, likely photometric (i.e. blended) binaries were identified based on their distance from the median stellar sequence in colour-magnitude space, taking into account the uncertainties in absolute brightness and colour. This resulted in a sample of ≈600 bona fide single stars in the Hyades open cluster. In <cit.> we derived the astrophysical parameters for these stars by comparison with the best fitting family of isochrones from the PAdova and TRieste Stellar Evolution Code <cit.>. Here we apply the same methodology to the Pleiades. The sample of ≈1300 stars within 10 pc of the nominal centre of the Pleiades open cluster as defined by <cit.> includes 1191 sources with -0.2≤BP-RP<3.9 mag and -2.0≤G_ abs≤ 13.5 mag. After rejection of 112 stars with dubious Gaia photometry, and the identification of 171 stars as likely astrometric or photometric binaries, we retain a sample of 908 bona fide single stars. We employ the PARSEC CMD 3.7 web interface[http://stev.oapd.inaf.it/cmd] to obtain version 1.2S grids of non-rotating isochrones in the Gaia EDR3 photometric system for a range of ages and [M/H], and for a visual extinction of A_ V = 0.12 mag <cit.>[We attribute the higher A_ V=0.20 mag reported by <cit.> to their use of strictly solar metallicity [M/H] = 0 isochrones.]. Absolute G_ abs magnitudes are based on the photo-geometric distance estimates provided by <cit.>. χ^2 minimisation is used to find the best fitting isochrones to the 235 brightest (G_ abs≤ 8.0 mag) bona fide single upper-main-sequence stars. This yields lg(age [yr]) = 8.10 ± 0.07 and [M/H] = 0.07 ± 0.03 for the Pleiades open cluster.
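The χ^2 minimisation over the (age, [M/H]) isochrone grid described above can be sketched as follows. This is a minimal illustration in Python, not the code used for the paper: the star and isochrone containers, their column names, and the choice to fold the colour uncertainty into the magnitude residual via the local slope of the isochrone are assumptions made for the example; the text only states that the uncertainties in absolute brightness and colour are taken into account.

import numpy as np

def chi2_isochrone(stars, iso):
    # Predicted absolute G magnitude at each star's BP-RP colour, by linear
    # interpolation along the isochrone (assumes BP-RP increases monotonically
    # along the fitted upper main sequence).
    g_pred = np.interp(stars["bp_rp"], iso["bp_rp"], iso["g_abs"])
    # Local slope dG/d(BP-RP), used to project the colour uncertainty onto the
    # magnitude axis.
    slope = np.interp(stars["bp_rp"], iso["bp_rp"],
                      np.gradient(iso["g_abs"], iso["bp_rp"]))
    sigma2 = stars["sig_g"] ** 2 + (slope * stars["sig_bp_rp"]) ** 2
    return np.sum((stars["g_abs"] - g_pred) ** 2 / sigma2)

def best_fit(stars, grid):
    # grid: iterable of (log_age, m_h, iso) tuples covering the PARSEC grid.
    scores = [(chi2_isochrone(stars, iso), log_age, m_h)
              for log_age, m_h, iso in grid]
    return min(scores)  # (chi2_min, best lg(age), best [M/H])

In practice the quoted uncertainties on lg(age) and [M/H] would additionally require mapping out the χ^2 surface around the minimum rather than returning only the best grid point.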
The colour-magnitude diagram (CMD) of ≈900 bona fide single stars in the Pleiades open cluster, and the family of best fitting PARSEC isochrones is shown in Fig. <ref>. Similar to the Hyades, PARSEC isochrones provide an excellent fit to the observed sequence for G_ abs⪅ 9.0 mag. For 9.0<G_ abs<11.0 mag, the isochrones predict fainter G magnitudes, respectively bluer BP-RP colours, by up to 0.15 mag. For still fainter, mid- to late M-dwarfs, the isochrones tend to predict too bright G magnitudes, respectively too red BP-RP colours (Fig. <ref>). For each star, we calculate its distances to the best fitting interpolated isochrone in colour-magnitude space to assign T_ eff, log g, log L, and (present-day) mass. The uncertainties in astrophysical parameters take the uncertainties in the Gaia measurements, and in the [M/H] and age estimates of the isochrone into account. The derived properties of the bona fide single stars in the Pleiades are listed in Table <ref>. The corresponding stellar properties for the Hyades open cluster are presented in Table 1 in <cit.>. § COMPARISON WITH GAIA APSIS §.§ Cluster ensemble properties: Age and Metallicity A defining characteristics of stellar populations in open clusters is their small spread in age and abundances <cit.>. In order to test how well Apsis recovers these essential properties, we compare the estimates for individual stars with the ensemble average for each cluster as derived by isochrone fitting. Apsis age estimates originate in the Final Luminosity Age and Mass Estimator (FLAME) module <cit.>. FLAME in turn relies on input both from the Generalized Stellar Parametrizer – Photometry (GSP-Phot) and the Generalized Stellar Parametrizer – Spectroscopy (GSP-Spec). In Fig. <ref> we plot FLAME mean age estimates along with the lower 16th and 84th quantile for 109 stars in the Hyades and 186 stars in the Pleiades against their BP-RP colour. FLAME lower and upper age estimates are capped between 0.2 and 13.5 Gyr[We note that this contradicts the lower limit of 0.1 Gyr quoted in the Gaia DR3 documentation release version 1.2 (7. Feb. 2023), section 11.3.6]. For both clusters, the FLAME age estimates reveal a strong correlation with colour for BP-RP ≤ 0.8 mag. For BP-RP > 1.0 mag the majority of age estimates tend to scatter around ages of ≈11 Gyr. Weighted-mean uncertainties yield < age_FLAME>_ wm = 1.73± 0.04 Gyr for the Hyades and < age_FLAME>_ wm = 0.26± 0.01 Gyr for the Pleiades. The Apsis GSP-Phot and GSP-Spec modules independently provide [M/H] estimates. In order to test for their consistency, we identified all stars, which have [M/H] estimates from both modules. In Fig. <ref> we plot the [M/H] estimates along with diagonal dashed lines marking a 1:1 correspondence. GSP-Phot [M/H] are the original published Apsis values. A recalibration using Code Version 1.6 of the gdr3apcal Python package[ codebase: https://github.com/mpi-astronomy/gdr3apcal] according to <cit.> results in revised [M/H] estimates for Pleiades stars with a wider intrinsic spread, and which are on average 1.0 dex lower than the original Apsis estimates. As a low average [M/H]≈ -1.1 appears non-physical for member stars of the Pleiades <cit.>, we reject the re-calibration of the GSP-Phot [M/H]. GSP-Spec [M/H] estimates are re-calibrated according to <cit.>, and using the polynomial coefficients applicable to open clusters. This results in an average correction of ≈0.09 dex towards higher [M/H] values. §.§ Individual stellar properties: T_ eff, log g, mass In Fig. 
<ref>, top, we plot the Apsis deduced effective temperatures for stars in the Hyades as a function of their BP-RP colour. On the left we show the GSP-Phot derived T_ eff, which reveals a monotonic and relatively smooth decline with increasing colour up to BP-RP⪅1.35 mag. In the range 1.35<BP-RP<3.0 mag, the GSP-Phot temperature estimates bifurcate. Overplotted in blue is the T_ eff vs. colour relation according to the best fitting PARSEC isochrone. Compared to the isochrone, GSP-Phot T_ eff estimates are systematically lower by 150 to 250 K for BP-RP⪅1.35 mag, while the bifurcated GSP-Phot T_ eff estimates for BP-RP>1.35 mag seemingly bracket the temperature according to the isochrone. For BP-RP>3.0 mag the GSP-Phot T_ eff estimates are systematically higher by ≈50 K than implied by the isochrone. On the right, we show GSP-Spec T_ eff, which also reveals a bifurcation for the colour range 1.35<BP-RP<1.65 mag. For reference, the corresponding stellar masses according to the isochrone are indicated on the top abscissa. Shown in the bottom of Fig. <ref> are effective temperature vs. colour for the Pleiades. Noticeable is the apparent gap in GSP-Phot derived T_ eff in the range 8500 to 9500 K with the majority of the stars with BP-RP<0.3 mag being assigned T_ eff>9500 K. For redder colours 0.3<BP-RP<1.3 mag GSP-Phot T_ eff falls by a few 100 K systematically below – and for BP-RP>1.7 mag above – the colour-temperature relation of the best fitting PARSEC isochrone. We note that a few stars in the BP-RP colour range from 1.0 to 2.0 mag have GSP-Phot temperature which fall significantly above the isochrone relation. The GSP-Spec module could deduce effective temperatures only for 115 of the bona fide single stars in the Pleiades. While the majority follows closely the T_ eff–colour relation of the best fitting isochrone, a few stars stand out with having significantly lower GSP-Spec temperatures. In Fig. <ref>, top, we plot the Apsis deduced surface gravity for stars in the Hyades as a function of their BP-RP colour. On the left we show the GSP-Phot derived log g. Similar to the T_ eff estimates, log g follows a smooth, monotonic relation with colour for BP-RP⪅1.35 mag. In the colour range 1.35<BP-RP<3.0 mag, GSP-Phot log g estimates bifurcate. For BP-RP<3.0 mag GSP-Phot log g [cm/s^2] estimates are typically 0.1 to 0.2 lower than suggested by the isochrone, while they fall above the isochrone value by ≈0.03 for redder colours. On the right we show the re-calibrated GSP-Spec log g estimates according to <cit.>. On average the recalibration lowers log g [cm/s^2] by ≈0.3 compared to the original DR3 Apsis estimates (e.g. from 5.0 to 4.7). The log g [cm/s^2] values stay almost constant at 4.7 for 0.7<BP-RP<2.2 mag, with the exception of a dip in the range 1.35<BP-RP<1.70 mag. Shown in the bottom of Fig. <ref> are the corresponding plots for the Pleiades. GSP-Phot log g [cm/s^2] estimates for BP-RP<1.7 mag fall below the isochrone by up to 0.2. In the colour range 1.9<BP-RP<3.0 mag GSP-Phot log g estimates bifurcate. For red colours (BP-RP>3.0 mag) the GSP-Phot log g estimates are systematically higher than suggested by the best fitting isochrone. Akin to the single stars in the Hyades, the re-calibrated GSP-Spec log g [cm/s^2] estimates for Pleiades members tend to cluster around 4.7. § DISCUSSION OF BENCHMARKING RESULTS §.§ Age and FLAME The Apsis FLAME module is tuned using supervised learning <cit.> based on templates, which include a model of the Milky Way. 
Stars get assigned a population identification (popID) derived from the age bins defined by <cit.> for the different Galactic populations, i.e. thin disk (popID 0 to 6), thick disk (popID 7), halo (popID 8), plus additional popIDs for bulge stars, stars belonging to the Magellanic Clouds, or to open clusters. The apparent age-colour relation in Fig. <ref> suggests that the Apsis age inference is dominated by the model of the underlying Galactic population. FLAME seems to assign stars a popID, and hence an "age" primarily based on their effective temperature. FLAME uses a solar [M/H] prior <cit.>, which should not be a major limitation towards its applicability to the near-solar [M/H] of the Hyades ([M/H] = 0.18±0.03) and Pleiades ([M/H] = 0.07±0.03) open clusters. Of note is a potential mismatch between different stellar evolutionary models employed by GSP-Phot, which provides inputs to FLAME, and FLAME itself. According to <cit.> GSP-Phot employs PARSEC 1.2S Colibri S37 models <cit.>, i.e the same set of models we use for the determination of the stellar parameters of the single stars in the Hyades <cit.> and Pleiades. FLAME itself, though, employs Bag of Stellar Tracks and Isochrones <cit.>. Figure 12 in <cit.> highlights the close match of BaSTI and MIST isochrones <cit.>, and the mismatch between BaSTI and PARSEC isochrones for young late-type stars of approximately solar abundance. As discussed in <cit.>, MIST models tend to systematically predict fainter absolute magnitudes G_ abs, respectively predict bluer BP-RP colours for bona fide single stars in the Hyades with T_ eff≤ 4700 K than is actually observed by Gaia DR3. PARSEC isochrones provide a considerably better fit to the observed single star sequence <cit.>. A further limitation of Gaia DR3 FLAME is the apparent lower age limit of 200 Myr. This precludes a proper age dating of Pleiades members. In summary, individual FLAME age estimates seem of very limited informative power. §.§ Metallicity and GSP-Phot/GSP-Spec In addition to the low resolution spectroscopic information in the BP and RP bands, GSP-Phot [M/H] estimates also incorporate Gaia parallax measurements, and use both theoretical model spectra and isochrones <cit.>. <cit.> caution against the use of GSP-Phot [M/H] estimates “without further investigation”, and state that [M/H] estimates appear on average too low by 0.2 dex when compared to literature data for sources with [M/H]>-1 dex. This is in agreement with our findings. Both for the Hyades and Pleiades samples we find that the weighted mean [M/H] estimates of the bona fide single stars with GSP-Phot [M/H] estimates fall below the estimates derived from families of best fitting isochrones by 0.15 to 0.2 dex (Fig. <ref>). GSP-Phot [M/H] estimates of individual stars, though, exhibit a much larger scatter than supported by the 16% and 84% quantiles of the Markov-Chain Monte-Carlo results reported in Apsis, suggesting that the uncertainties in the parameter estimates for individual stars are underestimated by a factor of 10 to 20. A re-calculation of [M/H] using the Python package gdr3apcal (Code Version 1.6) makes the discrepancy worse. Thus we agree with the recommendation by <cit.> to not use GSP-Phot [M/H] estimates of individual stars. GSP-Spec [M/H] estimates are based on Gaia RVS and grids of model spectra spanning the full range of Galactic stellar populations. The inherent estimation bias of this module has been quantified by <cit.>, who also provide a prescription for correction. 
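The published prescription is additive in [M/H]; in its general form it is a low-order polynomial in the uncalibrated GSP-Spec log g (the open-cluster-based variant applied earlier to the cluster samples has its own functional form and coefficient set). A minimal sketch of applying such a correction is given below; the coefficients are placeholders, not the published values, which must be taken from the calibration paper cited above.

import numpy as np

# Placeholder coefficients p0..p3 -- NOT the published values.
P_PLACEHOLDER = np.array([0.0, 0.0, 0.0, 0.0])

def calibrate_gspspec_mh(mh_raw, logg_raw, coeffs=P_PLACEHOLDER):
    # Additive correction: [M/H]_cal = [M/H]_raw + sum_i p_i * (log g)^i,
    # evaluated with the uncalibrated GSP-Spec log g.
    logg = np.asarray(logg_raw, dtype=float)
    correction = sum(p * logg ** i for i, p in enumerate(coeffs))
    return np.asarray(mh_raw, dtype=float) + correction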
The correction moves the ensemble [M/H] estimates of Apsis to within ≈0.1 dex to the [M/H] values suggested from the isochrone fitting to the Hyades and Pleiades. For individual stars, in particular in the Hyades, GSP-Spec [M/H] (minimal) quantile uncertainties appear to be underestimated, indication that the actual noise floor of the high S/N estimates is considerably higher. While overall the GSP-Phot and GSP-Spec estimates show a weak correlation, individual estimates tend to disagree by substantially more than suggested by the assigned uncertainties. In general, neither GSP-Phot nor GSP-Spec are able to reproduce the [M/H] estimates derived from the isochrone fitting, which are marked by blue open squares. The apparent close match of the weighted mean < [M/H]_GSP-Phot>_ wm = 0.1720± 0.001 for the Hyades is accidental, as it is the result of a selection of stars with both GSP-Phot and GSP-Spec estimates. A sample of 213 stars in the Hyades with GSP-Phot [M/H] estimates, and including 96 stars without GSP-Spec estimates, yields < [M/H]_GSP-Phot>_ wm = 0.034± 0.001. Overall, both GSP-Phot and GSP-Spec tend to systematically underestimate [M/H] of the stars in the Hyades and Pleiades. §.§ Effective temperature and GSP-Phot/GSP-Spec As the spectral characteristics of stars are primarily determined by their effective temperature, it is no surprise that this is also the most robust of the DR3 Apsis parameter estimates. Still GSP-Phot T_ eff estimates seem to systematically fall by 150 K to 250 K below the T_ eff - colour relation suggested by the best fitting isochrones for BP-RP<1.35 mag. Surprising are an apparent bifurcation of T_ eff estimates for stars in the Hyades with 1.35 mag<BP-RP<3.0 mag, and the apparent dearth of stars with 8000 K<T_ eff <9500 K in the Pleiades. Apart from the systematic offsets, the GSP-Phot T_ eff quantile uncertainties appear plausible (Fig. <ref>, left). Overall GSP-Spec estimates are in better agreement with the T_ eff-colour relation according to the PARSEC isochrone than GSP-Phot estimates, though estimates start to diverge for BP-RP>1.35 mag (Fig. <ref>, right). §.§ Surface gravity and GSP-Phot/GSP-Spec GSP-Phot log g estimates overall follow the log g - colour relation suggested by the best fitting isochrones. Of notice is a bifurcation in the log g estimates in the colour range 2.0 mag<BP-RP<3.0 mag, and a systematic underestimate of log g for BP-RP<1.5 mag (Fig. <ref>, left). GSP-Spec log g [cm/s^2] estimates, which have been re-calibrated according to the prescription provided by <cit.> are almost constant at 4.7 over a wide colour range, with a strange dip to ≈4.4 at BP-RP≈1.4 mag for stars in the Hyades. We suspect that the lack of a dimension for rotational line broadening in the GSP-Spec model grid might force the parameter estimates in particular for fast rotating stars in the Pleiades to higher apparent log g values (Fig. <ref>, right). §.§ Limitations of our method While the isochrone fits, and the deduction of stellar astrophysical parameters from the isochrones provide robust estimates of stellar properties, there are also some inherent limitations to our approach: i) the χ^2 isochrone fitting is based on the assumption of uniform extinction to all stars in either of the clusters.[We note that due to their vicinity, the line-of-sight extinction is very small towards the Hyades (A_ V=3 mmag), and modest for the Pleiades (A_ V=120 mmag).] Apsis GSP-Phot, on the other hand, fits individual line-of-sight extinction estimates for each star. 
While this potentially provides more accurate fits, it also opens an additional dimension for fitting errors due to a strong correlation between the uncertainties in T_ eff and A_ V estimates <cit.>. ii) our bona fide single star sequences are still contaminated by sources with hidden binarity. These are typically close binary systems with mass ratios and/or brightness ratios <1:2. We expect the contamination to be more severe for the Pleiades sample due to its ≈3 times larger distance, which reduces the sensitivity of binary detection and rejection based on the RUWE value. Still, the bias induced by the fainter and redder secondary on the parameter estimation in such systems should be modest, and cannot explain the systematics encountered in the Apsis estimates. iii) we employ non-rotating isochrones. Stellar rotation affects in particular the upper main sequence and main sequence turn-off region in the Hyades and Pleiades, which are also the regions most sensitive to the isochronal ages. For the majority of the stars in our sample, the effect on the determined astrophysical parameters should be minimal, though. iv) the PARSEC models follow a fixed Y(Z) abundance relation. This, however, affects both our estimates and Apsis. We note that a deviation of the [Y/Z] abundance ratio from the relation would in particular change the energy production rate of the low-mass stars <cit.>. § RECOMMENDATIONS AND OUTLOOK While Gaia offers unprecedented quality and precision in its astrometric, photometric, and spectroscopic measurements, we identify tension between Gaia DR3 Apsis estimates and astrophysical properties deduced by us for ≈1500 bona fide single stars in the Hyades and Pleiades open clusters. Apsis provides very good to fair estimates of the effective temperature of stars, and potentially of the ensemble [M/H] of homogeneous stellar groups. Both GSP-Phot and GSP-Spec, though, seem to underestimate the ensemble [M/H] of the two nearby open clusters by 0.1 to 0.2 dex. As recommended by <cit.>, one should not use the GSP-Phot [M/H] for individual stars. GSP-Phot log g in general is sufficiently precise to deduce luminosity classes, while GSP-Spec log g [cm/s^2] values for stars in the open clusters converge around 4.7 over a large range of spectral types. Our assessment of Gaia DR3 Apsis and the resulting recommendations are summarized in Table <ref>. We have also identified a few potential issues in the underlying assumptions and training sets used for Gaia DR3 Apsis: i) GSP-Phot employs PARSEC isochrones. FLAME relies on BaSTI models, whose isochrones closely resemble MESA isochrones, but disagree with PARSEC isochrones for solar-metallicity stars with masses ≲0.7 M_⊙. This model mismatch might bias FLAME estimates. ii) The GSP-Spec spectral model grid lacks a dimension for rotational broadening, hence fast-rotating stars might get assigned too high log g estimates. iii) FLAME age estimates reproduce the training model. The ages start at 100 Myr according to the online documentation, while the youngest (16th quantile) ages assigned to Pleiades members are 200 Myr. FLAME lacks the ability to classify the youngest (age ≲150 Myr) Galactic population (popID = 0 according to <cit.>). Addressing these issues might help to improve the quality of Apsis estimates. Overall, DR3 Apsis seems to assign uncertainties to its estimates that are too small.
Thus Gaia DR3 Apsis cannot replace more dedicated efforts to determine stellar properties <cit.>, or, e.g., provide the level of accuracy required for the characterisation of exoplanet host stars. We expect that the sample of 1500 bona fide single stars in the Hyades <cit.> and Pleiades open clusters with homogeneous astrophysical parameter estimates derived from PARSEC evolutionary models might also be highly useful for future benchmarks of theoretical and observational studies. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. We acknowledge the use of TOPCAT <cit.>.
http://arxiv.org/abs/2306.10557v3
20230618134122
The chow weight structure for geometric motives of quotient stacks
[ "Dhyan Aranha", "Chirantan Chowdhury" ]
math.AG
[ "math.AG" ]
Dhyan Aranha, Chirantan Chowdhury
[email protected], [email protected]

The Chow weight structure for geometric motives of quotient stacks

We construct the Chow weight structure on the derived category of geometric motives DM_gm([X/G], Λ) for X a quasi-projective scheme over a field of characteristic 0, G an affine algebraic group and Λ an arbitrary commutative ring. In particular we also show that the heart of this weight structure recovers the category of Chow motives on [X/G].

§ INTRODUCTION The notion of weight structure on a triangulated category was introduced by Bondarko in <cit.> and independently by Pauksztello <cit.> (under the name of "co-t-structures"). In <cit.> and <cit.> Chow weight structures for the derived category of Beilinson motives were constructed, and in <cit.> Chow weight structures for DM_cdh(-, Λ) were constructed where Λ is a general ring such that the characteristic of the base is invertible. The motivation for this note comes from the works of <cit.>[Rem. II.4.15] and <cit.>[Rem. 4.8], where it is asked if there is a general way to put a Chow weight structure on the derived category of (geometric) motives for quotient stacks. In this article we propose a definition for DM_gm([X/G], Λ), the derived category of geometric motives over a stack [X/G] where X is assumed to be quasi-projective (<ref>). Roughly speaking, DM_gm([X/G], Λ) is the thick subcategory of DM([X/G], Λ) generated by (Tate twists of) motives of stacks which are smooth and quasi-projective over [X/G]. Our justification for this definition is that it is equivalent to the usual definition <cit.>[Def. 2.3] when G is trivial (<ref>). Our first main theorem is Theorem Suppose that 𝒳 = [X/G] where X is a quasi-projective scheme over a field k of characteristic 0 and G is an affine algebraic group acting on X. Let Λ be any commutative ring. Then the ∞-category of geometric motives DM_gm(𝒳, Λ) admits a Chow weight structure w_Chow. The reason we call the weight structure constructed in Theorem <ref> the Chow weight structure is because of our second main theorem Theorem Suppose we are in the setup of Theorem <ref>. Then there is an equivalence DM_gm(𝒳, Λ)^{♡_w} ≃ CHM(𝒳, Λ), where CHM(𝒳, Λ) denotes the category of classical Chow motives over 𝒳 (see Definition <ref>). Theorem <ref> appears as <ref> in the main text.
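Since weight structures are used throughout, we recall one common formulation of the definition for orientation; this is a standard statement in the homological sign convention, and the precise convention recalled later in the paper may differ by a shift of signs. A weight structure w on a triangulated (or stable ∞-) category \mathcal{C} consists of two retract-closed full subcategories \mathcal{C}_{w\le 0} and \mathcal{C}_{w\ge 0} such that:

(i) \mathcal{C}_{w\ge 0}[1] \subseteq \mathcal{C}_{w\ge 0} and \mathcal{C}_{w\le 0}[-1] \subseteq \mathcal{C}_{w\le 0};
(ii) \operatorname{Hom}(X, Y[k]) = 0 for all X \in \mathcal{C}_{w\le 0}, Y \in \mathcal{C}_{w\ge 0} and k > 0, i.e. the mapping spectrum from \mathcal{C}_{w\le 0} to \mathcal{C}_{w\ge 0} is connective;
(iii) every X fits into a distinguished triangle X_{w\le 0} \to X \to X_{w\ge 1} \to X_{w\le 0}[1] with X_{w\le 0} \in \mathcal{C}_{w\le 0} and X_{w\ge 1} \in \mathcal{C}_{w\ge 0}[1].

The heart is \mathcal{C}^{\heartsuit_w} = \mathcal{C}_{w\le 0} \cap \mathcal{C}_{w\ge 0}; for the Chow weight structure of the first theorem above, the second theorem identifies (the homotopy category of) this heart with Chow motives.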
The assumption that the field k is of characteristic zero in Theorem <ref> is needed in two places in this work: Firstly, in order to prove the existence of proper *-pushforwards for DM_gm(-, Λ) we rely on the results of <cit.>. Secondly, because we need to use G-equivariant resolution of singularities. In particular, to get this result in characteristic p different arguments are needed, and we think this is an interesting question. In fact in both <cit.> and <cit.> weight structures were constructed on various subcategories of DM([X/G], Λ) under various assumptions on the action of G on X. We believe that with an appropriate version of Chow's lemma for stacks one can show that our category DM_gm([X/G], Λ) is equivalent to the category ^Spr_G (X, Λ) of <cit.> when the base field k is of characteristic 0. We now give a general outline of the article. In <ref> we introduce the ∞-category DM(𝒳, Λ) and its six-functor formalism for a general class of stacks (so-called Nis-loc stacks <cit.>). We also explain the equivalence DM(𝒳, Λ) ≃ Mod_{HΛ_𝒳}(SH(𝒳)) when 𝒳 is Nis-loc, which is presumably well known to experts, but we could not find a reference for this in the literature. In <ref> we record various descent results which will be used to construct the Chow weight structure. Most important will be the fact that DM(-, Λ) on the category of Nis-loc stacks has cdh-descent (see <ref>). This will be used together with the existence of equivariant resolutions of singularities in characteristic 0 to show that the ∞-category of Chow motives (see <ref>) generates the derived category of geometric motives for a quotient stack in a suitable sense. In <ref> we introduce the category of geometric motives, DM_gm(𝒳, Λ), and consider how various operations in the six functor formalism on DM(𝒳, Λ) restrict to DM_gm(𝒳, Λ). One of the main results in this section is that DM_gm(𝒳, Λ) is stable under projective *-pushforwards. This result relies crucially on the results of <cit.>. The other important result in this section is that we show that the ∞-category of Chow motives generates DM_gm(𝒳, Λ): <ref>. In <ref> we explain the connectivity of the mapping spectrum between any two objects of the ∞-category of Chow motives of a quotient stack. Along the way we will also explain the equivalence π_0 Map_{DM(𝒳, Λ)}(1_𝒳(s)[2s+t], f^! 1_B) ≃ CH_s(𝒳, t)_Λ for a quotient stack 𝒳 = [X/G], which is well known in the case that X is smooth <cit.>[Thm. 2.2.10] and <cit.>[12.4]. In <ref> we remind the reader about the definition of weight structures and prove the existence of the Chow weight structure for quotient stacks: <ref>. Finally in <ref> we explain the identification of the homotopy category of the weight heart of the Chow weight structure on DM_gm([X/G], Λ) with the classically defined category of Chow motives. As expected in <cit.>[II.4.15], when 𝒳 = BG for G a linear algebraic group over a field k and Λ = 𝐐, Theorem <ref> gives an identification of the weight heart of our weight structure with Laterveer's category of equivariant motives <cit.> (see <ref>). §.§ Acknowledgements We would first like to take the opportunity to thank Alessandro D'Angelo for many conversations about the material in this note and for pointing out an important mistake in an earlier incarnation of this work. We would also like to thank Marc Levine for patiently answering several asinine questions about motivic homotopy theory and Jochen Heinloth for many conversations about stacks. §.§ Notation We will denote stacks by calligraphic letters 𝒳, 𝒴, 𝒵, etc.
and denote schemes/algebraic spaces by the letters X, Y Z.. etc. All of our geometric objects will live over a base scheme B = (k) where k is an algebraically closed field. We will denote by Λ a arbitrary commutative ring, such that if char(k) = p > 0 we assume that p is invertible in Λ. We will assume all of our stacks have affine diagonal and are of finite type over B. In particular by <cit.>[Thm. 1.2] all our stacks are Nis-loc (see <ref>). We will often still refer to the Nis-loc stack hypothesis anyway in many statements to reassure the reader. § DM FOR ALGEBRAIC STACKS We will begin by recalling the construction of the category (, Λ) and it's six functor formalism. Given a finite type B-scheme S, and a commutative ring Λ, it follows from the work of <cit.> and <cit.> that there is a well defined motivic Eilenberg-MacLane spectrum HΛ_S. We make the following definition of the derived category of motives for finite type schemes over B. Let X be a finite type scheme over B, and Λ an arbitrary commutative ring. We define the derived category of motives with coefficients in Λ to be (X, Λ) := _HΛ_X ((X)). The category (X, Λ) is equivalent to the category _cdh (X, Λ) by <cit.>[Thm. 5.1]. In particular it has a six functor formalism which we will recall shortly. In the case that Λ = ℚ it follows from <cit.> that (X, Λ) is equivalent to the category of Beilinson motives. We now summarize the 6-functor formalism on schemes for (- , Λ) which follows from <cit.> and <cit.>. There is a functor ^* : (Sch^ft_B) ^→(_, Λ) from the category of finite type B-schemes to the ∞-category of presentable stable Λ-linear symmetric monoidal ∞-categories. Which sends X to the ∞-category (X, Λ), and f: X → Y to f^*. This defines a motivic category in the sense of <cit.> which has a six functor formalism: * For every f: X→ Y the monoidal functor f^* admits a right adjoint f_*. * For every smooth f : X → Y smooth, the functor f^* admits a left adjoint f_# * For every separated morphism f:X → Y there is a functor f^! : (Y, Λ) →(X, Λ) which admits a left adjoint f_!. * There are adjoint functors (⊗, ). * For f: X → Y, there exists a natural transformation α_f : f_! → f_* which is an equivalence when α is proper. * (Purity) For any smooth separated morphism f: X → Y of relative dimension d, there is a canonical natural isomorphism 𝔭_f: f^* ≃→ f^! (-d)[-2d]. * (Base Change) If (TL) at (0 , 1.2) X'; (TR) at (1.2, 1.2) Y'; (BL) at (0, 0)X; (BR) at (1.2,0)Y; [->] (TL) – (TR) node [midway, above] f'; [->] (TL) – (BL)node [midway, left] g'; [->] (TR) – (BR)node [midway, right] g; [->] (BL) – (BR) node [midway, below] f; is a cartesian square and g is separated, then we have the following canonical equivalences: g'_! f'^* ≃ f^* g_! f'_*g'^! ≃ g^! f_*. * (Projection formula) For any separated finite type morphism f: Y → X we have the following equivalences f_! (ℱ⊗ f^* (ℰ)) ≃ f_! (ℱ) ⊗ℰ f^! _(X, R) (ℰ, ℰ') ≃ _(Y, R)(f^*ℰ, f^!ℰ'). * (Localization) For i: Z ↪ X a closed immersion with open complement j: U ↪ X we have the following cofiber sequences i_! i^! →𝕀→ j_* j^* j_! j^! →𝕀→ i_* i^*. * (Absolute purity) For any closed immersion i: Z ↪ X between regular schemes of codimension c there is an isomorphism i^! 1_X ≃ 1_Z (-c)[-2c] In order to avoid overloading notation we will simply write (-, Λ) for ^*(-, Λ). Our goal now is to describe an extension of the functor (-, Λ) to a certain class of stacks introduced by Chowdhury <cit.> called Nis-loc stacks. We will first recall the definition. 
We say that an algebraic stack admits Nisnevich-local sections if there exists a morphism x : X → such that X is a scheme and for any morphism y : Y → with Y a scheme, the induced map x': X ×_ Y → Y admits Nisnevich-Local sections. We say that an algebraic stack is Nis-loc if there exists a smooth cover which admits Nisnevich-local sections. We will denote the ∞-category of Nis-loc stacks by The following example is from <cit.>[Cor. 2.3.6]. Let X be a finite type scheme over B and G an affine algebriac group. Then [X/G] is a Nis-loc stack. By <cit.>[Thm. 1.2] any quasi-separated, finite type algebraic stack over B with separated diagonal is Nis-loc. One can construct an extension of (-, Λ) to all locally finite type algebraic stacks over B by considering the so called lisse-extension as introduced in <cit.>[Constr. 12.1]. However, the Čech nerve of an arbitrary smooth cover will not be confinal in general and so we cannot compute this extension along arbitrary smooth covers. The reason for introducing the notion of Nis-loc stack is that they provide a convenient class of stacks where we can compute (- ,Λ) along Čech nerves of Nis-loc covers. The functor (-, Λ) extends to an ∞-sheaf _ (-, Λ): ^→(_, Λ). Moreover, for any ∈ with a schematic Nis-loc atlas π: X → we can compute _ (, Λ) on the Čech nerve of π. That is _ (, Λ) ≃( (X, Λ) ⇉ (X ×_ X , Λ) ⋯). The proof is the same as <cit.>[Cor. 2.5.1] with replaced by (- , Λ). Note that as is a Nisnevich sheaf, by <ref> we see that (-,Λ) is a Nisnevich sheaf so we can apply <cit.> in mimicking the proof of <cit.>. We emphasize that the key result <cit.>[Thm. 2.4.1] is proved in the generality of an arbitrary ∞-sheaf. Let be a Nis-Loc stack. We define the derived category of motives over with coefficients in an arbitrary commutative ring Λ to be (, Λ) : = _ (, Λ). We would like to take the opportunity to point out that the six functor formalism for of algebraic stacks has been considered in the literature in various places. In the case of quotient stacks there is <cit.>[Ch. I]. For general stacks over 𝐐 there is <cit.>, and for general coefficients <cit.> [12.1]. Also there is the work <cit.> in the setting of derivators. The next couple of remarks record how the category defined in <ref> compares with other constructions in the literature. As alluded too above, By <cit.>, one could just as well for an arbitrary locally finite type stack over B define _(, Λ) : = _(T,t)(T, Λ) where the limit is taken over the ∞-category Lis_ of pairs (T, t) where T is a scheme and t: T → is a smooth morphism. The same arguments used in <cit.>[Cor. 2.5.4] and <cit.>[Cor. 12.2.8] show that when is Nis-loc the categories _(, Λ) and (, Λ) are equivalent. We can also define for an arbitrary locally finite type stack over B the category ^!(, Λ) : = _Lis_^!(S, Λ) in _, Λ. When is Nis-loc, the purity isomorphism implies that (, Λ) ≃^! (, Λ). In particular when Λ = 𝐐 and is Nis-loc then <ref> agrees with the the derived category of motives constructed in <cit.>. Next we would like to give a more global description of (, Λ). That is we would like to describe (, Λ) as a category of modules in () over some motivic 𝐄_∞-ring spectrum. To do this we will first start with a purely categorical statement. Suppose that 𝒞^⊗∈ (Cat^⊗_∞) is the limit of a diagram q : I → (Cat^⊗_∞). Let (𝒞) be as in <cit.>[Def. 3.3.3.8], then we have a canonical equivalence (𝒞) ≃_i ∈ I(𝒞_i) where q(i) := 𝒞^⊗_i. 
The ∞-category of modules associated to a symmetric monoidal ∞-category 𝒞 is equivalent to the ∞-category Alg_𝐏𝐟^⊗(𝒞) of algebra objects associated to the ∞-operad 𝐏𝐟^⊗ (<cit.>). Thus we can realize (𝒞) as a full subcategory of the functor category Fun(𝐏𝐟^⊗,𝒞^⊗) spanned by objects p : 𝐏𝐟^⊗→𝒞^⊗ which commute with the usual projection maps to N(Fin_*). Firstly, we see that we have the following chain of equivalences : (𝐏𝐟^⊗,𝒞^⊗) ≃(𝐏𝐟^⊗,_i ∈ I𝒞_i^⊗) ≃_i ∈ I(𝐏𝐟^⊗,𝒞_i^⊗). In order to get the equivalence on the level of module categories, we are reduced to check if {p_i: 𝐏𝐟^⊗→𝒞_i^⊗}_i ∈ I is a compatible family of morphisms commuting to N(Fin_*), then the limit morphims p : 𝐏𝐟^⊗→𝒞^⊗ commutes with projection to N(Fin_*). This is because (Cat^⊗_∞) admits limits (<cit.>). Let be a Nis-loc stack and Λ an arbitrary commutative ring. Then there is a canonically defined object HΛ_∈(()) whose restriction along any morphism f: U → from a scheme U is canonically equivalent to HΛ_U ∈((U)). Consider the ring spectrum HΛ_B ∈((B)) constructed in <cit.> and <cit.>. We define HΛ_ to be f^* HΛ_B. Where f:→ B is the structure morphism. It follows directly from the definition of () <cit.>[Cor. 2.5.1] that f^* is a symmetric monoidal functor and thus HΛ_ is contained in (()). Then via <cit.>[Lem. 20] it follows that HΛ has the desired property. Let be a Nis-loc stack over B and Λ an arbitrary commutative ring. Then we have the following canonical equivalence _HΛ_ (()) ≃(, Λ). As is Nis-loc, for an atlas x: X → admitting Nisnevich-local sections, we have the following equivalence: () ≃( (X) ⇉(X ×_ X) ⋯). Applying <ref> to 𝒞= (), I = N(Δ), 𝒞_i= (X^i_), where X^i_:= X ×_ X ⋯_(i+1)-times X, we get the equivalence, (()) ≃_i∈Δ((X^i_)). Taking the fiber of the equivalence over the canonical Eilenberg-Maclane spectrum HΛ_≃_i ∈Δ HΛ_X^i_ (<ref>), we get that _HΛ_(()) ≅_i ∈ I_H(X^i_)((X^i_). By definition of on the level of schemes along with <ref>, we get that _HΛ_(()) ≃(,Λ) completing the proof. One could envision another construction of (, Λ) more along the lines of <cit.>[Def. 11.1.1]. That is consider the category of stable motivic complexes on a given Nis-loc stack. It would be interesting to compare this with <ref>. We will now explain how the six functor formalism for (-, Λ) on schemes generalizes to . (4-functors) The functor (- ,Λ) : ^→(_, Λ) has the following 4-functor formalism: * For every morphism f: → in we have a pair of adjoints (f^*, f_*), such that f^* is symmetric monoidal. * For every in there are functors - ⊗ - : (, Λ) ×(, Λ) →(, Λ) (-, -) : (, Λ) ^×(, Λ) →(, Λ) which form an adjoint pair (⊗, ). i.e. (, Λ) is a closed symmetric monoidal ∞-category. The existence of f^* and the fact that it is symmetric monoidal follows directly from <ref>. Moreoever since f^* is colimit preserving by Lurie's adjoint functor theorem <cit.>[Cor. 5.5.2.9] there exists a right adjoint which we denote by f_*. To see that (, Λ) is a closed symmetric monoidal ∞-category one can simply use the proof of <cit.>[Prop. 2.5.6] replacing with . The next proposition records the existence of f_# for smooth and representable morphisms and its properties under base change. This will be important for defining the category of geometric motives. Let f: → be a smooth representable morphism in . Then f^* admits a left adjoint f_#. 
Moreover for any cartesian square (TL) at (0 , 1.2) '; (TR) at (1.2, 1.2) '; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] f'; [->] (TL) – (BL)node [midway, left] g'; [->] (TR) – (BR)node [midway, right] g; [->] (BL) – (BR) node [midway, below] f; in , where f is smooth and representable and g is arbitrary, we have an equivalence g^*f_#≃ f'_# g'^*. The proof of <cit.>[Prop. 4.1.2] goes through in this situation with replaced with . (Tate twists) Let be a Nis-loc stack and p: 𝐆_m×→ be the projection. Let ℱ∈(, Λ), then the map induced by the counit p_# p^* ℱ[-1] →ℱ[-1] is a split monomorphism. We denote the complementary summand by ℱ (1). It follows by an easy descent argument that the functor ℱ↦ℱ(1) is invertible. In particular we have Tate twists for all n ∈𝐙. We now record the existence of exceptional functors, base change, projection formulas and proper pushforward formulas. For any locally of finite type morphism f: → of Nis-loc stacks there exist functors f_! : (, Λ) →(, Λ) f^! : (, Λ) →(, Λ) which form an adjoint pair (f_! , f^!) and satisfy: * (Projection formula) Let ℰ, ℰ' ∈(, Λ) and ℱ∈(, Λ) we have the following equivalences f_! (ℱ⊗ f^* (ℰ) ≃ f_!(ℱ)⊗ℰ f^! _(, Λ) (ℰ, ℰ') ≃ _(, Λ) (f^* ℰ, f^! ℰ'). * (Base change) If (TL) at (0 , 1.2) '; (TR) at (1.2, 1.2) '; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] f'; [->] (TL) – (BL)node [midway, left] g'; [->] (TR) – (BR)node [midway, right] g; [->] (BL) – (BR) node [midway, below] f; is a cartesian square in , of locally finite type morphisms. We have the following equivalences g'_! f'^* ≃ f^* g_! f'_* g'^! ≃ g^! f_* * (Proper pushforward) If f: → is proper and representable, then there exists a natural isomorphism α_f : f_! ≃ f_*. The proof of this will be contained in forthcoming work <cit.>. (Purity) The Nisnevich sheaf _ (- , Λ) on is oriented. Moreover we have * For any smooth representable morphism of relative dimension d, there is a natural isomorphism f^! ≃ f^* (d)[2d]. * For a closed immersion i: ↪ between regular Nis-loc stacks of codimension c we have i^! 1_≃ 1_ (-c)[-2d]. For the claim about the orientation we refer to reader to <cit.>[Rem. 1.7]. The proof of <cit.>[4.4.1] works in this case with replaced by . We note that the separated hypothesis is not needed in loc. cit. because on the level of B-schemes of finite type we already have the existence of the exceptional functors. We note that (2) is a direct consequence of (1). (Localization) Let be a Nis-loc stack. Suppose j: ↪ is an open immersion with closed complement i : ↪ then we have the following cofibers i_! i^! →𝕀→ j_* j^* j_! j^! →𝕀→ i_* i^*. The same proof as <cit.>[Prop. 4.2.1] works with replaced by (-, Λ). We note that in this case the inclusions i and j are representable thus the proof of <cit.>[Thm. 3.1.1] applied to (-, Λ) will also construct the exceptional functors. (Homotopy invariance) For any Nis-loc stack , the projection π: 𝐀^1_→ induces a fully-faithful functor π^* : (, Λ) →(𝐀^1_, Λ). The same proof of <cit.>[Prop. 4.2.2] works with replaced by . (Recollement) Suppose we have a diagram in j↪i↩:= - where j is an open immersion and i is a closed immersion. Then for (-, Λ) the following conditions are satisfied: * The functor i_* ≃ i_! admits a left adjoint i^* and a right adjoint i^!. * The functor j^* ≃ j^! admits a right adjoint j_* and a left adjoint j_!. * There is an equivalence j^*i_* ≃ 0. * We have the following localization triangles i_! i^! →𝕀→ j_* j^* j_! j^! →𝕀→ i_* i^*. 
* The functors i_*, j_* and j_! are all full embeddings. Both (1) and (2) follow <ref>, <ref> and <ref>. Claims (3) and (5) follow from the base change equivalence in <ref>. Finally (4) is <ref>. § DESCENT RESULTS In this section we discuss cdh-descent and Nisnevich descent for (-, Λ). Khan in <cit.> has shown that (-, Λ) when restricted to algebraic spaces satisfies cdh-descent. What we say in this section follows easily from Khan's work <cit.> but we will review the arguments here for completeness. For our purposes it will not be necessary to consider cdh-squares and Nisnevich squares for arbitrary morphisms of stacks, we will only need to consider representable cdh and Nisnevich squares. Recall from <cit.> that the constructible topology on AlgSp_B is the coarsest topology such that * The empty sieve covers the empty algebraic space. * If Z ↪ X is a closed immersion with open complement U ↪ X, { U ↪ X, Z ↪ X} generates a covering sieve. Let { f_i : U_i → S} be a constructible cover of S in AlgSp_B. Then the family of functors { f_i^* :(S, Λ) →(U_i, Λ) } is conservative This follows directly from <ref>. The following is <cit.>[Thm. 2.51] Suppose that cartesian square (TL) at (0 , 1.2) T; (TR) at (1.2, 1.2) Y; (BL) at (0, 0)Z; (BR) at (1.2,0)X; [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] g; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] i; (TL) at (0 , 1.2) T; (TR) at (1.2, 1.2) Y; (BL) at (0, 0)U; (BR) at (1.2,0)X; [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] (resp. g; [->] (TR) – (BR)node [midway, right] f ); [->] (BL) – (BR) node [midway, below] j; in AlgSp_B is a cdh-square (resp. Nisnevich square). Then we have a canonical equivalence (X, Λ) ≃(Z, Λ) ×_(T, Λ) (Z, Λ) (resp. (X, Λ) ≃(U, Λ) ×_(T, Λ) (Y, Λ)) of ∞-categories. The proof is the same as the proof <cit.>[Prop. 6.24]. We prove the cdh-statement the Nisnevich result is analogous. That is, by <cit.> it is enough to show * The pair (i^*, f^*) is conservative. * Given ℱ_Z ∈(Z, Λ), ℱ_Y ∈(Y, Λ), ℱ_T ∈(T, Λ) and g^* (ℱ_Z )≃ℱ_T ≃ k^*(ℱ_Y), if ℱ_X = i_* ℱ_Z ×_(fk)_* ℱ_T f_* ℱ_Y, then the maps i^* ℱ_X →ℱ_Z and f^* ℱ_X →ℱ_Y induced by the canonical projections are equivalences. Part (1) follows directly from <ref>. Part (2) follows from first noting that proper base change <ref> and the fact that i_* is fully faithful implies that i^*ℱ_X →ℱ_Z is an equivalence. To see the later equivalence we note that it follows via smooth base change <ref>. The next definitions are natural generalizations of the notions of cdh-square and Nisnevich square to . A cartesian diagram of algebraic stacks in (TL) at (0 , 1.2) ; (TR) at (1.2, 1.2) ; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] g; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] i; is called a representable cdh-square if: * The morphism f is representable proper and surjective . * The morphism i is a closed immersion. * The restriction of f to - is an isomorphism. We say that a cartesian diagram of algebraic stacks in (TL) at (0 , 1.2) ; (TR) at (1.2, 1.2) ; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] g; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] j; is called a representable Nisnevich square if: * The morphism f is representable étale morphism. * The morphism j is an open immersion. 
* The restriction of f to (-)_red is an isomorphism. Given a cdh-square (resp. Nisnevich square) (TL) at (0 , 1.2) ; (TR) at (1.2, 1.2) ; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] g; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] i; (TL) at (0 , 1.2) ; (TR) at (1.2, 1.2) ; (BL) at (0, 0); (BR) at (1.2,0); [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] (resp. g; [->] (TR) – (BR)node [midway, right] f ); [->] (BL) – (BR) node [midway, below] j; in . There is a canonical equivalence (, Λ) ≃(, Λ) ×_(, Λ) (, Λ) (resp. (, Λ) ≃(, Λ) ×_(, Λ) (, Λ)) of ∞-categories. We will only prove the cdh-case, the Nisnevich case is entirely analogous. Let π: X → be a Nis-loc atlas of . Then by <ref> there is an equivalence (, Λ) ≃_n ∈Δ(X_n, Λ). But now for each n ∈Δ the induced square (TL) at (0 , 1.2) T_n; (TR) at (1.2, 1.2) Y_n; (BL) at (0, 0)Z_n; (BR) at (1.2,0)X_n; [->] (TL) – (TR) node [midway, above] k_n; [->] (TL) – (BL)node [midway, left] g_n; [->] (TR) – (BR)node [midway, right] f_n; [->] (BL) – (BR) node [midway, below] i_n; is a cdh-square of algebraic spaces (resp. Nisnevich square). Thus by <ref> there is a canonical equivalence (X_n, Λ) ≃(Z_n, Λ) ×_(T_n, Λ)(Y_n, Λ). We can then rewrite (, Λ) as (, Λ) ≃_n ∈Δ(Z_n, Λ) ×_(T_n, Λ)(Y_n, Λ). But now since limits commute with fiber products we are done. Given cdh-square as in <ref>, and setting a ≃ ig ≃ fk. For any ℳ∈(, Λ) we can form a commutative square (TL) at (0 , 1.2) ℳ; (TR) at (2, 1.2) f_* f^* (ℳ); (BL) at (0, 0)i_* i^*(ℳ); (BR) at (2,0)a_* a^*( ℳ); [->] (TL) – (TR); [->] (TL) – (BL); [->] (TR) – (BR); [->] (BL) – (BR); by considering the various unit and counit natural transformation associated to the adjunctions involved. The square in <ref> is cartesian. We will also be interested in the case of a Nisnevich square as in <ref>. That is, again setting a ≃ jg ≃ fk, for any ℳ∈(, Λ) we can form the commutative square (TL) at (0 , 1.2) ℳ; (TR) at (2, 1.2) f_* f^* (ℳ); (BL) at (0, 0)j_* j^*(ℳ); (BR) at (2,0)a_* a^*( ℳ).; [->] (TL) – (TR); [->] (TL) – (BL); [->] (TR) – (BR); [->] (BL) – (BR); Applying the functor _(, Λ) (- , 1) we also get a square (TL) at (0 , 1.2) a_# a^*( ℳ); (TR) at (2, 1.2) f_# f^* (ℳ); (BL) at (0, 0)j_# j^*(ℳ); (BR) at (2,0)ℳ.; [->] (TL) – (TR); [->] (TL) – (BL); [->] (TR) – (BR); [->] (BL) – (BR); The square in <ref> is cartesian and hence so is <ref>. <ref> can be used to to show that for geometric motives we get a Mayer-Vietoris cofiber sequence. Suppose that we have a Nisnevich square as in <ref>, where all vertices are smooth and representable over some base Nis-loc stack . Then we have the following cofiber sequence ℳ_ () →ℳ_ () ⊕ℳ_ () →ℳ_ () [1]→ induced by the cartesian square <ref>. One takes ℳ := ℳ_ () in <ref>. The result then follows by smooth base change <ref>. We now explain a descent result for G-equivariant resolutions of singularites. The following result is a direct corollary of <cit.>[Thm. 8.1.2] of Abramovich, Temkin and Włodarczyk and will be critical for the main results of this paper. See also <cit.>[3.4]. Suppose k is of characteristic 0 and let G be a smooth group scheme acting on a reduced quasi-projective X. Then there exists a G-equivariant surjective projective birational morphism X̃→ X such that X̃ is regular over k. Consider the action map a: G × X → X and let X̃ be the resolution of X. Then by smooth functoriality of the resolution in <cit.>[Thm. 
8.1.2] we see that we have a identification G × X≃ G ×_X X̃. But we also have the projection map p_X : G × X → X which is again smooth. Using <cit.>[Thm. 8.1.2] we have the identification G × X≃ G ×X̃. Putting both identifications together defines a group action ã: G ×X̃→X̃ such that the commutative diagram (TL) at (0 , 1.2) G ×X̃; (TR) at (2, 1.2) X̃; (BL) at (0, 0)G × X; (BR) at (2,0)X; [->] (TL) – (TR) node [midway, above] ã; [->] (TL) – (BL); [->] (TR) – (BR); [->] (BL) – (BR) node [midway, above] a; Cartesian. The claim follows. In particular <ref> implies that for a reduced algebraic stack = [X/G] of finite type over a field k of characteristic 0 there is a regular stack = [ X̃/ G] over k, together with a projective birational morphism →. Moreover we have the following: Given a finite type reduced algebraic stack = [X/G] over a field k of characteristic 0, there is a cdh-square (TL) at (0 , 1.2) ; (TR) at (1.2, 1.2) ; (BL) at (0, 0); (BR) at (1.2,0).; [->] (TL) – (TR) node [midway, above] k; [->] (TL) – (BL)node [midway, left] g; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] i; In particular for any ℳ∈_ () the associated square <ref> is cartesian. § GEOMETRIC MOTIVES For this section we will insist on the assumption that the base B = (k) has characteristic 0. Let X a finite type scheme over k. We have the following definition of the category geometric motives. Let _ (X, Λ) be the smallest full stable ∞-subcategory of (X, Λ) which is closed under retracts, and generated by objects { f_# (1_Z) (q) | f: Z→ X smooth, q ∈𝐙}. We call _ (X, Λ) The category of geometric motives over X. Our first observation is that we need not take all smooth morphisms over X when X quasi-projective in <ref>. In fact it is enough to take smooth morphisms which are quasi-projective. The category of geometric motives over a quasi-projective scheme X, can be equivalently described as the smallest full stable ∞-subcategory of (X, Λ) which is closed under retracts, and generated by objects { f_# (1_Z) (q) | f: Z→ X smooth and quasi-projective, q ∈𝐙}. First we note that in <ref> it is enough to consider smooth morphisms f:Z → X such that Z is quasi-projective over B. Indeed, we may cover Z by affines and then inductively use the Mayer-Vietoris sequence. Secondly, if X itself is quasi-projective then the claim follows because any smooth morphism between two quasi-projective schemes is itself quasi-projective. <ref> also holds for more generally for finite type schemes X. Indeed, For a general (reduced) finite type scheme X, we may stratify it by quasi-projective schemes and an induction argument together with the localization triangle suffices to show this. For our main application, we will be interested in stacks [X/G] which are quasi-projective over BG, this together with <ref> motivates our definition of geometric motives over a stack. Let be in , we define _ (, Λ) to be the smallest full stable subcategory of (, Λ) which is closed under retracts, and generated by objects { M_ ()(q) | smooth, representable and quasi-projective over , q ∈𝐙} where M_ () : = f_# (1_) for f: →. The perhaps more natural variant of <ref> where the quasi-projective condition is omitted could also be considered. We think it would be interesting to compare these two notions. Ultimately, we chose to use <ref> because it was easier to show that it was well-behaved under *-pushforwards by closed immersions. 
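Written out in full, with M_{\mathcal{X}}(\mathcal{Y})(q) := f_{\#}\mathbf{1}_{\mathcal{Y}}(q) for f\colon \mathcal{Y}\to\mathcal{X} as above, the definition adopted here reads

\mathrm{DM}_{\mathrm{gm}}(\mathcal{X}, \Lambda) \;=\; \big\langle\, M_{\mathcal{X}}(\mathcal{Y})(q) \;\big|\; f\colon \mathcal{Y}\to\mathcal{X} \text{ smooth, representable and quasi-projective},\ q\in\mathbf{Z} \,\big\rangle,

where \langle - \rangle denotes the smallest full stable subcategory of \mathrm{DM}(\mathcal{X}, \Lambda) closed under retracts and containing the listed objects, i.e. the thick stable subcategory they generate.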
In this section we will first establish various results about when the ∞-subcategory _ (, Λ) of geometric motives in (, Λ), is preserved under some of the six operations. Once we have addressed this, we will use these results in the second part of the section to show that the category of geometric motives can be generated by a smaller subcategory whose mapping spectra have good connectedness properties. In particular we will see that in case is a quotient of a quasi-projective scheme by G over a field of characteristic 0, the category of geometric motives can be generated by the subcategory of so called Chow motives. Before we begin we would also like to point out that that while it is very natural to wonder if the objects in _ (, Λ) are compact in _ (, Λ), this is in general not the case. In fact, the example of <cit.>[Ex 4.1.12] in the setting of constructible sheaves works in our setting as well: Let = B 𝐆_m, we claim that 1_B 𝐆_m is not compact in (B𝐆_m, Λ). First by <cit.>[Thm. 2.2.10] we can identify π_0 _(B𝐆_m, Λ) (1, 1(*)[2*]) ≃^* (B 𝐆_m)_Λ≃Λ [x] where x is in degree 1. In particular x lifts to a map x: 1_B𝐆_m→ 1_B𝐆_m (1)[2]. in (B𝐆_m, Λ). We note that if we pull back x along the covering morphism π: pt→ B𝐆_m, we have that π^* x ≃ 0, because x corresponds to c_1 (𝒪 (1)). Thus if we pullback the filtered colimit ℳ := (1_B𝐆_mx→ 1_B𝐆_m (1)[2] x→ 1_B𝐆_m (2)[4] x→⋯) along π we see that π^* ℳ≃ 0, and since π is a smooth cover, it follows that π^* is conservative hence ℳ≃ 0. It follows that _(B𝐆_m, Λ) (1, ℳ) ≃ 0. On the other hand (_(B𝐆_m, Λ) (1, 1) →_(B𝐆_m, Λ) (1, 1(1) [2])→⋯) can be seen to be not equivalent to 0 by looking at for instance π_0 which is Λ[x, x^-1]. The upshot of <ref>, is that is shows that the geometric motives are in general not compact objects in (, Λ). §.§ The six operations and geometric motives For any f: → of stacks, the functor f^* restricts to a functor f^*:_ (, Λ) →_(, Λ) To prove the claim it is enough to check on generators ℳ_()(q) in _(, Λ) thus by considering the cartesian square cartesian square (TL) at (0 , 1.5) ×_; (TR) at (1.5, 1.5) ; (BL) at (0, 0); (BR) at (1.5,0); [->] (TL) – (TR) node [midway, above] g; [->] (TL) – (BL)node [midway, left] q; [->] (TR) – (BR)node [midway, right] p; [->] (BL) – (BR) node [midway, below] f; where ℳ_ () (q) = p_# 1_ (q), from <ref> it follows that f^*(ℳ_()(q)) ≃ f^*p_#1_ (q) ≃ q_#g^*1_ (q). If f: → is smooth and representable then the functor f_# restricts to a functor f_#:_ (, Λ) →_(, Λ). This follows from the fact that for a smooth representable g: →, we have that f_#∘ g_#≃ (f ∘ g)_#. If ℳ, N∈_ (, Λ) then so is ℳ⊗𝒩. The proof is the same as in <cit.>[4.2.3]. Suppose is a finite type Nis-loc stack and that there exists a Zariski cover = ⋃_i _i, then an object ℳ∈(, Λ) is in _(, Λ) if and only if M|__i is in _(_i, Λ). By arguing inductively it is enough to consider the case that = ∪. Via the Nisnevich square (TL) at (0 , 1.5) ×_; (TR) at (1.5, 1.5) ; (BL) at (0, 0); (BR) at (1.5,0),; [right hook->] (TL) – (TR) node [midway, above] j_'; [right hook->] (TL) – (BL)node [midway, left] j'_; [right hook->] (TR) – (BR)node [midway, right] j_; [right hook->] (BL) – (BR) node [midway, below] j_; for each ℳ∈(, Λ), write ℳ_ := j__# j^*_ℳ for = ×_, ,, we get a triangle of motives by <ref> ℳ_×_→ℳ_⊕ℳ_→ℳ[1]→ Then since ℳ_×_ and ℳ_⊕ℳ_ are contained in _ (, Λ) it follows that ℳ∈_(, Λ). For any stack and vector bundle over , tensoring by Th() and Th(-) preserve _(, Λ). This follows from the fact that (, Λ) is oriented i.e. 
- ⊗ Th() ≃ (n) [2n] . Let f: → be a smooth and proper representable morphism in . Then the functor f_* restricts to a functor f_* : _ (, Λ) →_ (, Λ). The corollary follows immediately from <ref>, the equivalence α_f: f_! ≃ f_* of <ref> (3) and purity <ref> together with <ref>. Our goal now is to show that for projective morphisms the lower-* functor preserves geometric objects. We will do this in two steps, first we will show that closed immersions have this property and then use the fact that for a general projective morphism we can factor it by closed immersion and a morphism from a projective space. Let i: ↪ be a closed immersion in and suppose that has the resolution property. Then the functor i_* restricts to a functor i_*: _ (, Λ) →_ (, Λ). Let f_0: _0 → be a smooth quasi-projective representable morphism over . First we assume that _0 is linearly fundamental in the sense of <cit.>[Def 2.7]. Now we apply <cit.>[Thm. 1.3] so that we have a cartesian square (TL) at (0 , 1.5) _0; (TR) at (1.5, 1.5) ; (BL) at (0, 0); (BR) at (1.5,0); [right hook ->] (TL) – (TR) node [midway, above] i'; [->] (TL) – (BL)node [midway, left] f_0; [->] (TR) – (BR)node [midway, right] f; [right hook ->] (BL) – (BR) node [midway, below] i; where f is smooth and representable. Let := -, and consider the localization diagram (TL) at (0 , 1.5) _0; (TM) at (1.5, 1.5) ; (TR) at (3, 1.5) _; (BL) at (0, 0); (BM) at (1.5,0); (BR) at (3, 0) ; [right hook ->] (TL) – (TM) node [midway, above] i'; [->] (TL) – (BL) node [midway, left] f_0; [->] (TM) – (BM) node [midway, right] f; [right hook ->] (BL) – (BM) node [midway, below] i; [left hook ->] (BR) – (BM) node [midway, below] j; [->] (TR) – (BR) node [midway, right] f_; [left hook ->] (TR) – (TM) node [midway, above] j'; which gives rise to a cofiber sequence ℳ_ (_) →ℳ_ () → i_* ℳ_ (_0) [1]→ . Since both ℳ_ (_) and ℳ_ () are geometric this implies that i_* ℳ_ (_0) is geometric which is what we wanted to show. In the general case, for a smooth quasi-projective morphism f: _0 →, since has the resolution property, we know that _0 ≃ [Z_0 /GL_n] for some quasi-projective Z_0. Now applying the G-equivariant Jouanolou's trick <cit.>[Prop. 2.20] we get an affine bundle π: _0 →_0 where _0 is linearly fundamental. But since π^* is fully-faithful the counit π_#π^* ≃→𝕀 is an equivalence and thus ℳ_ (_0) ≃ f_0, #π_#π^* (1__0) ≃ℳ_ (_0). But since _0 is linearly fundamental by the previous case i_* ℳ_ (_0) ∈_ (, Λ) and the claim now follows. Let ι: ↪ be a closed immersion in . Then the functor i_* restricts to a functor ι_*: _ (, Λ) →_ (, Λ). By reduced invariance we may assume that is reduced. Thus, by <cit.> there is a stratification of be stacks with the resolution property. We proceed by induction on the length of the stratification. In the case that itself has the resolution property it follows that since ι :→ is a closed immersion, also has the resolution property and the claim follows by <ref>. 
In the general case, assume that we have a stratification of length n: ∅ = _0 ↪_1 ↪⋯↪_n = Let : = - _n-1 and consider the localization square [commutative diagram: the top row consists of a closed immersion i' and an open immersion j', the bottom row of a closed immersion i and an open immersion j, with vertical comparison maps ι_n-1, ι, and the restriction of ι to the open part]. For ℳ∈_ (, Λ) we have a cofiber sequence j_#ι_, * j'^* (ℳ) →ι_* (ℳ) → i_* ι_n-1, * i'^* (ℳ) [1]→. We now observe that j_#ι_, * j'^* (ℳ) ∈_ (, Λ) by <ref>, <ref> and <ref>, and i_* ι_n-1, * i'^* (ℳ) ∈_ (, Λ) by <ref>, induction, and <ref>. Thus ι_* ℳ∈_ (, Λ). Let f: → be a projective morphism in . Suppose that has the resolution property. Then the functor f_* restricts to a functor f_* : _ (, Λ) →_ (, Λ). Since has the resolution property we may factor f:→ as ι→p→, where ι is a closed immersion and p is a smooth representable proper morphism. The claim now follows from <ref> and <ref>. Let f: → be a projective morphism in . Then the functor f_* restricts to a functor f_* : _ (, Λ) →_ (, Λ). By reduced invariance we may assume that is reduced. Thus, by <cit.> there is a stratification by stacks with the resolution property. We proceed by induction on the length of the stratification. In the case that has the resolution property we are done by <ref>.
Let denote the complement of in Then () < (). Consider the diagram of cartesian squares where Π is a smooth cover (TL) at (0 , 1.5) U; (TM) at (2, 1.5) X; (TR) at (4, 1.5) Z; (BL) at (0, 0) ; (BM) at (2,0) ; (BR) at (4, 0) .; [->] (TL) – (TM) node [midway, above] j'; [->] (TR) – (TM) node [midway, above] i'; [->] (BL) – (BM) node [midway, below] j; [->] (BR) – (BM) node [midway, below] i; [->] (TM) – (BM) node [midway, right] π; [->] (TL) – (BL)node [midway, left] π_; [->] (TR) – (BR)node [midway, right] π_; It follows that since the map π is a continuous and surjective on underlying topological spaces that U is a dense open subscheme of X with complement Z. We will be finished if we show for each z ∈ || that _z () < _z () We are free to pick any lift of the point z to Z. In particular there exists a lift z̃ such that _z̃ (Z) = (Z) and we have that _z () = (Z) - (R_,z). Similarly we may pick any lift of z in X such that _z () = (X) - (R_,z). For each z ∈ || we have a canonical equivlance R_,z≃ R_z where R : = X ×_ X and R_ is the restriction to . But since U is dense in X with complement Z we have that (Z) < (X) and hence _z () < _z (), which is what we wanted to show. Suppose that f:→ is representable and separated and has the resolution property. Then f_! restricts to a functor f_!: _(, Λ) →_ (, Λ) Since has the resolution property it is of the form [Y/GL_n] where Y is quasi-affine. Since f:→ is representable it follows that ≃ [X/GL_n] for X an algebraic space. By reduced invariance we may assume that is reduced. The stack has affine stabilizers is finite type and quasi-separated, and by <cit.>[Prop. 2.6] there exists a stratification of by global quotient stacks which are quasi-projective over BGL_n. We will use induction on the length of the stratification. In the trivial case when is quasi-projective over BGL_n then f:→ is quasi-projective and we are done by <ref>. For a stratification of length n ∅= _0 ↪_1 ↪⋯↪_n = we consider the diagram _n-1i↪j↩ of stacks over . Since and are quasi-projective over BGL_n, it follows that the induced map f|_: → is quasi-projective. Let ℳ∈_ (, Λ) and consider the localization triangle induced by <ref> j_! j^! (ℳ) →ℳ→ i_* i^* ℳ[1]→. Since f_! is an exact functor between stable ∞-categories we get a cofiber sequence f_ ,! j^* (ℳ) → f_! ℳ→ f_n-1, ! i^* ℳ[1]→. Now,<ref> implies that f_ ,! j^* (ℳ) ∈_gm (, Λ) and induction implies that f_n-1, ! i^* ℳ∈_ (, Λ). Thus f_! (ℳ) ∈_ (, Λ) finishing the argument. Recall, that a full subcategory 𝒟 of a stable ∞-category 𝒞 is called thick if it is closed under taking retracts (see <cit.> [4.4.5] for a discussion on retracts and idempotents in the setting of ∞-categories). Let be a finite type Nis-loc stack with affine stabilizers. The category _() is the smallest thick stable ∞-subcategory of (, Λ) generated by the collection of objects 𝒫():= { f_! (1_' (n)) | f: ' → is projective and n ∈𝐙}. Let _proj() be the smallest thick subcategory generated by 𝒫(). By <ref> it follows that _proj() ⊂_(). So we prove the reverse inclusion. For any quasi-projective smooth morphism f: ' → it follows from purity that f_# agrees with f_! up to a Tate twist. Thus it is enough to prove that f_! 1_' for any such f is contained in _proj(). When has the resolution property we are finished by <ref>. In the general case, we can argue by induction on the length of the stratification of by stacks with the resolution property. Note that we may assume that is reduced by reduced invariance. 
That is for a length n stratification ∅= _0 ↪_1 ↪⋯↪_n = we consider the diagram (TL) at (0 , 1.5) '_n-1; (TM) at (1.8, 1.5) '; (TR) at (3.6, 1.5) '; (BL) at (0, 0) _n-1; (BM) at (1.8,0) ; (BR) at (3.6, 0) ; [->] (TL) – (TM) node [midway, above] i'; [->] (TR) – (TM) node [midway, above] j'; [->] (BL) – (BM) node [midway, below] i; [->] (BR) – (BM) node [midway, below] j; [->] (TM) – (BM) node [midway, right] f; [->] (TL) – (BL)node [midway, left] f_n-1; [->] (TR) – (BR)node [midway, right] f_; of cartesian squares where i is a closed immersion and j is an open immersion. By considering the localization triangle from <ref> j_! j^! f_! (1_') → f_! (1_') → i_* i^* f_! (1_') it follows from the base change isomorphisms of <ref>, <ref>, and <ref> that both j_! j^! f_! (1_X') and i_* i^* f_! (1_X') are contained in _(, Λ). Hence we see that f_! (1_') is contained in _(, Λ). The next result says that when is a Nis-loc stack which is quasi-projective over BG then _ (, Λ) is the thick closure of (, Λ). Suppose that = [X/G] where X is a quasi-projective scheme over B and G is affine algebraic group. Then the category (, Λ) generates _ (, Λ) under finite limits, colimits and retracts. Let _ (, Λ) be the smallest thick subcategory of (, Λ) which contains the category (, Λ). We must show that _ (, Λ) is precisely all of _ (, Λ). To this end by <ref>, it will be enough to show that 𝒫() is contained in in _ (, Λ). Consider a projective morphism f: ' →. From our hypothesis on we may take ' ≃ [X'/G] where X' is projective over X, moreover without loss of generality we may assume that ' is reduced. We now proceed by induction on the relative dimension of ' → BG the claim is clear when relative dimension is 0, so we assume it holds for some n >0 and consider ' → BG with relative dimension n+1. We may apply equivariant resolutions of singularities over k, <ref>, to X'. Thus after taking stack quotients by G we arrive at a projective birational morphism [X̃'̃ / G] → [X/G]. The stack [X̃'̃ / G] is smooth over k. Now we consider the cartesian squares (TL) at (0 , 1.5) '; (TM) at (2, 1.5) [X̃'̃ / G]; (TR) at (4,1.5) '; (BL) at (0, 0) ; (BM) at (2, 0) [X'/G]; (BR) at (4,0) ; [left hook->] (BR) – (BM); [left hook->] (TR) – (TM); [->] (TL) – (BL); [->] (TM) – (BM); [->] (TL) – (TM); [->] (BL) – (BM); [->] (TR) – (BR); where is dense open, the right hand side is a cdh-square. In particular since the relative dimension of → BG is strictly less then the relative dimension of ' over BG and we may apply induction hypothesis together with the cofiber sequence induced by <ref>. § MAPPING SPECTRA AND CHOW GROUPS In this section we will study the mapping spectra in (, Λ). Our main goal will be to show that the mapping spectra of (, Λ) are connective but along the way we will identify the Borel-Moore homology of a quotient stack with the equivariant higher Chow groups. In this section we will take k to be a field of arbitrary characteristic, also in many proofs we will suppress the notation for the coefficient ring and often write () with the hope of making things easier to read. Let X be a quasi-projective scheme over B of dimension n equipped with an action of an affine algebraic group G and integers s, t ∈𝐙. We fix a Totaro gadget (U ⊂ V) where V is a G-representation and j: U ⊂ V an open subscheme on which G acts freely and such that the reduced complement ι : Z ↪ V satisfies c:= codim_V Z > n - s and such that the quotient (U × X) / G exists as a scheme. Let l : = V and g : = G. 
Then one can define the equivariant higher Chow groups as ^G_s(X, t) : = _s + l - g ((U × X) / G, t). For the stack associated stack := [X/G] we define the (higher) Chow groups of as _s (, t) : = ^G_s+g(X, t) = _s + l ((U × X) / G, t). Note that from this definition of we automatically have _s (, t) = 0 for s > () = n - g. One checks this definition is well defined in the same way checks the definition of equivariant Chow groups of Edidin-Graham is well defined <cit.>. The next proposition is probably well known and its proof follows a standard way of arguing <cit.>[12.4] and <cit.>[Thm. 2.2.10]. We include it here because it will serve as a warm up for <ref>. Suppose = [X/G] and the integers s, t ∈𝐙 as in the discussion above. Let f: → B be the structure map. We have the following equivalence π_0 _(, Λ) (1_ (s)[2s+t], f^!1_B) ≃_s(, t)_Λ. First we choose an embedding G ↪ GL_r. Fix a Totaro gadget (U,V) with codim_V(Z) > n - s + r^2 -g . Let p: : = [V/G] → BG the induced vector bundle over BG and p_ is base change to . Then by homotopy invariance we have that p^* is fully-faithful. Thus we have an equivalence _() (1 (s)[2s+t], f^!1_B) ≃_(×_BG) (1 (s)[2s+t], p_^*f^!1_B). Let j̅ : := [U/G] → be the map induced by the open immersion j: U → V and j̅_ its base change to . We claim that induced morphism _(×_BG) (1 (s)[2s+t], p_^*f^!1_B) j̅^*_→_(×_BG) (1 (s)[2s+t], j̅_^* p_^*f^!1_B) is an equivalence. Since p_ is smooth, separated and representable, we map apply the purity isomorphism p_ ^! ≃ p^* (l) [2l] which gives _(×_BG) (1 (s+l)[2s+2l+t], p_ ^!f^!1_B) j̅_→_(×_BG) (1 (s+l)[2s+2l+t], j̅_^! p_^!f^!1_B). Writing π:×_BG→ B and σ : ×_BG→ B for the structure maps we can rewrite this is as _(×_BG) (1 (s+l)[2s+2l+t], π^! 1_B) j̅_→_(×_BG) (1 (s+l)[2s+2l+t], σ^! 1_B). To see that <ref> is an equivalence via the localization triangle i_* i^! →𝕀→ j_* j^!, we are reduced to showing that π_0 _(×_BG) (1 (s+l)[r], ι̅^!π^! 1_B) = 0 for all r ∈𝐙. As in <cit.>[Rem. 2.3.7] we may find a Nis-loc atlas: W → (X × Z) ×^G GL_r →×_BG where W is a scheme and the first arrow is an étale surjection. Since we can compute (×_BG) along Čech nerves of Nis-loc atlases it will be enough to show that π_0 _(×_BG) (1 (s+l)[r], ι̅^!π^! 1_B) vanishes on the !-restriction to each term W^a : = W ×_×_BG⋯×_×_BG W_a for a ≥ 0 in the Čech nerve of W →×_BG. Writing η^!_a : (×_BG) →(W^a) for the !-restriction in the Čech nerve, the purity isomorphism gives η^!_a ≃η^*_a (aγ)[2aγ] where γ is the relative dimension of the Nis-loc atlas W →×_BG. We also have η_a^! ι̅^! π^! ≃ h_a^! where h_a :W^a → B the structure map. We are reduced to showing π_0 _(W^a ) (1 (s+l+aγ)[r+2aγ], h_a^! 1_B) = 0 for all a ≥ 0. But now we are in the realm of finite type schemes over a field and we know that π_0 _(W^a ) (1 (s+l+aγ)[r+2aγ], h_a^! 1_B) = _s+l+aγ(W^a, r-s-l) and by our choice of Totaro gadget we have that l + s > n + (Z) +r^2 - g and thus l + s + aγ > n+ (Z) + aγ. In particular because the Chow groups vanish, we conclude that π_0 _(W^a ) (1 (s+l+aγ)[r+2aγ], h_a^! 1_B) = 0 which is what we wanted to show. The next theorem will be important in establishing the weight structure on (, Λ), we will use the symbol to refer to the mapping spectra as opposed to the symbol which denotes the mapping space. Suppose that = [S/G] where S is a finite type scheme over B and G is an affine algebraic group. Let and be smooth stacks over B and projective over and j, m , n ∈𝐙 and let d_ be the dimension of over B. 
π_j _(, Λ) (f_!1_ (m)[2m], g_!1_ (n) [2n]) ≃_d_ -n +m (×_, j), in particular the mapping spectrum _(, Λ) (f_!1_ (m)[2m], g_!1_ (n) [2n]) is connective. First we fix an embedding G ↪ GL_r and we fix a Totaro gadget (U,V) for G so that c:= codim_V (Z) > (X) - (S) +n - m +r^2 - g. Consider the cartesian squares (TL) at (0 , 1.5) ×_BG; (TR) at (1.8, 1.5) ; (BL) at (0, 0)×_BG; (BR) at (1.8,0); (TLL) at (4 , 1.5) ×_BG; (TRR) at (5.8, 1.5) ; (BLL) at (4, 0)×_BG; (BRR) at (5.8,0).; [->] (TLL) – (TRR) node [midway, above] p_; [->] (TLL) – (BLL)node [midway, left] g_; [->] (TRR) – (BRR)node [midway, right] g; [->] (BLL) – (BRR) node [midway, below] p; [->] (TL) – (TR) node [midway, above] p_; [->] (TL) – (BL)node [midway, left] f_; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] p; Combined with base change these give the following equivalences p^* f_!1_≃f__! p_^*1_≃f__! 1_×_BG p^* g_!1_≃g__! p_^*1_≃g__! 1_×_BG. Since p is a vector bundle over it follows by homotopy invariance that p^* is fully-faithful which when combined with <ref> gives an equivalence _() (f_!1_ (m)[2m+j], g_!1_ (n) [2n]) p^*≃_(×_BG) (f__!1 (m)[2m+j], g__!1(n) [2n]). Let j̅: ×_BG→×_BG be the open immersion induced by U ⊂ V. Via the cartesian diagrams (TL) at (0 , 1.5) ×_BG; (TR) at (1.8, 1.5) ; (BL) at (0, 0)×_BG; (BR) at (1.8,0); (TLL) at (4 , 1.5) ×_BG; (TRR) at (5.8, 1.5) ; (BLL) at (4, 0)×_BG; (BRR) at (5.8,0); [->] (TLL) – (TRR) node [midway, above] j̅_; [->] (TLL) – (BLL)node [midway, left] g_; [->] (TRR) – (BRR)node [midway, right] g; [->] (BLL) – (BRR) node [midway, below] j̅; [->] (TL) – (TR) node [midway, above] j̅_; [->] (TL) – (BL)node [midway, left] f_; [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] j̅; we get the equivalences j̅^* f_!1_≃f__! j̅_ ^*1_≃f__! 1_×_BG j̅^* g_!1_≃g__! j̅_ ^*1_≃g__! 1_×_BG. Composing j̅^* with p^* gives a map _() (f_!1_ (m)[2m +j], g_!1_ (n) [2n]) j̅^*p^*→_(×_BG) (f__!1 (m)[2m+j], g__!1(n) [2n]). we claim that <ref> is an equivalence. From considering the localization triangle ι̅_* ι̅^! →𝕀→j̅_* j̅^!, we simply need to show that _(×_BG) (f__!1 (m)[2m+j], ι̅_*ι̅^!g__!1(n) [2n]) ≃ 0. Thus it will be enough to prove the following: π_0 _(×_BG) (f__!1, ι̅^!g_V_!1 (n-m) [r]) ≃ 0 for all r ∈𝐙. Via standard arguments we can reduce to the situation where Z is regular, in which case we have by absolute purity ι^* ≃ι^!(c)[2c]. Via the diagram of cartesian square (TL) at (0 , 1.5) ×_BG; (TM) at (2, 1.5) ×_BG; (TR) at (4, 1.5) ; (BL) at (0, 0) ×_BG; (BM) at (2,0) ×_BG; (BR) at (4, 0) ; [->] (TL) – (TM) node [midway, above] ι̅_; [->] (TM) – (TR) node [midway, above] p_; [->] (BL) – (BM) node [midway, below] ι̅; [->] (BM) – (BR) node [midway, below] p; [->] (TM) – (BM) node [midway, right] g_; [->] (TL) – (BL)node [midway, left] g_; [->] (TR) – (BR)node [midway, right] g; we have the base change equivalence ι̅^! g__! ≃g__!ι̅^!_. which when combined with absolute purity for ι allows us to rewrite the <ref> as π_0 _(×_BG) (f__!1, g__!1 (n-m-c) [r -2c]) ≃ 0. Since ×_BG≃ [ (Z × S)/G], by <cit.>[Rem. 2.3.7] we have a Nis-loc atlas W →×_BG. Hence we can compute DM(×_BG) via the Čech nerve ⋯ W^2: = W ×_×_BG W ⇉ W →×_BG. Thus it is enough to show <ref> on the restriction to each (W^q). We write π^*_q for the restiction (×_BG)→(W^q). Then we can write the mapping space _(W^q) (π_q^* f__!1, π_q^* g__!1 (n-m-c) [r -2c]), as _(W^q) (f__q _!1, g__q_!1 (n-m-c) [r -2c]), where f__q : W_ ^q → W^q (resp. 
g__q) is the base change of f_ along the map π_q: W^q →×_BG. We must show π_0 _(W^q) (f__q _!1, g__q_!1 (n-m-c) [r -2c]) = 0. Via <cit.>[Lem. 2.37] we have the equivalence π_0 _(W^q) (f__q _!1, g__q_!1 (n-m-c) [r -2c]) ≃ H^BM_2d_q -r +2c,d_q -n + m +c (W^q_×_W^q W^q_) where d_q := (Y) + (Z) + r^2 - g + qγ. Now comparing with the Chow groups H^BM_2d_q -r +2c,d -n + m +c (W^q_×_W^q W^q_) ≃_d_q - n + m+ c (W^q_×_W^q W^q_, -r+2n-2m-c) we see that d_q - n +m +c > (W^q_×_W^q W^q_). thus these groups vanish proving <ref>. To see how the main result follows from <ref>, note that it's consequence is the equivalence _() (f_!1_ (m)[2m +j], g_!1_ (n) [2n]) j̅^*p^*→_(×_BG) (f__!1 (m)[2m+j], g__!1(n) [2n]). Now we note that the stack ×_BG×_ is equivalent to ×_BG×_×_BG×_BG, which means that it is a scheme. Following the arguments of <cit.>[Lem. 2.37] we can identify the right hand side of <ref> with π_0 _(×_BG×_) (1(l + () + m -n)[2m-2n+2l +2() +j], a^! 1_B) via base change and purity, where a:×_BG×_) → B is the structure morphism. We now see by <ref> that this is just _l +d_ - n + m (×_BG×_, j) ≃ _d_ - n + m(×_ , j) which is what we wanted to show. § WEIGHT STRUCTURES We first remind the reader of the definition of a weight structure on the stable ∞-category. A weight structre on a stable ∞-category 𝒞, is the datat of two retract closed subcategories (𝒞_w ≥ 0 , 𝒞_w ≤ 0) such that: * Σ𝒞_w ≥ 0⊂𝒞_w ≥ 0 , Ω𝒞_w ≤ 0⊂𝒞_w ≤ 0. We write 𝒞_w ≥ n : = Σ^n 𝒞_w ≥ 0, 𝒞_w ≤ n : = Ω^n 𝒞_w ≤ 0 * If x ∈𝒞_w ≤ 0, y ∈𝒞_w ≥ 1 then π_0 (x, y ) ≃ 0. * For any x ∈𝒞 we have a cofiber sequence x_≤ 0→ x → x_≥ 1. with x_≤ 0∈𝒞_w ≤ 0 and x_≥ 1∈𝒞_w ≥ 1, called the weight truncations of x. We say that a weight structure is bounded if 𝒞 = ⋃_n ∈𝐙 (𝒞_w ≥ -n∩𝒞_w ≤ n). We also define the weight heart of weight structure to be 𝒞^_w : = 𝒞_w ≥ 0∩𝒞_w ≤ 0. Next we state a theorem due to Bondarko <cit.>[4.3.2.II], but see also Hébert <cit.>[Thm 1.9]. (see also <cit.>[Rem. 2.2.6] for the ∞-categorical version, which we state here) (Bondarko) Let 𝒞 be a stable ∞-category. Assume we are given a full subcategory ℬ⊂𝒞 such that * ℬ generates 𝒞 under finite limits, finite colimits and retracts. * ℬ has connective mapping spectra. Then we may define the following subcategories 𝒞_w ≥ 0 = {retracts of finite colimits of objects of ℬ} and 𝒞_w ≤ 0 = {retracts of finite limits of objects of ℬ}. These subcategories give a bounded weight structure on 𝒞 whose heart is the minimal retract-closed additive subcategory containing ℬ. Let = [X/G] where X is a quasi-projective scheme over a field k of characteristic 0 and G is an affine algebraic group. * The ∞-cateegory _(, Λ) admits a bounded weight structure, with _(, Λ)^_w≃(, Λ). * The ∞-category _ (, Λ) admits a weight structure which restricts to the weight structure on _ (, Λ) constructed in (1). For the first claim simply have to verify the conditions <ref>. In the notation of that theorem we take 𝒞 := _ (, Λ) ℬ : = (, Λ). Condition (1) follows from <ref> and condition (2) follows from <ref>. For the second claim we can use <cit.> [Prop. 1.4.2 (9)] to finish the argument. § EQUIVARIANT MOTIVES In this final section we identify the homotopy category of Chow motives (, Λ) with the natural generalization of both Laterveer's category of G-equivariant Chow motives <cit.> as well as Corti and Hanamura's category of Chow motives over a general base <cit.> when is a global quotient stack. That is to say when is BG our identification will show that (, 𝐐) is equivalent to Laterveer's original category. 
Let = [S/G] where S is quasi-projective over B : = (k) and G is an affine algebraic group over B. Suppose that , are smooth over B and projective over . Then following <cit.> we define the set of correspondences of degree r between and as follows: Let = ∐_i_i with _i irreducible components then _r (, ) : = ⊕_i__i +r ( ×__i)_Λ. We can construct a composition of correspondences ∘: _r (, ) ⊗_s (, ) →_r+s (, ) by considering the diagram (TL) at (0 , 1.5) ×_×_; (TR) at (5, 1.5) ×_××_; (BL) at (0, 0); (BR) at (5,0)×; [->] (TL) – (TR) ; [->] (TL) – (BL); [->] (TR) – (BR); [->] (BL) – (BR) node [midway, below] δ; which allows us to define α∘β := p__*( δ^! (α×β)), where δ: →× and p_, : ×_×_→×_ the projection. We note that identity 𝕀∈_0 (, ) is given by the scheme theoretic image of Δ : →×_. Let (, Λ) denote the classical category Chow motives. The objects of this category are triples (, p, m) where is smooth and projective over , p is an idempotent in _0(,) and m ∈𝐙. The morphism sets are defined as ((, p, m), (, q, n)) : = q ∘_m-n (,) ∘ p ⊆_m-n (,). When = BG, the category (, 𝐐) is equivalent to Laterveer's category of G-equivariant motives <cit.>. In particular, to get the equivalence one must re-index because in loc. cit. Chow cohomology is used and in our situation since we are working over a not necessarily smooth stack we must use Chow homology. The category (, Λ) is an additive, idempotent complete and symmetric monoidal where the tensor product is defined as (, p, m) ⊗ (, q, n) : = (×_, p × q, m+n). The proof follows by combining the arguments of <cit.>[0FGF], <cit.>[0FGB] and <cit.>[0FGC]. We will now construct a functor F: (, Λ) →(, Λ). Since (, Λ) is the idempotent completion of the full sub-category (, Λ)' spanned by pairs (, 𝕀, m), and (, Λ) is idempotent complete it is enough to construct a functor F' : (, Λ)' →(, Λ). On objects this functor sends (, m) to f_!1_ (m)[2m] ∈(, Λ). To describe what this functor does on morphisms we consider the isomorphism ϵ_,: π_0 _(, Λ) (f_! 1_ (m)[2m], g_! 1_ (n)[2n]) ≃_d_ -n+m (×_). constructed in the proof of <ref>. We define for α∈_m-n (, ) F' (α) := ϵ_, ^-1 (α). In order to verify that F' is a functor we need to check that F'(α∘β) = F'(α) ∘ F'(β). That is F' is natural with respect to composition of cycles. The following is a version of <cit.>[Prop. 2.39] for stacks. Let f: → , f:→, h:→ be projective morphisms, where , and are smooth over B and let α: f_!1_→ g_! 1_ and β: g_! 1_→ h_! 1_. Consider the cartesian diagram (TL) at (0 , 1.5) ×_×_; (TR) at (5, 1.5) ×_××_; (BL) at (0, 0); (BR) at (5,0)×; [->] (TL) – (TR) ; [->] (TL) – (BL); [->] (TR) – (BR)node [midway, right] f; [->] (BL) – (BR) node [midway, below] δ; Then we have the following equality ϵ_,(β∘α) = π_* δ^! (ϵ_,(α) ∘ϵ_, (β)) where π: ×_×_→×_ is the projection. We fix a Totaro gadget U ⊂ V and let q: U/G → B. Then as in the proof of <ref> the functor q^* = j̅^* p^* induces natural isomorphisms π_0 _(, Λ) (f_! 1, g_! 1) q^*→π_0 _(×_BG, Λ) (f__!1, g__! 1 ) π_0 _(, Λ) (g_! 1, h_! 1) q^*→π_0 _(×_BG, Λ) (g__!1, h__! 1 ). The result now follows from <cit.>[Prop. 2.39], applied to f_: ×_BG→×_BG, g_: ×_BG→×_BG, h_: ×_BG→×_BG. The map F' : (, Λ)' →(, Λ) is a functor. This follows directly from <ref> together with <cit.>[Props. 3.11, 3.15, 3.16]. Now by the universal property of idempotent completion we get a well defined functor F: (, Λ) →(, Λ) The functor F, <ref>, is an equivalence of categories F: (, Λ) ≃→(, Λ) Fully-faithfullness is clear. 
To see that it is essentially surjective, simply note that F is an additive functor and every generator of (BG) is contained in its essential image. Let be a Nis-loc stack over a field of characteristic 0. Then the homotopy category of the heart of the weight structure constructed in <ref> for _(, Λ) can be identified with (, Λ). That is, (, Λ) ≃_ (, Λ)^_w. For an algebraic group G, the category h(BG, 𝐐) is equivalent to the category of G-equivariant Chow motives of Laterveer. This is just <ref> combined with <ref>.
http://arxiv.org/abs/2306.09067v1
20230615114944
Winning Solution for the CVPR2023 Visual Anomaly and Novelty Detection Challenge: Multimodal Prompting for Data-centric Anomaly Detection
[ "Yunkang Cao", "Xiaohao Xu", "Chen Sun", "Yuqi Cheng", "Liang Gao", "Weiming Shen" ]
cs.CV
[ "cs.CV" ]
[1]Equal Contribution. [4]Corresponding Author. This technical report introduces the winning solution of the team Segment Any Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. Going beyond uni-modal prompts, e.g., language prompts, we present a novel framework, i.e., Segment Any Anomaly + (SAA+), for zero-shot anomaly segmentation with multi-modal prompts for the regularization of cascaded modern foundation models. Inspired by the great zero-shot generalization ability of foundation models like Segment Anything, we first explore their assembly (SAA) to leverage diverse multi-modal prior knowledge for anomaly localization. Subsequently, we further introduce multimodal prompts (SAA+) derived from domain expert knowledge and target image context to enable the non-parameter adaptation of foundation models to anomaly segmentation. The proposed SAA+ model achieves state-of-the-art performance on several anomaly segmentation benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will release the code of our winning solution for the CVPR2023 VAND challenge at <https://github.com/caoyunkang/Segment-Any-Anomaly>. [The extended-version paper with more details is available at <cit.>.] § INTRODUCTION Anomaly segmentation <cit.> has gained great popularity in industrial quality control <cit.>, medical diagnoses <cit.>, etc. We focus on the setting of zero-shot anomaly segmentation (ZSAS) on images, which aims to segment any anomalies in countless objects without using either normal or abnormal training samples. Recently, foundation models, e.g., SAM <cit.> and CLIP <cit.>, exhibit great zero-shot visual perception abilities by retrieving prior knowledge stored in these models via prompting <cit.>. In this work, we first construct a vanilla baseline, i.e., Segment Any Anomaly (SAA), by cascading prompt-guided object detection <cit.> and segmentation foundation models <cit.>, which serve as the Anomaly Region Generator and the Anomaly Region Refiner, respectively. Following the practice of unlocking foundation model knowledge <cit.>, naive language prompts, e.g., “” or “”, are utilized to segment the desired anomalies for a target image. Specifically, the language prompt is used to prompt the Anomaly Region Generator to generate prompt-conditioned box-level regions for desired anomaly regions. Then these regions are refined in the Anomaly Region Refiner to produce final predictions, i.e., masks, for anomaly segmentation. However, as is shown in Figure <ref>, the vanilla foundation model assembly (SAA) tends to cause significant false alarms, e.g., SAA wrongly refers to all wicks as anomalies whereas only the overlong wick is a real anomaly, which we attribute to the ambiguity brought by naive language prompts. Firstly, conventional language prompts may become ineffective when facing the domain shift between the pre-training data distribution of foundation models and downstream datasets for anomaly segmentation. Secondly, the degree of “” for a target depends on the object context, which is hard for coarse-grained language prompts, e.g., “”, to express exactly. Thus, to reduce the language ambiguity, we incorporate domain expert knowledge and target image context in our revamped framework, i.e., Segment Any Anomaly + (SAA+).
Specifically, expert knowledge provides detailed descriptions of anomalies that are relevant to the target in open-world scenarios. We utilize more specific descriptions as in-context prompts, effectively aligning the image content in both pre-trained and target datasets. Besides, we utilize target image context to reliably identify and adaptively calibrate anomaly segmentation predictions <cit.>. By leveraging the rich contextual information present in the target image, we can accurately associate the object context with the final anomaly predictions. § STARTING FROM VANILLA FOUNDATION MODEL ASSEMBLY WITH LANGUAGE PROMPT §.§ Problem Definition: Zero-shot Anomaly Segmentation (ZSAS) The goal of ZSAS is to perform anomaly segmentation on new objects without requiring any corresponding object training data. ZSAS seeks to create an anomaly map 𝐀∈ [0,1]^h × w × 1 based on an empty training set ∅, in order to identify the anomaly degree for individual pixels in an image 𝐈∈ℝ^h × w × 3 that includes novel objects. The ZSAS task has the potential to significantly reduce data requirements and lower real-world inspection deployment costs. §.§ Baseline: Segment Any Anomaly (SAA) For ZSAS, we start by constructing a vanilla foundation model assembly, i.e., Segment Any Anomaly (SAA), as shown in Fig. <ref>, which consists of an Anomaly Region Generator and an Anomaly Region Refiner. §.§.§ Anomaly Region Generator There we base the architecture of the region detector on a text-guided open-set object detection architecture for visual grounding. Specifically, given the bounding-box-level region set ℛ^B, and their corresponding confidence score set 𝒮, the module of anomaly region generator (Generator) can be formulated as, ℛ^B, 𝒮 := Generator(𝐈,𝒯) §.§.§ Anomaly Region Refiner To generate pixel-wise anomaly segmentation results, we propose Anomaly Region Refiner to refine the bounding-box-level anomaly region candidates into an anomaly segmentation mask set through SAM <cit.>. SAM accepts the bounding box candidates ℛ^B as prompts and obtain pixel-wise segmentation masks ℛ. The module of the Anomaly Region Refiner (Refiner) can be formulated as follows, ℛ := Refiner(𝐈, ℛ^B) Till then, we obtain the set of regions in the form of high-quality segmentation masks ℛ with corresponding confidence scores 𝒮. We summarize the framework (SAA) as follows, ℛ, 𝒮 := SAA(𝐈, 𝒯_n) where 𝒯_n is a naive class-agnostic language prompt, e.g., “”, utilized in SAA. §.§ Observation: Vanilla Language Prompt Fails to Unleash the Power of Foundation Models We present some preliminary experiments to evaluate the efficacy of vanilla foundation model assembly for ZSAS. Despite the simplicity and intuitiveness of the solution, we observe a language ambiguity issue. Specifically, certain language prompts, such as “”, may fail to detect the desired anomaly regions. For instance, as depicted in Fig. <ref>, all “” is erroneously identified as an anomaly by the SAA with the “” prompt. We propose introducing multimodal prompts generated by domain expert knowledge and the target image context to reduce language ambiguity, thereby achieving better ZSAS performance. § ADAPTING FOUNDATION MODELS TO ANOMALY SEGMENTATION WITH MULTI-MODAL PROMPTS To address language ambiguity in SAA and improve its ability on ZSAS, we propose an upgraded version called SAA+, incorporating multimodal prompts, as shown in Fig. <ref>. 
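Before turning to the individual prompts, the vanilla SAA assembly described above can be summarized as a two-stage composition. The sketch below uses hypothetical wrapper objects for the grounding detector and the segmentation model (the method names detect and segment are ours, not the official APIs of Grounding DINO or SAM):

def segment_any_anomaly(image, text_prompt, detector, segmenter):
    """Vanilla SAA baseline: Anomaly Region Generator followed by Anomaly Region Refiner.

    detector  : prompt-guided open-set detector (e.g. a Grounding DINO wrapper)
                returning box-level regions R_B with confidence scores S
    segmenter : promptable segmentation model (e.g. a SAM wrapper) that
                refines boxes into pixel-level masks R
    """
    # Anomaly Region Generator: R_B, S := Generator(I, T)
    boxes, scores = detector.detect(image, text_prompt)

    # Anomaly Region Refiner: R := Refiner(I, R_B)
    masks = [segmenter.segment(image, box) for box in boxes]

    # SAA output: candidate masks with their confidences
    return masks, scores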
In addition to leveraging the knowledge gained from pre-trained foundation models, SAA+ utilizes both domain expert knowledge and target image context to generate more accurate anomaly region masks. We provide further details on these multimodal prompts below. §.§ Prompts Generated from Domain Expert Knowledge To address language ambiguity, we leverage domain expert knowledge that contains useful prior information about the target anomaly regions. Specifically, although experts may not provide a comprehensive list of potential open-world anomalies for a new product, they can identify some candidates based on their past experiences with similar products. Domain expert knowledge enables us to refine the naive “” prompt into more specific prompts that describe the anomaly state in greater detail. In addition to language prompts, we introduce property prompts to complement the lack of awareness on specific properties like “” and “”  <cit.> in existing foundation models <cit.>. §.§.§ Anomaly Language Expression as Prompt To describe potential open-world anomalies, we propose designing more precise language prompts. These prompts are categorized into two types: class-agnostic and class-specific prompts. Class-agnostic prompts (𝒯_ a) are general prompts that describe anomalies that are not specific to any particular category, e.g., “” and “”. Class-specific prompts (𝒯_ s) are designed based on expert knowledge of abnormal patterns with similar products to supplement more specific anomaly details. We use prompts already employed in the pre-trained visual-linguistic dataset, e.g., “” and “”, to query the desired regions. This approach reformulates the task of finding an anomaly region into locating objects with a specific anomaly state expression, which is more straightforward to utilize foundation models than identifying “” within an object context. By prompting SAA with anomaly language prompts 𝒫^L={𝒯_ a, 𝒯_ s} derived from domain expert knowledge, we generate finer anomaly region candidates ℛ and corresponding confidence scores 𝒮. §.§.§ Anomaly Object Property as Prompt Current foundation models <cit.> have limitations when it comes to referring to objects with specific property descriptions, such as size or location <cit.>, which are important for describing anomalies, such as “” To incorporate this critical expert knowledge, we propose using anomaly property prompts formulated as rules rather than language. Specifically, we consider the location and area of anomalies. Anomaly Location. Anomalies typically locate within the inspected objects. To guarantee this, we calculate the intersection over union (IoU) between the potential anomaly regions and the inspected object. By applying an expert-derived IoU threshold, denoted as θ_IoU, we filter out anomaly candidates with IoU values below this threshold, retaining regions that are more likely to be abnormal. Anomaly Area. The size of an anomaly, as reflected by its area, is also a property that can provide useful information. In general, anomalies should be smaller than the size of the inspected object. Experts can provide a suitable threshold value θ_area for the specific type of anomaly being considered. Candidates with areas unmatched with θ_area·Object Area can then be filtered out. 
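A possible implementation of these two rules, preceding the filter operation formalized next, is sketched below. The function and variable names and the default threshold values are our own placeholders, not values from the released code; the sketch simply drops candidates whose IoU with the object mask falls below θ_IoU or whose area exceeds θ_area times the object area:

import numpy as np

def filter_by_property_prompts(masks, scores, object_mask,
                               theta_iou=0.5, theta_area=0.5):
    """Keep only anomaly candidates consistent with the property prompts.

    masks       : list of (H, W) boolean candidate masks from SAA
    scores      : list of confidence scores, one per mask
    object_mask : (H, W) boolean mask of the inspected object
    theta_iou   : expert threshold on IoU with the inspected object
    theta_area  : expert threshold; candidates larger than
                  theta_area * object area are rejected
    """
    kept_masks, kept_scores = [], []
    obj_area = object_mask.sum()
    for m, s in zip(masks, scores):
        # Anomaly Location: IoU between the candidate and the inspected object.
        inter = np.logical_and(m, object_mask).sum()
        union = np.logical_or(m, object_mask).sum()
        iou = inter / max(union, 1)
        # Anomaly Area: the candidate should be small relative to the object.
        area_ok = m.sum() <= theta_area * obj_area
        if iou >= theta_iou and area_ok:
            kept_masks.append(m)
            kept_scores.append(s)
    return kept_masks, kept_scores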
By combining the two property prompts 𝒫^P={θ_area, θ_IoU}, we can filter the set of candidate regions ℛ to obtain a subset of selected candidates ℛ^P with corresponding confidence scores 𝒮^P using the filter function (Filter), ℛ^P, 𝒮^P := Filter(ℛ, 𝒫^P) §.§ Prompts Derived from Target Image Context Besides incorporating domain expert knowledge, we can leverage the information provided by the input image itself to improve the accuracy of anomaly region detection. In this regard, we propose two prompts induced by image context. §.§.§ Anomaly Saliency as Prompt Predictions generated by foundation models like <cit.> using the prompt “” can be unreliable due to the domain gap between pre-trained language-vision datasets <cit.> and targeted anomaly segmentation datasets <cit.>. To calibrate the confidence scores of individual predictions, we propose Anomaly Saliency Prompt mimicking human intuition. In specific, humans can recognize anomaly regions by their discrepancy with their surrounding regions <cit.>, i.e., visual saliency could indicate the anomaly degree. Hence, we calculate a saliency map (𝐬) for the input image by computing the average distances between the corresponding pixel feature (𝐟) and its N nearest neighbors, 𝐬_ij := 1/N∑_𝐟∈ N_p(𝐟_ij)(1- ⟨𝐟_ij,𝐟⟩) where (i,j) denotes to the pixel location, N_p(𝐟_ij) denotes to the N nearest neighbors of the corresponding pixel, and ⟨·, ·⟩ refers to the cosine similarity. We use pre-trained CNNs from large-scale image datasets <cit.> to extract image features, ensuring the descriptiveness of features. The saliency map indicates how different a region is from other regions. The saliency prompts 𝒫^S are defined as the exponential average saliency value within the corresponding region masks, 𝒫^S := {exp(∑_i j𝐫_i j𝐬_i j/∑_i j𝐫_i j) | 𝐫∈ℛ^P } The saliency prompts provide reliable indications of the confidence of anomaly regions. These prompts are employed to recalibrate the confidence scores generated by the foundation models, yielding new rescaled scores 𝒮^S based on the anomaly saliency prompts 𝒫^S. These rescaled scores provide a combined measure that takes into account both the confidence derived from the foundation models and the saliency of the region candidate. The process is formulated as follows, 𝒮^S := { p · s | p ∈𝒫^S, s ∈𝒮^P } §.§.§ Anomaly Confidence as Prompt Typically, the number of anomaly regions in an inspected object is limited. Therefore, we propose anomaly confidence prompts 𝒫^C to identify the K candidates with the highest confidence scores based on the image content and use their average values for final anomaly region detection. This is achieved by selecting the top K candidate regions based on their corresponding confidence scores, as shown in the following, ℛ^C, 𝒮^C := Top_K(ℛ^P,𝒮^S) Denote a single region and its corresponding score as 𝐫^C and s^C, we then use these K candidate regions to estimate the final anomaly map, 𝐀_ij := ∑_𝐫^C ∈ℛ^C𝐫^C_i j· s^C/∑_𝐫^C ∈ℛ^C𝐫^C_i j With the proposed multimodal prompts (𝒫^L, 𝒫^P,𝒫^S, and 𝒫^C), SAA is regularized and updated into our final framework, i.e., Segment Any Anomaly + (SAA+), which makes more reliable anomaly predictions. § EXPERIMENTS In this section, we first assess the performance of SAA/SAA+ on several anomaly segmentation benchmarks. Then, we extensively study the effectiveness of individual multimodal prompts. §.§ Experimental Setup Datasets. 
We leverage two datasets with pixel-level annotations: VisA <cit.> and MVTec-AD <cit.>, both of which comprise a variety of object subsets, e.g., circuit boards. Evaluation Metrics. ZSAS performance is evaluated in terms of max-F1-pixel (ℱ_p)  <cit.>, which measures the F1-score for pixel-wise segmentation at the optimal threshold. Implementation Details. We adopt the official implementations of GroundingDINO <cit.> and SAM <cit.> to construct the vanilla baseline (SAA). Details about the prompts derived from domain expert knowledge are explained in the supplementary material. For the saliency prompts induced from image content, we utilize the WideResNet50 <cit.> network, pre-trained on ImageNet <cit.>, and set N=400 in line with prior studies <cit.>. For anomaly confidence prompts, we set the hyperparameter K as 5 by default. Input images are fixed at a resolution of 400 × 400. §.§ Main Results Methods for Comparison. We compare our final model, i.e., Segment Any Anomaly + (SAA+) with several concurrent state-of-the-art methods, including WinClip <cit.>, UTAD <cit.>, ClipSeg <cit.>, and our vanilla baseline (SAA). For WinClip, we report its official results on VisA and MVTec-AD. For the other three methods, we use official implementations and adapt them to the ZSAS task. Quantitative Results: As is shown in Table <ref>, SAA+ method outperforms other methods in ℱ_p by a significant margin. Although WinClip <cit.>, ClipSeg <cit.>, and SAA also use foundation models, SAA+ better unleash the capacity of foundation models and adapts them to tackle ZSAS. The remarkable performance of SAA+ meets the expectation to segment any anomaly without training. Qualitative Results: Fig. <ref> presents qualitative comparisons between SAA+ and previous competitive methods, where SAA+ achieves better performance. Moreover, the visualization shows SAA+ is capable of detecting all kinds of anomalies. §.§ Ablation study In Table <ref>, we perform component-wise analysis to ablate specific prompt designs in our framework, which verifies the effectiveness of all the multimodal prompts, including language prompt (𝒫^L), property prompt (𝒫^P), saliency prompt (𝒫^S), and confidence prompt (𝒫^C). § CONCLUSION In this work, we explore how to segment any anomaly without any further training by unleashing the full power of modern foundation models. We owe the struggle of adapting foundation model assembly to anomaly segmentation to the prompt design, which is the key to controlling the function of off-the-shelf foundation models. Thus, we propose a novel framework, i.e., Segment Any Anomaly +, to leverage multimodal prompts derived from both expert knowledge and target image context to regularize foundation models free of training. Finally, we successfully adapt multiple foundation models to tackle zero-shot anomaly segmentation, achieving new SoTA results on several benchmarks. unsrt 10 cao2023segment Yunkang Cao, Xiaohao Xu, Chen Sun, Yuqi Cheng, Zongwei Du, Liang Gao, and Weiming Shen. Segment any anomaly without training via hybrid prompt regularization. arXiv preprint arXiv:2305.10724, 2023. liu2023grounding Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 
kirillov2023segment Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. deng2022anomaly Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737–9746, 2022. cao2022informative Yunkang Cao, Qian Wan, Weiming Shen, and Liang Gao. Informative knowledge distillation for image anomaly segmentation. Knowledge-Based Systems, 248:108846, 2022. wan_position_2022 Qian Wan, Yunkang Cao, Liang Gao, Weiming Shen, and Xinyu Li. Position encoding enhanced feature mapping for image anomaly detection. In 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), pages 876–881. IEEE, 2022. cao_collaborative_2023 Yunkang Cao, Xiaohao Xu, Zhaoge Liu, and Weiming Shen. Collaborative discrepancy optimization for reliable image anomaly localization. IEEE Transactions on Industrial Informatics, pages 1–10, 2023. bergmann2019mvtec Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. MVTec AD – A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 9592–9600, 2019. baur_autoencoders_2021 Christoph Baur, Stefan Denner, Benedikt Wiestler, Nassir Navab, and Shadi Albarqouni. Autoencoders for unsupervised anomaly segmentation in brain mr images: a comparative study. Medical Image Analysis, 69:101952, 2021. radford2021learning Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021. ju_prompting_2022 Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, and Weidi Xie. Prompting visual-language models for efficient video understanding. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV, pages 105–124. Springer, 2022. jia_visual_2022 Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pages 709–727. Springer, 2022. zang_unified_2022 Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225, 2022. shen_multitask_2022 Sheng Shen, Shijia Yang, Tianjun Zhang, Bohan Zhai, Joseph E Gonzalez, Kurt Keutzer, and Trevor Darrell. Multitask vision-language prompt tuning. arXiv preprint arXiv:2211.11720, 2022. zhou_learning_2022 Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. Int J Comput Vis, 130(9):2337–2348, 2022. clipseg2022 Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086–7096, 2022. jeong2023winclip Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero-/few-shot anomaly classification and segmentation. arXiv preprint arXiv:2303.14814, 2023. 
object_calibration Xiaohao Xu, Jinglu Wang, Xiang Ming, and Yan Lu. Towards robust video object segmentation with adaptive object calibration. In Proceedings of the 30th ACM International Conference on Multimedia, pages 1–10, 2022. xu2022reliable Xiaohao Xu, Jinglu Wang, Xiao Li, and Yan Lu. Reliable propagation-correction modulation for video object segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2946–2954, 2022. paiss_count_2023 Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. arXiv preprint arXiv:2302.12066, 2023. Laion400 Christoph Schuhmann, Robert Kaczmarczyk, Aran Komatsuzaki, Aarush Katta, Richard Vencu, Romain Beaumont, Jenia Jitsev, Theo Coombes, and Clayton Mullis. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. In NeurIPS Workshop Datacentric AI. Jülich Supercomputing Center, 2021. zou2022spot Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. SPot-the-Difference self-supervised pre-training for anomaly detection and segmentation. In Proceedings of the European Conference on Computer Vision, 2022. aota2023zero Toshimichi Aota, Lloyd Teh Tzer Tong, and Takayuki Okatani. Zero-shot versus many-shot: Unsupervised texture anomaly detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5564–5572, 2023. hinton2012imagenet Geoffrey E Hinton, Alex Krizhevsky, and Ilya Sutskever. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25(1106-1114):1, 2012. zagoruyko2016wideresnet Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Edwin R. Hancock Richard C. Wilson and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 87.1–87.12. BMVA Press, September 2016.
http://arxiv.org/abs/2306.06850v1
20230612035050
Volume-DROID: A Real-Time Implementation of Volumetric Mapping with DROID-SLAM
[ "Peter Stratton", "Sandilya Sai Garimella", "Ashwin Saxena", "Nibarkavi Amutha", "Emaad Gerami" ]
cs.RO
[ "cs.RO", "cs.CV" ]
Peter Stratton, Sandilya Sai Garimella, Ashwin Saxena, Nibarkavi Amutha, and Emaad Gerami (Robotics Department, University of Michigan, Ann Arbor, USA) Volume-DROID: A Real-Time Implementation of Volumetric Mapping with DROID-SLAM This paper presents Volume-DROID, a novel approach for Simultaneous Localization and Mapping (SLAM) that integrates Volumetric Mapping and Differentiable Recurrent Optimization-Inspired Design (DROID). Volume-DROID takes camera images (monocular or stereo) or frames from a video as input and combines DROID-SLAM, point cloud registration, an off-the-shelf semantic segmentation network, and Convolutional Bayesian Kernel Inference (ConvBKI) to generate a 3D semantic map of the environment and provide accurate localization for the robot. The key innovation of our method is the real-time fusion of DROID-SLAM and Convolutional Bayesian Kernel Inference (ConvBKI), achieved through the introduction of point cloud generation from RGB-Depth frames and optimized camera poses. This integration, engineered to enable efficient and timely processing, minimizes lag and ensures effective performance of the system. Our approach facilitates functional real-time online semantic mapping with just camera images or stereo video input. Our paper offers an open-source Python implementation of the algorithm, available at <https://github.com/peterstratton/Volume-DROID>. Index terms — Simultaneous Localization and Mapping (SLAM), DROID-SLAM, Bayesian Kernel Inference (BKI) Mapping. § INTRODUCTION Simultaneous Localization and Mapping (SLAM) is a fundamental mobile robotics problem in which a robot constructs a map of its environment and localizes itself within the map. Previous SLAM approaches have focused on using particle filters <cit.>, extended Kalman filters <cit.>, and graph-based optimization methods <cit.> to complete localization, but recently the focus has shifted to applying machine learning to graph-based SLAM techniques <cit.>. Our paper focuses on enhancing the mapping representation of DROID-SLAM (Differentiable Recurrent Optimization-Inspired Design SLAM) <cit.> by incorporating a volumetric mapping representation. We propose the application of Convolutional Bayesian Kernel Inference Neural Networks (ConvBKI) <cit.> to generate a 3D semantic map of the environment using camera poses optimized by DROID-SLAM. By integrating a 3D semantic map, our approach improves the accuracy and completeness of environmental information for autonomous navigation. We adapt DROID-SLAM for 3D semantic mapping, integrate it with a simulated robot platform, and evaluate its performance using simulation data. The potential contributions of our work to mobile robotics and visual SLAM, particularly for 3D mapping and autonomous navigation, are significant. § RELATED WORK §.§ Visual SLAM Recently, there have been significant improvements in SLAM systems, which are used to estimate the position of a robot and create a map of its environment. Visual Simultaneous Localization and Mapping (VSLAM) methods refer to using cameras for this purpose, and are preferred over Light Detection And Ranging (LiDAR) methods due to their lower weight, cost, and ability to provide a better representation of the environment. As a result, several VSLAM approaches have been developed using different types of cameras and have been tested in various conditions and on different datasets.
This has led to a surge of interest among researchers and the development of multiple methods <cit.>. §.§ DROID-SLAM DROID-SLAM receives images or frames from a video and builds an image consistency graph whose vertices correspond to images. If two images are at least three time steps apart and view the same scene, they are considered to be connected; this connection is referred to as an edge. Each new frame is then passed through a feature extraction neural network. The extracted dense features are used to compute a 4D correlation volume (C_ij) via a dot product between the feature maps of each connected pair of frames (each edge), on which a correlation lookup operator is defined. The correlation lookup operator (L_r) takes a grid of image coordinates as input and outputs the correlation between the features at each coordinate. From this point, the correspondence between each pair of frames in the image consistency graph is calculated and used to index the correlation volume as well as to determine the optical flow between the two frames. The correspondence field (p_ij) is built from the camera model (∏_c), which maps a set of 3D points onto the image, and the inverse projection function ∏^-1_c, which maps an inverse depth map d and coordinate grid p_i to a 3D point cloud. The correlation features and flow features are each mapped through two Convolutional Neural Network (CNN) layers before being injected into a convolutional Gated Recurrent Unit (GRU) <cit.>. The learned component of DROID-SLAM resides in the update operator, and specifically in the ConvGRU, whose gating mechanism selectively keeps or discards features and thus controls the information flow. The ConvGRU outputs a revised flow field (r_ij) and its corresponding confidence weight (w_ij). The revised flow is concatenated with context features from the poses and depths, yielding a corrected correspondence (p^*_ij). The corrected correspondence, along with the confidence weight, is passed to the Dense Bundle Adjustment (DBA) layer, which maps the set of flow revisions into a set of pose and pixel-wise depth updates. A sequence of these operations, minimizing a cost function defined over the |E| edges of the graph, results in optimized camera poses and optimized depth. Note that DROID-SLAM parameterizes depth as inverse depth. §.§ Convolution BKI Convolutional Bayesian Kernel Inference (ConvBKI) is a differentiable 3D semantic mapping algorithm which combines the reliability of classical probabilistic mapping algorithms with the efficiency and optimizability of modern neural networks. The ConvBKI layer explicitly performs Bayesian inference within a depthwise separable convolution layer. 3D points are assigned semantic labels from off-the-shelf semantic segmentation networks and grouped into voxels by summing coinciding points. The constructed semantic volumes are convolved with a depthwise filter to perform a Bayesian update on a semantic 3D map in real-time. ConvBKI learns a distribution to geometrically associate points with voxels. A compound kernel enables ConvBKI to learn more expressive semantic-geometric distributions <cit.>. §.§ Dilated Convolution for Semantic Segmentation To create a volumetric map, we need a semantic segmentation network. Older models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction problems such as semantic segmentation are structurally different from image classification.
Dilated convolution networks are specifically designed for dense prediction. Dilated convolutions systematically aggregate multi-scale contextual information, supporting an exponential expansion of the receptive field without loss of resolution or coverage. Dilated convolution networks achieve better accuracy than other state-of-the-art semantic segmentation models <cit.>. § METHODOLOGY §.§ Overall Process The process for Volume-DROID begins with the TartanAir neighborhood dataset, a virtual environment created in Unreal Engine, which eliminates the need to collect data for a custom environment. Images or frames of the environment are then input to DROID-SLAM to generate optimized camera poses. The optimized camera poses are used to generate a point cloud, with each pose stored as a 3D point. Subsequent images or frames are processed through DROID-SLAM to update the point cloud. Finally, the collection of 3D poses is passed to ConvBKI, which converts them into a semantic volumetric map of the environment using voxels. Figure <ref> provides a visual representation of this process. §.§ From Pose and Depth to 3D Point Cloud To generate a 3D point cloud in the world frame, we combined the optimized SE(3) pose output from DROID-SLAM with the RGB-D frames obtained from TartanAir. The process leverages the SE(3) robot pose to establish the camera's position and orientation relative to the world frame. The TartanAir RGB-Depth frames provide the necessary RGB and depth values, along with any additional information required for point cloud generation. The camera intrinsic parameters, including the focal length and principal point, are also used to map the 2D image coordinates and convert them to 3D world coordinates. The camera intrinsic parameter matrix, denoted as K, is represented by Equation <ref>, where f_x and f_y correspond to the focal lengths in pixels along the x and y directions, respectively. The camera frame centers are denoted by c_x and c_y along the x and y directions, respectively, and the skew coefficient, denoted as S, is assumed to be 0. The matrix K_h represents the homogeneous form of matrix K. K = [ f_x S c_x; 0 f_y c_y; 0 0 1 ], K_h = [ f_x S c_x 0; 0 f_y c_y 0; 0 0 1 0; 0 0 0 1 ]. The primary matrix equation used in our point cloud projection is shown in Equation <ref>. The left-hand side (LHS) is the homogeneous representation of the point cloud coordinates, with z representing the depth of each pixel from the u-v depth map. Equation <ref> must be computed for each pixel in the u-v depth map. [ x; y; z; 1 ] = z [ R_3×3 t_3×1; 0_1×3 1 ]^-1[ K_3×3 0_3×1; 0_1×3 1 ]^-1[ u; v; 1; 1/z ] Considering a resolution of 640 x 480 pixels for both depth and RGB frames, the computation in Equation <ref> needs to be performed 307,200 times in total. To reduce the computational burden, we derived the inverse of matrix K_h, denoted as K_h^-1, as illustrated in Equation <ref>. K_h^-1 = [ K_3×3 0_3×1; 0_1×3 1 ]^-1 = [ 1/f_x -S/(f_xf_y) (Sc_y - c_xf_y)/(f_xf_y) 0; 0 1/f_y -c_y/f_y 0; 0 0 1 0; 0 0 0 1 ] Instead of computing the 3D world coordinate for each u-v pixel incrementally, we performed a batch computation on a version of Equation <ref>, as shown in Equations <ref> and <ref>.
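Before stepping through the matrix form, a minimal vectorized sketch of this batch back-projection is given below (Python/numpy). It assumes zero skew, a metric depth map, and a world-to-camera pose T_cw = [R t; 0 1] as in the equation above; the function and variable names are ours for illustration and do not reproduce the released implementation.

import numpy as np

def backproject_to_world(depth, K, T_cw):
    """Vectorized back-projection of a (H, W) depth map to world-frame points.

    depth : (H, W) metric depth values z(u, v)
    K     : 3x3 pinhole intrinsic matrix (zero skew assumed)
    T_cw  : 4x4 world-to-camera pose [R | t; 0 1]
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # K^-1 [u, v, 1]^T : normalized camera rays for every pixel
    x = (u - K[0, 2]) / K[0, 0]
    y = (v - K[1, 2]) / K[1, 1]
    rays = np.stack([x, y, np.ones_like(x)], axis=-1).reshape(-1, 3)
    # Scale the rays by depth to obtain camera-frame points (one row per pixel)
    pts_cam = rays * depth.reshape(-1, 1)
    # Camera -> world: x_w = R^T (x_c - t) for T_cw = [R | t]
    R, t = T_cw[:3, :3], T_cw[:3, 3]
    pts_world = (pts_cam - t) @ R   # row-vector form of R^T (x_c - t)
    return pts_world                # (H*W, 3)

For the 640 x 480 frames used here, this yields all 307,200 world-frame points in a single batched operation; the matrix algebra behind it is spelled out next.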
The batch computation procedure begins with finding the matrix product of the SE(3) pose, the intrinsic values of the camera, and a batch matrix of u-v pixels shown on the right-hand side (RHS) of Equation <ref>. Prior to the element-wise multiplication of the RHS of Equation <ref> with the corresponding z_i matrix, we denote the world frame coordinates on the LHS of Equation <ref> with a z- superscript. [ x_1 x_2 … x_i; y_1 y_2 … y_i; z_1 z_2 … z_i; 1 1 … 1 ]^z-_4× i=307200 = [ R_3×3 t_3×1; 0_1×3 1 ]^-1_4×4[ K_3×3 0_3×1; 0_1×3 1 ]^-1_4×4 ×[ u_1 u_2 … u_i; v_1 v_2 … v_i; 1 1 … 1; 1/z_1 1/z_2 … 1/z_i ]_4× i=307200 The final batch computation for obtaining world coordinates of each pixel from the u-v depth map is shown in Equation <ref>, where we apply element-wise multiplication (denoted by ⊙) between the world coordinates prior to z-multiplication (the z- superscript matrix) and a row vector of the corresponding z_i. [ x_1 x_2 … x_i; y_1 y_2 … y_i; z_1 z_2 … z_i; 1 1 … 1 ]_4× i=307200 = [ z_1 z_2 … z_i ]_1× i=307200⊙[ x_1 x_2 … x_i; y_1 y_2 … y_i; z_1 z_2 … z_i; 1 1 … 1 ]^z-_4× i=307200 The matrix computation in Equation <ref> is completed in real-time as the SE(3) pose and RGB-D frames enter this portion of the SLAM pipeline. Each point from the point cloud is then allocated a voxel based on ConvBKI. Figure <ref> visually presents the point cloud projection and Convolutional BKI process, facilitating the reader's understanding of the operation sequence. § EXPERIMENTS AND RESULTS To evaluate our algorithm, we ran inference on the neighborhood section of the TartanAir dataset <cit.>. The TartanAir dataset contains photo-realistic simulation environments with various lighting conditions, weather, and moving objects. DROID-SLAM is trained and evaluated on multiple environments within the TartanAir dataset. However, NeuralBKI was trained and evaluated on the KITTI dataset <cit.>. To leverage preexisting models and avoid training from scratch, we evaluated Volume-DROID using the neighborhood subset of the TartanAir dataset. This subset was selected because it exhibits a higher degree of overlapping class labels with the KITTI dataset, compared to other subsets within TartanAir. By using the pretrained DROID-SLAM and NeuralBKI models, trained on the TartanAir and KITTI datasets respectively, we were able to effectively execute Volume-DROID. §.§ Implementation To demonstrate the proof of concept and evaluate the performance of Volume-DROID during inference, our focus was on integrating trained DROID-SLAM, semantic segmentation, and NeuralBKI networks. However, for our experiments, we were only able to successfully integrate pretrained DROID-SLAM and NeuralBKI models. While we did not integrate a trained semantic segmentation network, it is important to note that our open-source Python implementation makes it straightforward to incorporate a trained semantic segmentation model. Given the unavailability of a fully trained semantic segmentation network, we leveraged the semantic segmentation ground truth labels from the TartanAir dataset directly in ConvBKI. However, this approach posed a challenge due to the disparity between the semantic classes in TartanAir and the KITTI dataset. To mitigate this issue, we performed preprocessing on the TartanAir ground truth labels and mapped them to the corresponding KITTI class labels. However, this mapping process introduced some label noise since the neighborhood subset of TartanAir contains more semantic classes than KITTI.
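As an illustration of this preprocessing step, a minimal sketch of the label remapping is shown below. The numeric IDs are placeholders — the actual TartanAir and KITTI label values are not reproduced here — and the fallback label stands in for the arbitrary assignment discussed next.

import numpy as np

# Hypothetical TartanAir -> KITTI label remapping; the IDs below are
# placeholders, not the datasets' actual label values.
TARTANAIR_TO_KITTI = {
    3: 0,    # e.g. ground/road-like surface -> "road"
    7: 2,    # e.g. house facade            -> "building"
    12: 8,   # e.g. tree                    -> "vegetation"
}
FALLBACK_ID = 9  # classes with no clear KITTI counterpart (arbitrary choice)

def remap_labels(tartanair_seg):
    """Remap a (H, W) TartanAir label image to KITTI-style labels."""
    out = np.full_like(tartanair_seg, FALLBACK_ID)
    for src, dst in TARTANAIR_TO_KITTI.items():
        out[tartanair_seg == src] = dst
    return out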
Consequently, TartanAir classes without a direct mapping to a specific KITTI class were assigned an arbitrary KITTI class label. As a consequence of these factors, the semantic labeling exhibited some degree of noise, resulting in imperfect 2D semantic segmentation input for ConvBKI. Nevertheless, it is worth emphasizing that with a trained semantic segmentation network readily integrated into our open-source Python implementation, the overall performance of Volume-DROID in this aspect can be significantly improved. We built Docker containers with publicly available Docker images to run and test Volume-DROID. This architecture makes Volume-DROID flexible to run on any operating system or environments which are able to run Docker. Our Docker image included pre-installed libraries necessary for running the components of Volume-DROID, namely Robot Operating System (ROS) Noetic and PyTorch. Since we ran everything on a server with no default Graphical User Interface (GUI), we had to use a noVNC (an open source Virtual Network Computing client) GUI display Docker container that could be accessed from any browser. To visually inspect the 3D semantic map generated by Volume-DROID, we used ROS Visualization (RViz) within the GUI container. We used ROS to establish real-time communication between Volume-DROID and the ConvBKI functions, enabling the seamless exchange of optimized pose information. §.§ Qualitative Results By using the optimized poses obtained from DROID-SLAM, we successfully visualized the voxels. However, we encountered difficulties in generating semantic segmentation results due to issues with the pre-trained model trained on the KITTI dataset. As shown in Figure <ref>, the RViz visualization of the Volume-DROID output reveals discernible outlines of shapes, such as the ground, building sides, and roofs. Nevertheless, the non-uniform 3D segmentation of the ground may be attributed to incorrect labels introduced during the TartanAir-to-KITTI class matching process. Furthermore, since ConvBKI was trained on KITTI data and not specifically evaluated on TartanAir data, its ability to generalize to the TartanAir dataset remains uncertain. Consequently, the patchy semantic labeling observed may be an artifact of ConvBKI's limited generalization to the TartanAir dataset. §.§ Quantitative Results DROID-SLAM measured three separate error scores during its tests. The first of these scores was the Absolute Trajectory Error (ATE), which can be described as a comparison between a robot's computed trajectory and its actual, traveled trajectory. The second score was the Relative Pose Error (RPE), which compared the reconstructed relative transformations between nearby poses to the actual relative transformations. The RPE returned two values due to the fact that it compares a pair of timestamps. The last of the three scores was the KITTI score, which provided the translational and rotational errors for all possible subspace lengths. The values found in the tests can be found in Table 1. No semantic segmentation results were obtained, so no quantitative results are provided. § DISCUSSION §.§ Analysis of Results The analysis of the results indicates that DROID-SLAM performed well, demonstrating the effectiveness of the Volume-DROID algorithm. However, there are still opportunities for improvement within the system. A limited time frame prevented us from fully training the algorithms. 
Specifically, training DROID-SLAM alone would have taken a week using four RTX 3090s, which was not feasible within our project timeline. To overcome this constraint, we prioritized developing a proof of concept/minimum viable product that demonstrates a real-time workable pipeline. Therefore, the focus of this paper is on the construction of the pipeline and showcasing its functionality, rather than presenting fully trained models or quantitative results. Additionally, the time constraints hindered the implementation of a semantic segmentation network within the pipeline. Integrating a trained semantic segmentation network would have potentially resulted in further improvements to the algorithm, leading to even lower error scores than those presented in Table 1. §.§ Future Work If more time and computational resource had been available, two key areas would have been the focus of attention: training a mapper for DROID-SLAM and retraining ConvBKI. Training a semantic mapper on DROID-SLAM's data could result in improved inference performance. Similarly, retraining ConvBKI would enhance the accuracy of its semantic map. Following the completion of training tests for both DROID-SLAM and ConvBKI, the next step would be to evaluate Volume-DROID on a robot equipped with only a camera, as opposed to using TartanAir's purely simulation data. This real-world deployment would provide valuable insights into the algorithm's performance and robustness. Additionally, integrating Volume-DROID with Hierarchical Representation <cit.> could be explored. This integration would enable a more abstract semantic representation of the environment, leading to improved planning capabilities and a better understanding of the environment for both humans and robots. It is important to note that due to the limited training time available, these further improvements and integrations were not pursued in this project. However, they represent potential avenues for future research and development of Volume-DROID, through bridging the theory in this paper and our open-source implementation. § CONCLUSION In this paper, we presented Volume-DROID, a novel approach for real-time simultaneous localization and mapping (SLAM) that integrates volumetric mapping with DROID-SLAM. Our approach combines DROID-SLAM, point cloud registration, an off-the-shelf semantic segmentation network, and Convolutional Bayesian Kernel Inference (ConvBKI) to generate a 3D semantic map of the environment and provide accurate localization for the robot. We introduced the concept of real-time fusion of DROID-SLAM and ConvBKI through the generation of point clouds from RGB-Depth frames and optimized camera poses. This integration allows for efficient and timely processing, minimizing lag and ensuring effective performance of the system. By incorporating a 3D semantic map, our approach improves the accuracy and completeness of environmental information for autonomous navigation. We provided a detailed overview of the methodology and the overall process of Volume-DROID, including the steps involved in generating a 3D point cloud from camera poses and RGB-D frames. We also discussed the significance of using dilated convolutions for semantic segmentation and the potential of ConvBKI for real-time 3D semantic mapping. We believe that the contributions of our work to the field of mobile robotics and visual SLAM, particularly in the context of 3D mapping and autonomous navigation, are significant. 
The open-source Python implementation of the Volume-DROID algorithm further enhances its accessibility and encourages its adoption by the research community. In future work, we plan to evaluate the performance of Volume-DROID using real-world data and conduct experiments to validate its effectiveness in various environments. We also aim to explore further optimizations and refinements to improve the efficiency and accuracy of the system. Overall, we are optimistic about the potential of Volume-DROID to contribute to the advancement of SLAM techniques and enable more robust and reliable autonomous systems.
http://arxiv.org/abs/2306.03265v1
20230605213446
An Electroacoustic Lumped Element Model of a Dielectric Elastomer Membrane
[ "C. Solano", "Y. Zhang", "L. N. Cattafesta III" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
An Electroacoustic Lumped Element Model of a Dielectric Elastomer Membrane Department of Mechanical Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310 Mechanical, Materials, and Aerospace Engineering Department, Illinois Institute of Technology, Chicago, IL 60616 [email protected] Dielectric elastomers are widely studied for their use in robotic and medical devices due to their shape-changing properties. Recently, they have also been incorporated into acoustic devices, motivating the development of an electroacoustic model for dielectric elastomers. This paper provides a lumped element model based on a prestretched membrane approximation of a dielectric elastomer actuated by a dc voltage. The electroacoustic model is validated via experiments using a laser Doppler vibrometer and acoustic impedance tube measurements. Good agreement between the experiments and the model is demonstrated. The resulting validated model is expected to be useful in design optimization for acoustic applications. § INTRODUCTION Acoustic liners are a type of noise absorber commonly used in aircraft engines. Relatively simple acoustic liners consist of a perforate face sheet (FS) backed by honeycomb cells and terminated with a rigid backing. These single degree of freedom (SDOF) devices have an effective noise suppression bandwidth of approximately one octave <cit.>. Absorption can be increased to approximately two octaves, at the expense of added size and weight, by increasing the honeycomb depth and adding a second perforate layer, creating what is known as a two degree of freedom (2DOF) liner <cit.>. More complex liners have been formulated to further increase absorption bandwidth. For example, active liners have been created that can change their physical dimensions, such as modifying the cavity volume <cit.> or changing the facesheet hole area <cit.>. These types of liners are able to change their characteristic resonance frequency. Another well-known type of adaptive liner is the bias flow liner. In this design, flow enters from the back of the cavity and exits through the FS, resulting in a wide, tunable sound absorption range as a function of the variable flow rate <cit.>. However, active designs that require moving parts or bias flow can be heavy, complex, and require energy. With the advancement of smart materials, researchers have devised creative ways to simplify adaptive liner technology while still obtaining broadband absorption. For example, changes in liner geometry have been performed through the use of shape memory materials. The cavity volume was varied using shape memory polymers in Hermiller et al. <cit.>, with a resultant 500 - 600 Hz modification of the resonance frequency. Kreitzman et al. <cit.> and Dodge et al. <cit.> incorporated a shape memory alloy wire into a multilayered FS, changing the effective hole diameter of the FS and modifying the resonance frequency by 350 Hz. Liu et al. <cit.> replaced the rigid backing of a simplified acoustic liner (a Helmholtz resonator) with a piezoceramic diaphragm coupled to a passive electrical shunt network, allowing a tunable absorption range. Although compact and simple, a major disadvantage is that the piezoceramic disc is stiff relative to the cavity. This provides poor coupling, which limits the tunable range of the device. Their results motivated a softer active material that would better couple with the acoustics of the cavity.
An example of a compliant material with modifiable properties is a dielectric elastomer (DE) - a smart material capable of changing shape when subjected to an electric field <cit.>. This material has been incorporated into acoustic liners by Abbad et al. <cit.> and Dodge et al. <cit.>. Abbad et al. <cit.> replaced the rigid FS with a DE and were able to actuate the DE to modify the compliance of the solid portion of the FS. This shifted the resonance of the liner by 32 Hz and also achieved sound attenuation below 500 Hz. The work in Dodge et al. <cit.> split the cavity of an acoustic liner with a DE and saw a shift in the resonance frequency of approximately 100 Hz, or 11%. Researchers have incorporated the stress reduction of a DE into high fidelity finite element models to simulate the hyperelastic properties of dielectric elastomers <cit.> when subjected to electric fields. Analytical models encompassing the complexities of the hyperelastic material properties of DEs were developed using a Kelvin model <cit.> and a Kelvin-Voigt model <cit.> to capture their response to an electric field in terms of speed and relaxation <cit.>. Others have developed a lumped parameter model for strip-shaped dielectric elastomer membrane transducers <cit.>. Further simplified models have incorporated variations of the stress expression, Eq. <ref>, into dynamical models to determine how the resonance frequency of a DE is affected by voltage <cit.>. These models are concerned with the mechanical response of the DE when subjected to a variable voltage loading and how its viscoelastic properties affect the corresponding time response in terms of speed and relaxation. The current paper is instead focused on developing an electroacoustic lumped element model (LEM) of a DE membrane subjected to a static voltage and the time-harmonic, uniform pressure loading experienced in acoustic applications. In an effort to adjust the in-plane stress and thereby tune the stiffness of the DE membrane, this paper develops an electroacoustic lumped element model for a uniformly, biaxially tensioned DE membrane subject to constant voltage loading for acoustic applications. The resulting model is experimentally validated and thus enables the design of a DE membrane in an acoustic liner application. The paper is organized as follows. In Section <ref>, the LEM parameters and fundamental frequency of a DE membrane are derived based on its quasi-static response to a uniform pressure loading. The predicted response is a function of prestretch and applied pressure. Section <ref> describes the fabrication procedure of the DE membrane sample as well as the experimental setup using an acoustic impedance tube and laser Doppler vibrometer. The stress dependence on voltage (Eq. <ref>) is then substituted into the resonance expression and compared to experimental results in Section <ref> to validate the DE membrane model. Finally, Section <ref> provides conclusions and future work on this topic. § LUMPED ELEMENT MODEL Dielectric elastomers are polymer films with a thickness and bending stiffness dependent on the material. Generally, a DE film can be thinner than 100 microns, especially if prestretched, as is the case here. If the material is very thin and cannot support a bending moment, it can be modeled as a tensioned membrane. Therefore, the problem of interest is the 2-D damped wave equation with the configuration illustrated in Figure <ref>.
The governing equation can be written as <cit.> c^2 ∇^2 w - ∂^2w/∂ t^2 - R/ρ_a∂ w/∂ t = q/ρ_a, where c is the wave speed of the membrane given by c^2 = σ/ρ, the stress (σ) is given by Eq. <ref>, ρ is the membrane density, ρ_a=ρ h is the areal density, w is the membrane displacement, R is damping, and q is the applied pressure. The initial stress in Eq. <ref> can be expressed by the Yeoh model <cit.> as σ_0(λ) = 2 ( λ^2 - 1/λ^4) ∑_i=1^3 iC_i0(I_1 - 3 )^i-1, where the first strain invariant is I_1 = 2λ^2 + 1/λ^4, and the constants (C_i0) for Elastosil Film 2030, the material used here, are shown in Table <ref>. The pinned boundary conditions are w(0,y,t) = w(a,y,t) = w(x,0,t) = w(x,b,t) = 0, where a and b are the rectangular membrane side lengths as shown in Figure <ref>. Assuming a uniform time-harmonic pressure loading, q_0e^jΩ t (j=√(-1)), and using separation of variables, the modal solution is of the form w(x̂,ŷ,t̂) = ∑_m=1^∞∑_n=1^∞sin(mπ k x̂)sin(nπŷ) T_mn(t̂), where x̂ = x/b, ŷ = y/b, t̂=ω_11 t, k=b/a is the membrane aspect ratio (1 for a square membrane), q_0 is a uniform pressure loading, and T_mn is the time dependent solution. T_mn may be transformed into a dimensionless displacement given by T̂ = T_mnσ h /q_0b^2. Substituting Eq. <ref> into Eq. <ref> and simplifying yields the dimensionless differential equation T̈̂̈ + 2ζ_mnω̂_mnṪ̂̇ + ω̂_mn^2T̂ = q̂_mne^jΩ̂t̂, m = n = 1,3,5,… The damping ratio (ζ_mn), normalized resonance frequency (ω̂_mn), and normalized amplitude of the forcing function (q̂_mn) are ζ_mn = ζ_11√(k^2+1)/√(m^2k^2+n^2), ω̂_mn = ω_mn/ω_11 = √(m^2k^2+n^2)/√(k^2+1), and q̂_mn = 16/mnπ^4[k^2+1]. Here, ζ_11 is the damping ratio for the fundamental mode, ω_mn is the radian resonance frequency that is dependent on mode (m,n), ω_11 is the fundamental resonance frequency, and Ω̂ = Ω/ω_11. The governing equation is an SDOF system such that the modal resonance frequencies are f_mn = ω_mn/2π = c/2 b√((m k)^2+n^2). Since c^2 = σ/ρ, the fundamental resonance frequency, m=n=1, for a square membrane (k=1) is thus f_11=1/b√(2)√(σ/ρ)≈0.707/b√(σ/ρ). For a uniform pressure loading, q_0, the dimensionless static deflection is δ̂(x̂,ŷ) = 16/π^4∑_m=1^∞∑_n=1^∞sin(mπ k x̂) sin(n πŷ) /mn[(mk)^2+n^2], m = n = 1,3,5,… where δ̂ = δ (σ h /q_0b^2) is the dimensionless deflection. §.§ Acoustic Impedance The static displacement solution, Eq. <ref>, can be used to determine the acoustic impedance of the DE in terms of lumped parameters as described in Merhaut <cit.> and Beranek et al. <cit.>. The lumped acoustic compliance can be calculated by relating it to the ratio of volume displacement to the applied uniform pressure load with no applied voltage C_aM = ΔVol/q_0, where the subscript a refers to acoustic, M refers to membrane, and the variable ΔVol is given by ΔVol = q_0b^4/σ h∫_0^1 ∫_0^1/kδ̂ dx̂dŷ. Substituting the volume displacement expression into Eq. <ref>, evaluating the summation for δ̂ (Eq. <ref>), and assuming a square membrane (k=1) yields C_aM = 0.0351 b^4/σ h. The lumped acoustic mass can be found by equating the distributed kinetic energy to that of an acoustic mass, resulting in M_aM = ρ h b^2/(Δ Vol)^2∫_0^1∫_0^1/k(q_0 b^2/σ h)^2 δ̂^2 dx̂dŷ. Evaluating the integral and simplifying yields the acoustic lumped mass M_aM = 1.3785 ρ h/b^2. The lumped resistance for the fundamental frequency (m,n)=(1,1) is given by R_aM = 2ζ_11√(M_aM/C_aM). The final acoustic impedance expression is given by Z_aM = sM_aM + 1/sC_aM + R_aM, where s = j ω and ω is the radian frequency. 
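As a compact summary of these lumped parameters, the short Python sketch below evaluates C_aM, M_aM, R_aM, and Z_aM for a square membrane from the in-plane stress, density, thickness, side length, and fundamental damping ratio (SI units assumed; the function and variable names are ours for illustration).

import numpy as np

def lumped_membrane_impedance(sigma, rho, h, b, zeta11, f):
    """Lumped acoustic elements and impedance Z_aM of a square tensioned membrane.

    sigma : in-plane stress [Pa], rho : density [kg/m^3], h : thickness [m],
    b : side length [m], zeta11 : damping ratio of the (1,1) mode,
    f : frequency [Hz] at which Z_aM is evaluated.
    """
    C_aM = 0.0351 * b**4 / (sigma * h)           # acoustic compliance
    M_aM = 1.3785 * rho * h / b**2               # acoustic mass
    R_aM = 2.0 * zeta11 * np.sqrt(M_aM / C_aM)   # acoustic resistance
    s = 1j * 2.0 * np.pi * f                     # s = j*omega
    Z_aM = s * M_aM + 1.0 / (s * C_aM) + R_aM
    return C_aM, M_aM, R_aM, Z_aM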
Finally, the resonance frequency in the lumped approximation is given by f_res = 1/2 π√(M_amC_aM)=0.7235/b√(σ/ρ), which differs from the exact value given in Eq. <ref> by 2.3%. §.§ Acoustic Radiation Mass The physical parameters derived thus far are for a membrane vibrating in a vacuum, while the sample is actually vibrating in air. The air surrounding the membrane exerts a pressure force on it that can be considered a complex radiation impedance, which reduces to an acoustic mass at low values of ν b, where ν=ω/c is the acoustic wavenumber. This mass must be included in Eq. <ref>. We make the standard assumption here that the radiation mass of a membrane is approximately the same as an infinite baffled piston. The normalized specific acoustic impedance of a rectangular piston at low values of ν b is given in Mellow et al. <cit.>. Using this approach, a lumped acoustic radiation mass is given by M_aP = 0.946 b ρ/2S, where S=b^2 is the area of the square membrane. Note that this is for the case of a square piston in an infinite baffle. However, the tested sample in the current study acts as a recessed square piston, where air in the recess is also moving in unison with the piston at low frequencies. The mechanical mass in the recess is m_rec = ρ S t_rec, where t_rec is the thickness of the recess. This can be converted to acoustic mass via division by S^2. Reorganizing to maintain the same form as Eq. <ref> yields M_aRec = t_rec/b/2 b ρ/2S. The total acoustic radiation mass is therefore M_aRad = M_aP + M_aRec = ( 0.946 + t_rec/b/2) bρ/2S. Substituting the recess depth (6.86 mm) and the sample side length (see Table <ref> in the following section) into the parenthesis simplifies the expression to M_aRad = 1.486 bρ/2S. §.§ Voltage Effect The physical mechanism of an actuated DE is that of a material being squeezed when an electric field is applied across a thin tensioned membrane. This process is analogous to an applied pressure and results in an areal expansion when the material is unconstrained. This results in a reduction of the in-plane stress. An alternative approach is to constrain the DE at its peripheral boundaries as shown in Figure <ref>; this approach is adopted in this paper. The DE in its initial (reference) state is shown in Figure <ref>. The DE can then be stretched in each direction by some desired prestretch λ_i = l_i/L_i, where L and l are the before and after stretch dimensions, respectively, and subscript i is the direction index (Figure <ref>). The DE is then pinned at its boundaries by a rigid frame as shown in Figure <ref>. Finally, a thin, compliant electrode is applied on either side of the DE membrane (Figure <ref>) and an electric field is applied across its thickness. The electrostatic pressure imposed on the DE is <cit.> P = ϵ_0ϵ_r (V/h)^2, where ϵ_0 is the permittivity of free space, ϵ_r is the relative permittivity of the material, V is the applied voltage across the membrane, and h is the thickness of the membrane. Equation <ref> represents the stress reduction of the initial prestretch imposed on the DE <cit.>. For the pinned configuration shown in Figure <ref>, the in-plane stress reduction due to the application of an electric field for an incompressible material is <cit.> σ = σ_0 - ϵ_0 ϵ_r (λ^2/h_0V )^2, where σ_0 is the initial prestress, λ = λ_1 = λ_2 for biaxial prestretch, and h_0 is the initial thickness of the unstretched DE membrane. The effect of voltage can be incorporated into the membrane impedance. 
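As a small illustration of this voltage dependence, the sketch below evaluates the electrostatic (Maxwell) pressure and the reduced in-plane stress. The prestress σ_0 and relative permittivity ϵ_r are taken as inputs — they follow from the Yeoh model and the material data discussed in the text — and the function names are ours.

EPS0 = 8.854e-12  # permittivity of free space [F/m]

def electrostatic_pressure(V, h, eps_r):
    """Maxwell pressure on the DE for voltage V across the current thickness h."""
    return eps_r * EPS0 * (V / h) ** 2

def in_plane_stress(sigma0, V, lam, h0, eps_r):
    """Pinned, biaxially prestretched DE: sigma = sigma0 - eps0*eps_r*(lam^2*V/h0)^2."""
    return sigma0 - eps_r * EPS0 * (lam**2 * V / h0) ** 2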
Neglecting the mass of the electrodes, the applied voltage only modifies the membrane stress. Hence, only the lumped compliance term needs to be updated. This is accomplished by noting that h=h_0/λ^2 and substituting Eq. <ref> into the lumped compliance expression, Eq. <ref>, C_aM = 0.0351 b^4 λ^2/h_0/σ_0 - ϵ_r ϵ_0 (λ^2/h_0V )^2. The resonance frequency expression can be updated accordingly. The fundamental resonance frequency of the membrane in Eq. <ref>, can be rewritten as f = 1/2π√(1/C_aM(M_aM+M_aRad)). Substituting in Eq. <ref> yields f = 1/2π√(σ_0 - ϵ_rϵ_0 (λ^2/h_0V )^2/0.0351 b^4 λ^2/h_0 (M_aM+M_aRad) ), and normalizing by the unactuated (V=0) resonance expression yields f(V)/f(V=0) = √(σ_0 - ϵ_rϵ_0 (λ^2/h_0V )^2/σ_0). The relative permittivity (ϵ_r) is a function of both material properties and prestretch. For the material used here, Elastosil Film 2030, the relative permittivity was given in Hodgins et al. <cit.> as a function of the prestretch ϵ_r = -0.28λ + 2.76 This allows us to compare the resonance frequency variation of DE membranes as a function of both prestretch and voltage. § EXPERIMENTAL SETUP §.§ Sample Preparation The sample is a 25.4 mm square DE membrane. The material used is Elastosil Film 2030, which has an initial-unstretched thickness, h_0, of 100 microns. Before the DE is sandwiched between the two square frames, it is prestretched by the same factor, λ, in both directions. Figure <ref> shows the steps: 1) the DE membrane is laid on a flat surface; 2) an L × L square is sketched onto the surface before 3) clamping the DE on its sides and stretching. The stretching is done in two parts: the edges along direction x are stretched first followed by direction y to minimize tearing. While stretching, the original marker lines drawn on the sample (the L × L square) are stretched as well. This results in thicker reference lines, which makes it difficult to accurately estimate the stretched dimensions. This can be seen in Figure <ref> as the exaggerated difference in the thickness of the dashed lines. The maximum uncertainty of the prestretch is approximately ±0.15, caused by enlargement of the marker lines. Finally, when the nominal desired stretch is reached, the electrode is applied if needed. The DE is then adhered to the rigid square frame using Very High Bond (VHB) tape. Two types of samples are made, one with electrodes and one without. The type without electrodes has a dot(s) added to its surface using a thin silver sharpie to reflect light back to the laser vibrometer. The other sample is coated with a thin layer of carbon grease electrode. The coating is applied using a cotton swab and excess is removed with a soft spatula, resulting in a very thin layer. Carbon grease is commonly used by researchers <cit.> and is readily available for purchase. Table <ref> provides dimensions of the sample and density of both the membrane and electrodes. Note that the density of the DE membrane and carbon-grease electrode are very close. Finally, copper tape is used to connect the edge of the grease electrodes to the wires of a high voltage amplifier (Trek Model 10/40A). §.§ Acoustic Tube and Vibrometer Setup A schematic of the experimental setup is shown in Figure <ref> and depicts the Two Microphone Method (TMM) to determine the acoustic impedance of the sample <cit.>. The Brüel and Kjær (B&K) wide-spacing large circular impedance tube type 4206 with a diameter of 100 mm is used. The length between the catenoidal horn exit and the test specimen is 875 mm. 
Microphone 1 is closest to the speaker, and the second microphone is 100 mm away from the first microphone and 575 mm from the test specimen. The signal sent to the speaker is either a single-frequency sinusoid or pseudo-random periodic noise covering an octave band, where the SPL at the sample face is held constant for each frequency. Normal velocity measurements of the sample are taken along the center line every 1.25 mm using a Polytec scanning laser vibrometer, type PSV 300. A laser Doppler vibrometer measures the frequency shift of a laser by the Doppler effect, which can be used to find the velocity at a point. The measurement is synced with the pseudo-random periodic noise signal sent to the speaker using a trigger signal. The standoff distance between the DE membrane sample and the vibrometer is between 21.6 cm and 55.3 cm. The data is sampled at 16384 Hz and analyzed using a DFT with a frequency resolution of 1 Hz. Forty spectral averages are taken to compute the frequency response between the speaker excitation and the velocity response at multiple points. § RESULTS AND DISCUSSION The LEM prediction of the displacement response of the sample is compared to experimental measurements in the following section. The first step is to determine the prestretch applied to the sample. This is then substituted in the model of the static response (Eq. <ref>) and resonance frequency (Eq. <ref>) of the membrane, from which the effect of the excitation voltage can be determined. §.§ Membrane Prestretch Although an approximate value of the prestretch is known from the sketch drawn on the membrane surface, the actual prestretch falls within a range of values, as explained in Section <ref>. The precise prestretch value is therefore estimated by minimizing the differences between the computed static displacement and resonance frequency and the experimentally observed values. Both quantities are directly related to the in-plane stress (Eqs. <ref> and <ref>, respectively), which is a function of prestretch via Eq. <ref>. An example computation of the displacement and resonance frequency is shown here. The frequency response of the membrane center for an 80 dB pressure loading is plotted in Fig. <ref>. The static or dc response is estimated by measuring the center displacement at low frequencies where the response asymptotes to a constant. A zoomed view of the frequency response below 120 Hz, where the response asymptotes, is shown in Fig. <ref>. The mean of the response in this frequency range is approximately 51 nm with a standard deviation of 4 nm. Comparison to the static solution theory (Eq. <ref>) is made at a selected frequency in this range, 100 Hz for the analysis here. Figure <ref> also shows three peaks in the response. The first peak, and the global maximum, is the fundamental frequency used for comparison to the LEM resonant frequency, Eq. <ref>. The two smaller peaks are higher-order modes where the center response reaches a relative maximum. The center response is measured for two prestretches at 100 Hz for eight SPLs and is plotted in Fig. <ref>. Each of the eight responses is compared to the static displacement solution, and the prestretch is estimated by setting Eq. <ref> equal to the measured response, resulting in eight prestretch values. The average prestretch is computed and shown in the second column of Table <ref>. For a visual prestretch of 1.1 and 1.3, the approximate prestretch using the static displacement solution is 1.17 and 1.46, respectively.
The frequency response is also measured and the resonant frequency identified as 390 Hz and 530 Hz for a visual prestretch of 1.1 and 1.3, respectively. The frequency value is accurate to ± 0.5 Hz for the 1 Hz frequency resolution of the measurement. The prestretch is numerically determined by setting the measured resonance equal to Eq. <ref>, noting the negligible difference between the natural and resonance frequency due to very low damping. For the visual prestretches in Table <ref>, the approximate prestretch corresponding to the measured resonance frequency is 1.13 and 1.29, respectively. These values are lower than those from the static solution and closer to the visual prestretch values. To minimize the overall uncertainty, the prestretch is varied over a range of values and the relative-difference in both the static deflection and the measured resonance frequency is calculated. The total difference of the two is taken as the square root of the sum of the squared differences and the local minimum is found. The results are listed in the last column of Table <ref> and happen to be the averaged values between the two methods because the slope near the intercept is linear. The updated prestretch values are approximately 1.15 and 1.38 for the visual prestretches of 1.1 and 1.3, respectively. The updated prestretch values will be called “prestretch” for the remainder of the paper. The static displacement and resonant frequency predictions will now be updated with the prestretch values for comparison to the experimental results. The experimentally measured center response at 100 Hz is plotted versus incident SPL together with the model (Eq. <ref>) in Fig. <ref>. The experimental values are plotted with random uncertainty estimates with a 95% confidence level calculated based on the procedures in Bendat et al. <cit.>. The model shape aligns with the measured response and falls within the estimated uncertainty, indicating an accurate prediction of the prestretch and static displacement amplitude. The membrane response remains linear, tracking the modeled response up to at least a 132 dB pressure loading, the maximum achievable SPL. Additionally, the resonant frequency is calculated based on the estimated prestretch value with the results shown in Table <ref>. The resonance is over predicted by a maximum of 9.4% for the cases tested, which is deemed reasonable for a lumped element model. §.§ Model Validation The center line displacement of the DE membrane is measured for validation of the static mode shape. The laser vibrometer measures the response across the surface at a sinusoidal forcing of 100 Hz and an incident pressure of 80 dB. Figure <ref> plots the normalized displacement response and the static model prediction given by Eq. <ref>. The experimental values are plotted with random uncertainty estimates with a 95% confidence level calculated based on the procedures in Bendat et al. <cit.>. Uncertainty in the x-axis values is determined via consideration of mapping the scanning points from the camera using the estimated control target locations. Considering the uncertainties in the measurements, the mode shape agrees well with the analytical model predictions. This validates the mode shape predicted by the model while the amplitude itself is validated in the previous section, as shown in Fig. <ref>. §.§ Membrane Actuation §.§.§ Added Mass due to Electrodes To actuate the DE membrane, electrodes must be added to its surface. 
This affects the membrane response since the membrane is very thin (less than 100 μm thick) and has approximately the same density as the electrodes used. To understand how the electrode loads the membrane, the measured frequency response of the membrane with and without the carbon grease electrodes is plotted in Figure <ref>. It was assumed that the electrodes only modified the mass and not the stiffness of the membrane. Starting with Eq. <ref>, we thus assume that only the acoustic mass is affected. The resonance frequencies of the membrane with and without electrodes can be extracted from Figure <ref> as 444 Hz and 532 Hz, respectively, and are documented in Table <ref>. The resonance frequency expression (Eq. <ref>) can be updated to include the added electrode mass, M_aE, f_exp,E = 1/(2π) √(1/(C_aM(M_aM + M_aRad + M_aE))). The ratio of the added acoustic mass to that of the membrane as a function of the observed fractional change, df/f, in the resonance frequency can be expressed as M_aE/(M_aM+M_aRad) = 1/(1-df/f)^2 - 1. The approximate value of the lumped mass due to the electrodes is 78.2 kg/m^4 using Eq. <ref> and is shown in Table <ref>. As a check, the mechanical mass (in kg) of the electrodes can be estimated by multiplying the added acoustic mass by the area squared (b^4); dividing by the electrode density and the membrane area then gives the total electrode volume, from which the thickness of the electrode layer on either side of the DE membrane can be estimated. Substituting the values from Tables <ref> and <ref>, the thickness of the electrode layer is approximately 25 μm. This value is consistent with the thickness of the carbon-grease electrode estimated via visual inspection. §.§.§ Actuation The normalized resonance frequency for both the experiment and the model (Eq. <ref>) versus voltage is shown in Figure <ref>. The resonant frequency is normalized by the no-voltage case, so the value at zero voltage is unity. The resonance decreases quadratically with voltage, as expected based on Eq. <ref>. There is good agreement between experiments and theory over a majority of the range. The maximum difference for λ = 1.15 and λ = 1.38 is 2.3% at 5 kV. This is the same difference as that found between the LEM resonance frequency (Eq. <ref>) and the exact resonance frequency (Eq. <ref>). This indicates that the membrane model of the DE is appropriate for the material used here. § CONCLUSIONS An LEM of a square dielectric elastomer membrane was derived. The damped wave equation in Cartesian coordinates was used to derive the static deflection of a square membrane. Using the relationship between pressure loading and volume displacement as a function of static deflection, the LEM was created. The effect of a dielectric elastomer was added to the model via an actuation term expressed as voltage. Voltage was incorporated into the stress term, which for the LEM affects the membrane's acoustic compliance (i.e., 1/stiffness). The resonant frequency expression was updated accordingly and reduces quadratically with voltage. Material properties, dimensions, prestretch, and voltage can then be chosen to achieve a desired impedance or resonance of the DE membrane. A scanning laser vibrometer was used to validate the static response of the membrane. The membrane was subjected to various sound pressure levels and behaved linearly over the entire range tested. This validates the model up to 132 dB pressure loading, the maximum achievable pressure in the experimental setup.
In order to actuate the DE membrane, electrodes were added to its surface. Therefore, the impedance was modified to include the effects of electrode mass. The model and experimental resonance frequency had good agreement with a maximum difference of 2.3% at 5 kV of applied voltage for the two prestretches tested. This validates the use of the membrane approximation for the DE used here. With the validation of the model, a rectangular DE membrane may now be incorporated as a lumped element of a distributed electroacoustic system. This provides analytic scaling and rapid parametric studies for the initial phase of design. We are interested in using the LEM to predict the impedance of acoustic liners with an active embedded membrane. Further development will incorporate an optimization scheme to maximize sound absorption or impedance tuning of the liner, combined with experimental demonstrations of the optimized design(s). § REFERENCES 10 hubbard1995Noise R. E. Motsinger and R. E. Kraft. Aeroacoustics of Flight Vehicles: Theory and Practice, Volume 2: Noise Control, chapter 14. Acoustical Society of America, New York, 2 edition, 1995. Neise_1980 W. Neise and G.H. Koopmann. Reduction of centrifugal fan noise by use of resonators. Journal of Sound and Vibration, 73(2):297–308, November 1980. mcdonald1992method A. M. Mcdonald, S. M. Hutchins, I. Stothers, and P. J. Crowther. Method and apparatus for attenuating acoustic vibrations in a medium, November 19 1997. EP Patent App. 92,905,463 A. MATSUHISA_1992 H. Matsuhisa, B. Ren, and S. Sato. Semiactive control of duct noise by a volume-variable resonator. JSME international journal. Ser. 3, Vibration, control engineering, engineering for industry, 35(2):223–228, 1992. Gaeta_Jr__1998 R. Gaeta Jr. and K. Ahuja. A tunable acoustic liner. In 4th AIAA/CEAS Aeroacoustics Conference. American Institute of Aeronautics and Astronautics, June 1998. AIAA Paper 1998-2298. Nagaya_2001 K. Nagaya, Y. Hano, and A. Suda. Silencer consisting of two-stage helmholtz resonator with auto-tuning control. The Journal of the Acoustical Society of America, 110(1):289–295, July 2001. dean1975duct P. D. Dean and B. J. Tester. Duct wall impedance control as an advanced concept for acoustic impression. Technical report, November 1975. No. NASA-CR-134998. Jing_1999 X. Jing and X. Sun. Experimental investigations of perforated liners with bias flow. The Journal of the Acoustical Society of America, 106(5):2436–2441, November 1999. Hermiller_2013 J. M. Hermiller and M. R. Maddux. Morphing resonators for adaptive noise reduction. The Journal of the Acoustical Society of America, 134(5):3963, 2013. Kreitzman_2020 J. R. Kreitzman, F. Calkins, D. Nicholson, A. Lafranchi, L. Cattafesta, and C. Dodge. Active acoustic liners enabled by shape memory alloy technology. In AIAA AVIATION 2020 FORUM. American Institute of Aeronautics and Astronautics, June 2020. AIAA Paper 2020-2617. Dodge_2022 C. Dodge, B. M. Howerton, and M. G. Jones. An acoustic liner with a multilayered active facesheet. In 28th AIAA/CEAS Aeroacoustics 2022 Conference. American Institute of Aeronautics and Astronautics, June 2022. AIAA Paper 2022-2902. Liu_2007 F. Liu, S. Horowitz, T. Nishida, L. Cattafesta, and M. Sheplak. A multiple degree of freedom electromechanical helmholtz resonator. The Journal of the Acoustical Society of America, 122(1):291–301, July 2007. Pelrine_1998 R. E. Pelrine, R. D. Kornbluh, and J. P. Joseph. Electrostriction of polymer dielectrics with compliant electrodes as a means of actuation. 
Sensors and Actuators A: Physical, 64(1):77–85, January 1998. Pelrine_2000 R. Pelrine, R. Kornbluh, Q. Pei, and J. Joseph. High-speed electrically actuated elastomers with strain greater than 100%. Science, 287(5454):836–839, February 2000. Kornbluh R. Kornbluh, R. Pelrine, J. Eckerle, and J. Joseph. Electrostrictive polymer artificial muscle actuators. In Proceedings. 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146). IEEE. Kornbluh_1999 R. D. Kornbluh, R. Pelrine, J. Joseph, R. Heydt, Q. Pei, and S. Chiba. High-field electrostriction of elastomeric polymer dielectrics for actuation. In Y. Bar-Cohen, editor, SPIE Proceedings. SPIE, May 1999. Abbad_2018 A. Abbad, K. Rabenorosoa, M. Ouisse, and N. Atalla. Adaptive helmholtz resonator based on electroactive polymers: modeling, characterization, and control. Smart Materials and Structures, 27(10):105029, September 2018. Dodge_2021 C. Dodge, Y. Zhang, L. N. Cattafesta, B. M. Howerton, and J. R. Kreitzman. A dielectric elastomer acoustic liner. In AIAA AVIATION 2021 FORUM. American Institute of Aeronautics and Astronautics, July 2021. AIAA Paper 2021-2244. wissler2007modeling T. Wissler. Modeling dielectric elastomer actuators. PhD thesis, ETH Zurich, 2007. Park_2012 H. S. Park, Z. Suo, J. Zhou, and P. A. Klein. A dynamic finite element method for inhomogeneous deformation and electromechanical instability of dielectric elastomer transducers. International Journal of Solids and Structures, 49(15-16):2187–2194, August 2012. Sarban_2012 R. Sarban, B. Lassen, and M. Willatzen. Dynamic electromechanical modeling of dielectric elastomer actuators with metallic electrodes. IEEE/ASME Transactions on Mechatronics, 17(5):960–967, October 2012. Hoffstadt_2015 T. Hoffstadt and J. Maas. Analytical modeling and optimization of DEAP-based multilayer stack-transducers. Smart Materials and Structures, 24(9):094001, August 2015. Rizzello_2020 G. Rizzello, P. Loew, L. Agostini, M. Fontana, and S. Seelecke. A lumped parameter model for strip-shaped dielectric elastomer membrane transducers with arbitrary aspect ratio. Smart Materials and Structures, 29(11):115030, October 2020. Kiser_2016 J. Kiser, M. Manning, D. Adler, and K. Breuer. A reduced order model for dielectric elastomer actuators over a range of frequencies and prestrains. Applied Physics Letters, 109(13):133506, September 2016. Dubois_2008 P. Dubois, S. Rosset, M. Niklaus, M. Dadras, and H. Shea. Voltage control of the resonance frequency of dielectric electroactive polymer (DEAP) membranes. Journal of Microelectromechanical Systems, 17(5):1072–1081, October 2008. leissa2011vibrations A. W. Leissa and M. S. Qatu. Vibrations of Continuous Systems., chapter 5. McGraw-Hill, 2011. holzapfel2000nonlinear A. G. Holzapfel. Nonlinear solid mechanics II, chapter 6. John Wiley & Sons, Inc., 2000. hoffstadt2018characterization T. Hoffstadt, A. Koellnberger, and J. Maas. Characterization of enhanced silicone materials for dielectric elastomer transducers. In ACTUATOR 2018; 16th International Conference on New Actuators, pages 1–4. VDE, 2018. merhaut1981theory Josef Merhaut. Theory of electroacoustics, chapter 2. McGraw-Hill College, 1981. beranek2012acoustics L. L. Beranek and T. Mellow. Acoustics: sound fields and transducers, chapter 3. Academic Press, 2012. Mellow_2016 T. Mellow and L. Kärkkäinen. Expansions for the radiation impedance of a rectangular piston in an infinite baffle. The Journal of the Acoustical Society of America, 140(4):2867–2875, October 2016. Suo_2010 Z. Suo. 
Theory of dielectric elastomers. Acta Mechanica Solida Sinica, 23(6):549–578, December 2010. Lu_2012 T. Lu, J. Huang, C. Jordi, G. Kovacs, R. Huang, D. R. Clarke, and Z. Suo. Dielectric elastomer actuators under equal-biaxial forces, uniaxial forces, and uniaxial constraint of stiff fibers. Soft Matter, 8(22):6167, 2012. Hodgins_2016 M. Hodgins and S. Seelecke. Systematic experimental study of pure shear type dielectric elastomer membranes with different electrode and film thicknesses. Smart Materials and Structures, 25(9):095001, August 2016. astm1998standard ASTM E1050-98. Standard test method for impedance and absorption of acoustic materials using a tube two microphones and a digital frequency analysis system, 1998. bendat2011random J. S. Bendat and A. G. Piersol. Random data: analysis and measurement procedures, chapter 8. John Wiley & Sons, 2011.
http://arxiv.org/abs/2306.11032v1
20230619155732
$J/ψ$ production at NLO with a scale-dependent color-evaporation model
[ "B. Guiot", "A. Radic", "I. Schmidt", "K. Werner" ]
hep-ph
[ "hep-ph" ]
1]B. [email protected] 1]A. Radic 1]I. Schmidt 2]K. Werner [1]Departamento de Física, Universidad Técnica Federico Santa María; Casilla 110-V, Valparaiso, Chile [2]SUBATECH, University of Nantes - IN2P3/CNRS - IMT Atlantique, Nantes, France J/ψ production at NLO with a scale-dependent color-evaporation model [ ==================================================================== Nearly ten years ago, Kang, Ma, Qiu, and Sterman derived an evolution equation for a QQ̅ pair fragmenting into a quarkonium. In this study we explore the consequence of this evolution for the color-evaporation model, focusing on J/ψ transverse-momentum (p_t) distributions in proton-proton collisions. We show that, as expected, it softens the spectrum obtained by fixed-order calculations. While next-to-leading-order calculations strongly overestimate data at large p_t, ours, including the (approximate) QQ̅ evolution and next-to-leading-order cross sections computed with Madgraph, are in good agreement with experiments. Since our study with the color-evaporation model shows a significant effect of the QQ̅ evolution at large p_t, a determination of scale-dependent long-distance-matrix elements of non-relativistic QCD could be necessary. To describe data at small and intermediate p_t, we use the k_t-factorization approach, and we argue that quarkonia data could help constrain unintegrated parton densities. § INTRODUCTION Quarkonia production is important in Quantum Chromodynamics (QCD) because it provides insights into the fundamental dynamics of QCD, the structure of the QCD vacuum, the effects of heavy quark masses, the properties of the quark-gluon plasma, and allows for experimental tests of theoretical predictions. Models for quarkonia production, generally based on a factorization hypothesis, differ by their treatment of hadronization. A detailed comparison of the color-evaporation model (CEM) with the NRQCD factorization approach, and, in particular, of their non-perturbative functions, or long-distance-matrix elements (LDME), can be found in <cit.>. With its single LDME, related to the hadronization factor F_J/ψ, it is natural to think that the CEM is not fully consistent with QCD. However, its simplicity and reasonably good description of numerous observables make it a useful tool for the study of quarkonia production. An issue of the CEM is the too-hard spectrum due to next-leading order (NLO) contributions scaling as 1/p_t^4 <cit.>, where p_t is the J/ψ transverse momentum. An example of such contribution is shown in Fig. <ref>, along with the result of fixed-order NLO calculations obtained with madgraph5_aMC@NLO (MG5) <cit.> in Fig. <ref>. In this study, we demonstrate that the evolution of the heavy-quark pair derived in <cit.>, and shortly presented in Sec. <ref>, brings CEM calculations in agreement with data for the p_t-distribution of the J/ψ particle. This evolution, or scale dependence of the LDMEs, is a central prediction of perturbative QCD. Our first goal is not to support the CEM, but to underline the importance of the heavy-quark pair evolution. Of course, having the accuracy of this model improved is also of interest. The present study suggests that evolution could significantly affect the determination of NRQCD's LMDEs. We could reasonably hope that the extraction of scale-dependent LDMEs will alleviate the tensions between different LDME sets and improve the NRQCD description of experimental data. 
To describe data on the whole p_t range, we use a transverse-momentum dependent formalism (see Sec. <ref>). Our second goal is to use quarkonium data to put constraints on transverse-momentum-dependent PDFs. Our main results are presented in Sec. <ref>, and we discuss the universality of the non-perturbative parameter, F_J/ψ, in Sec. <ref>. Finally, we show similar results obtained with the event generator EPOS4 in Sec. <ref>. The common features of EPOS4 with our calculations are the timelike cascade, giving the (approximate) scale evolution of the QQ pair, and its hadronization based on the CEM. The similar results obtained with two different formalisms demonstrate the robustness of our main conclusion. § EVOLUTION OF THE HEAVY-QUARK PAIR Two groups published a series of papers discussing the scale evolution of quarkonia fragmentation functions. One of them worked with perturbative QCD <cit.>, while the other used the soft-collinear-effective theory <cit.>. Contributions to quarkonia production are first separated into leading power, scaling as 1/p_t^4, and next-to-leading power (NLP), scaling as 1/p_t^6, and then factorized. It is the factorization of NLP contribution that involves the convolution of a short-distance partonic cross section for the production of a QQ̅ pair with the modified double-parton fragmentation function of a QQ̅ pair into a quarkonium H. This fragmentation function obeys the following evolution equation ∂/∂lnμ^2 D_[Q Q̅(κ)] → H(z, μ^2)=α_s(μ)/2 π∑_n ∫_z^1 d z^'/z^' P_[Q Q̅(n)] →[Q Q̅(κ)](z/z^') × D_[Q Q̅(n)] → H(z^', μ^2 ), where κ and n denote the possible quantum numbers of the QQ̅ pair, and P_[Q Q̅(n)] →[Q Q̅(κ)] are the modified evolution kernels, see <cit.> for more details. At p_t ≫ m_Q, NLO diagrams with a gluon splitting into a QQ̅ pair (see Fig. <ref>) contribute to both LP and NLP <cit.>. Then, our calculations performed with MG5 include the NLP contributions, essential at small and intermediate p_t, and part of the LP contributions. Note that in the formalism of <cit.>, the LP short-distance cross sections are convoluted with D_f → H, whose evolution equation is ∂/∂lnμ^2 D_f → H(z, μ^2) =α_s(μ)/2 π∑_f^'∫_z^1 d z^'/z^' P_f → f^'(z/z^') D_f^'→ H(z^', μ^2) +α_s^2(μ)/μ^2∑_κ∫_z^1 d z^'/z^' P_f →[Q Q̅(κ)](z/z^') D_[Q Q̅(κ)] → H(z^', μ^2). The LP part of the CEM cross section should be related to the second term in the r.h.s of Eq. (<ref>), which describes the splitting of a single parton f into a QQ̅ pair followed by the scale-dependent fragmentation of the pair into the quarkonia H. Concerning our calculation with EPOS4, we will see that they include all LP contributions. In our study, we choose μ=p_t, and we see that at intermediate and large transverse momentum, the color-octet (CO) and color-singlet (CS) states mix under Eq. (<ref>). The different models for quarkonia production should be applied at μ_0∼ m_Q, and the CEM uses a δ(z-1) as z distribution <cit.>. Naively, we expect the evolution to reduce the differences between the different production mechanisms. Indeed, a QQ̅ pair in a CS state at μ_0 could have been produced in a CO state at μ=p_t. The mixing of quantum states induced by the evolution step Q Q̅(n) → Q Q̅(κ) implies a mixing of the LO behavior 1/p_t^a (a>0) associated to these states. The naive (LO) p_t dependence of the produced QQ̅(n) is further modified by real emissions, partly responsible for the evolution equation, and resulting in energy loss. 
It is in fact this effect which will bring the CEM calculations into agreement with data.[The mixing of states with different quantum numbers κ does not matter, since the CEM uses the same LDME for all κ.] Thanks to the simplicity of the CEM, the absence of evolution is easily observable, since, as demonstrated in Sec. <ref>, it is responsible for the overestimation of data by NLO calculations. This is another reason why we choose to work with the CEM. In the case of NRQCD, taking into account the evolution would certainly require a new determination of the LDMEs. The consequences of this could be modest for inclusive observables, and more visible for exclusive observables. § HEAVY-QUARK PAIR PRODUCTION, EVOLUTION AND HADRONIZATION In order to describe J/ψ production over the whole p_t range, we use a transverse-momentum dependent formalism. A possible choice would be the TMD factorization, proven for quarkonia production in proton-proton (pp) collisions using the soft-collinear and NRQCD effective field theories <cit.>. Another possibility is the k_t factorization approach <cit.>. This formalism has been extensively used for the study of J/ψ production in pp collisions <cit.>. In our case, we used a hybrid formalism, already used for Drell-Yan <cit.>, based on the k_t factorization, with the off-shell cross section replaced by the on-shell cross section. For hadron-hadron collisions, the differential cross section reads dσ(pp→ QQ)/dx_1dx_2d^2p_t(s,x_1,x_2,p_t)=∑_a,b∫_0^k_t,max^2∼ s d^2k_1td^2k_2t F_a/h(x_1,k_1t;μ) × F_b/h(x_2,k_2t;μ)σ̂_ab→ QQ(x_1x_2s,q_t;μ), where q_t=p_t-k_1t-k_2t, with k_1t and k_2t the transverse momenta of the two incoming partons. The variables x_1,2 are the longitudinal momentum fractions carried by the partons, μ is the factorization scale, and s is the usual Mandelstam variable for the proton-proton system. The cross section σ̂_ab→ QQ is computed at NLO with MG5, option -p, which includes all necessary subtraction terms. Option -p means the parton showers are not performed within MG5, but with other tools, in our case, CASCADE3 <cit.> and pythia 6 <cit.>. We worked in a 3-flavor scheme with m_c=1.3 GeV, and used the CT14nlo_NF3 <cit.> PDFs. This choice is imposed by the fact that, in MG5, the variable-flavor-number scheme (VFNS) works with m_c=0 (i.e., it is a Zero-Mass VFNS). In <cit.>, the authors argue that, in practice, it is generally better[A case where this statement does not apply is heavy-quark production at leading order (LO). Here, VFNS PDFs with a 3-flavor scheme cross section strongly underestimate the data. This is because the partonic cross section for the cg→ cg process is numerically large. While it is already included at LO in a VFNS, it appears only at NLO in a 3-flavor scheme.] to use VFNS PDFs, even with cross sections computed in a 3-flavor scheme. We checked that replacing the CT14nlo_NF3 set by CT14nlo gives similar results. With the different PDF sets tested in our study, we observed a negligible or small variation of the final result compared to the scale variation. The transverse-momentum dependence of the initial partons is managed with CASCADE3, and we used the parton-branching (PB) unintegrated PDFs (UPDFs), set 2 <cit.>. These UPDFs have been extracted from HERA data with the on-shell cross section computed with MG5 at NLO. They obey an evolution equation <cit.>, which can be combined with finite-order matrix elements either by matching <cit.>, or merging <cit.>.
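As a rough numerical illustration of the k_t-smearing encoded in the cross-section formula above, the sketch below adds the transverse momenta of the two initial partons to a toy partonic spectrum and histograms the resulting pair p_t. The power-law q_t spectrum, the Gaussian stand-in for the UPDFs, and all parameter values are illustrative assumptions, not the MG5 output or the PB sets used in our calculations.

```python
# Toy illustration of the k_t-smearing in the hybrid k_t-factorization formula:
# the pair transverse momentum is p_t = q_t + k_1t + k_2t, with k_1t, k_2t
# drawn from an unintegrated PDF.  Both the partonic q_t spectrum and the
# Gaussian "UPDF" below are placeholders, not the MG5/PB inputs of this study.
import numpy as np

rng = np.random.default_rng(0)
n_events = 200_000

# Stand-in for the collinear NLO partonic spectrum dsigma/dq_t (arbitrary units):
# a steeply falling power law, roughly 1/q_t^4 at large q_t.
q_t = rng.pareto(3.0, n_events) * 2.0            # GeV
phi_q = rng.uniform(0.0, 2.0 * np.pi, n_events)
q_vec = np.stack([q_t * np.cos(phi_q), q_t * np.sin(phi_q)], axis=1)

def sample_intrinsic_kt(n, width=1.5):
    """Toy 2D Gaussian stand-in for one unintegrated PDF F(x, k_t; mu)."""
    return rng.normal(0.0, width, size=(n, 2))   # GeV

# p_t of the pair after adding the transverse momenta of both initial partons
p_vec = q_vec + sample_intrinsic_kt(n_events) + sample_intrinsic_kt(n_events)
p_t = np.linalg.norm(p_vec, axis=1)

bins = np.linspace(0.0, 20.0, 41)
hist_q, _ = np.histogram(q_t, bins=bins)
hist_p, _ = np.histogram(p_t, bins=bins)
for lo, hi, a, b in zip(bins[:-1], bins[1:], hist_q, hist_p):
    print(f"{lo:5.1f}-{hi:5.1f} GeV   no k_t: {a:7d}   with k_t: {b:7d}")
```

In the full calculation, the LO contribution populates p_t > 0 only through this initial k_t, which is why the smearing matters most at small p_t while the large-p_t tail is essentially unchanged.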
The transverse-momentum distribution of the J/ψ particle has already been studied with the event generator CASCADE using more traditional k_t-factorization calculations, e.g., with LO off-shell matrix elements and the color-singlet model, see, for instance, Ref. <cit.>. The heavy-quark pair evolution is included approximately with the pythia 6 <cit.> timelike cascade. We did not use the implementation of quarkonia production in pythia. We simply run the timelike cascade for all particles produced in the hard scattering, i.e., for Q, Q̅, and a light flavor in the case of NLO processes. This treatment of the QQ̅ evolution takes into account real-gluon-emission diagrams. The main physical mechanism is energy loss by the heavy-quark pair, shifting the fixed-order calculations of Fig. <ref> to the left. The cascade usually starts at μ_c ≃μ≃ p_t, and we chose μ_c=μ for simplicity. A larger p_t corresponds to a longer evolution and a larger shift in p_t, making the theoretical prediction softer. From a more theoretical point of view, we obtain a softer spectrum because the evolution modifies the FO behavior of 1/p_t^4 to <cit.> m_Q^2/p_t^4μ^2. In the last step, when p_QQ^2∼ m_J/ψ^2, we hadronize the QQ̅ pair into a J/ψ using the CEM formula <cit.> dσ/dp_t=F_J/ψ∫_2m_c^2m_D^0dσ^QQ/dm_QQdp_tdm_QQ, with σ^QQ the cross section obtained after showering σ(pp→ QQ) given in Eq. (<ref>). We fixed the parameter F_J/ψ to the value found in Ref. <cit.> at NLO[𝒪(α_s^3) contributions are considered to be LO in <cit.> because the 𝒪(α_s^2) diagrams do not contribute to p_t>0 in collinear factorization. This is not the case when using the k_t-factorization, and 𝒪(α_s^3) contributions are NLO.]: F_J/ψ=0.014. In that sense, our calculations are parameter-free. We did not use the improved CEM <cit.> dσ/d^3p=F_J/ψ∫_m_J/ψ^2m_D^0dσ^QQ/dm_QQdp^3_QQδ^3(p-m_J/ψ/m_QQp_QQ)dm_QQ, because the reduced phase space (m_J/ψ>2m_c) requires more statistics. Several studies using LO off-shell matrix elements and the CEM are also available <cit.>. § J/Ψ PRODUCTION AT SMALL, INTERMEDIATE AND LARGE TRANSVERSE MOMENTUM We start with ATLAS data <cit.> at 8 TeV, already compared to the CEM at NLO (fixed order) in Fig. <ref>, see also Fig. 33 of Ref. <cit.>. While fixed-order calculations fail, we observe in Fig. <ref> a good agreement between our full calculations (mg5+cascade3) and data. The results obtained for the other rapidity ranges, shown at the end of this manuscript, suggest that the disagreement observed in a few bins comes from statistical fluctuations. At large p_t, the main difference between Figs. <ref> and <ref> is the timelike cascade. As expected, including the scale-dependence of the CEM's LDME improved the result. It was less clear, however, that this sole effect would bring CEM calculations into agreement with data, this model being quite simple. But we should be careful not to overinterpret these results. Our calculations do not include the feed-down contributions, and the timelike cascade is only an approximation of the true evolution equation. Still, improving these points will certainly not change our conclusion: The evolution of the QQ̅ pair modifies the shape of the p_t distribution, and this effect solves, or at least significantly reduces, the issue with the CEM at NLO. While taking into account the initial transverse momentum is not crucial at large p_t (with our formalism), this effect is essential at small p_t. This is illustrated in Fig.
<ref>, where we plot the result obtained at √(s)=200 GeV, with the initial transverse momentum obtained from the PB set1 and set2, and with no initial transverse momentum in green. While both PB sets give a similar result, the green line corresponding to no initial transverse momentum is more than one order of magnitude below. Since the LO contribution is exactly zero in the case of no initial k_t, this line corresponds to the pure NLO contribution. We observe that this contribution starts to dominate at p_t∼ 10 GeV. In Figs. <ref> and <ref>, we compare our calculations at 7 TeV with LHCb <cit.>, ALICE <cit.>, and CMS <cit.> data. As discussed in more detail in Section <ref>, we worked with the same value of the non-perturbative parameter F_J/ψ. Our central values underestimate data at p_t ≲ 5 GeV but are still in agreement within uncertainties. The theoretical uncertainties, shown as red and beige color bands, include the variation of factorization and/or renormalization scales, as well as statistical uncertainties. We obtained the red color band with the usual variation of the factorization scale between 0.5μ and 2μ, while the beige color band is built from the conventional 7-point variation. Both techniques give the same upper limit, but the 7-point variation gives a wider error band. In conclusion, once the scale-dependence of non-perturbative functions is taken into account, NLO calculations with the CEM can describe data over the whole p_t range. Again, our main goal is not to defend the CEM, but to show the effect of scale evolution in a simple case. However, we should recognize that the J/ψ inclusive p_t distribution cannot be used alone to reject this model. It could be interesting to go further, with less inclusive observables such as the z-distribution of a J/ψ within a jet. NRQCD calculations for this observable, based on the fragmentation contribution, are already available, see, for instance, Refs. <cit.>. § UNIVERSALITY OF F_J/Ψ In Figs. <ref> and <ref>, we compare our calculations to LHCb data at 13 TeV <cit.> and PHENIX data at 200 GeV <cit.>. We used the same value of the non-perturbative parameter, i.e., F_J/ψ=0.014, and obtained results similar to those of Sec. <ref>. We conclude on the universality of F_J/ψ, at least in this range of energies. By contrast, using the k_t-factorization with LO off-shell matrix elements, Ref. <cit.> reaches a different conclusion, with F_J/ψ(200 GeV) about 10 times larger than at 7 TeV (except for the ALICE experiment, with a factor of 2.7). The formalisms used in <cit.> being quite different, it is hard to pinpoint the precise reason explaining this difference. However, concentrating on small and intermediate p_t, we can make the following comments. Since this kinematical region is dominated by the LO contribution, it seems unlikely that the difference between Ref. <cit.> and the present work can be explained by the fact that we included the NLO contribution. Another difference is the use of an on-shell cross section instead of an off-shell one. However, these two quantities are supposed to be close at small k_t. Another possible explanation is the k_t dependence of the KMRW UPDFs <cit.>, already discussed in <cit.> in the context of D-meson production, see also <cit.>. These UPDFs, used in <cit.>, are constrained for k_t∈ [0,μ] by their relation to collinear PDFs f_k/h(x;μ)=∫_0^μ^2F_k/h(x,k_t;μ)dk_t^2. Here f_k/h gives the collinear distribution of a parton of flavor k in the hadron h. By contrast, the cross section, Eq.
(<ref>), is integrated up to k_t^2 ∼ s. At fixed μ, or equivalently fixed p_t, and increasing s, the unconstrained part of the UPDFs, k_t∈ [μ, √(s)], contributing to the cross section increases. To compensate for that, we expect a decrease of F_J/ψ, which acts as a normalization factor. This decrease of F_J/ψ with energy is indeed observed in Ref. <cit.>. By contrast, the PB UPDFs used in our study are constrained on the full phase space f_k/h(x;μ)=∫_0^∞F_k/h(x, k_t;μ)/πd^2 k_t, and the numerical value of these functions in the region [μ, √(s)] is negligible. Then, the k_t integral of Eq. (<ref>) is effectively cut off at k_t ∼μ, resulting in an energy-independent F_J/ψ. Whatever the correct explanation is, it shows the potential of quarkonia data to constrain the (gluon) UPDFs. § COMPARISON WITH EPOS4 §.§ Short presentation of the formalism The EPOS4 project is an attempt to construct a realistic model for describing relativistic collisions of different systems, from proton-proton (pp) to nucleus-nucleus (AA), at energies from several TeV per nucleon down to several GeV. In the EPOS4 approach, we distinguish “primary scatterings” and “secondary scatterings”. The latter – referring to core-corona separation, hydrodynamic core evolution and its decay, and hadronic rescattering – are irrelevant for the present study. The former refer to parallel scatterings, involving the initial nucleons (and their partonic constituents), happening instantaneously at very high energies. The theoretical tool is S-matrix theory, using a particular form of the proton-proton scattering S-matrix (Gribov-Regge (GR) approach <cit.>), which can be straightforwardly generalized to nucleus-nucleus collisions. This S-matrix approach has a modular structure, and it is based on so-called “cut Pomerons”, representing elementary parton-parton scatterings, with the associated mathematical object G = (1/2s) 2 Im T. We define the Fourier transform of the T-matrix T(s,b) = 1/4π^2∫ d^2q_⊥ e^-iq_⊥· b T(s,t), with the Mandelstam variable t related to q_⊥. Although these basic assumptions are not only well motivated, but also very simple and transparent, the practical application is complicated. This is due to the fact that when developing matrix elements in terms of multiple scattering diagrams, the large majority of the diagrams cancel when it comes to inclusive cross sections. But EPOS4 keeps all contributions, and goes beyond inclusive cross sections. This is important, for instance, when studying high-multiplicity events. The challenge for EPOS4 is to use the full parallel multiple scattering scenario, but in such a way that for inclusive cross sections the cancellations actually work. This is the new part in EPOS4, strongly based on an interplay between parallel scatterings and saturation. This is discussed in a separate publication <cit.>. The main point is the way the cut Pomeron expression G is connected to G_QCD(Q^2, x^+,x^-,s,b), which is essentially a cut parton ladder based on DGLAP parton evolutions <cit.> from the projectile and target side, with an elementary QCD cross section in the middle. Here, Q^2 is the low-scale cutoff in the DGLAP evolution, and x^± are light-cone fractions of the projectile and target momentum. The cutoff is usually taken to be constant and of the order of 1 GeV, whereas we allow any value, see <cit.> for details.
We then postulate G(x^+,x^-)∝{ R_deform^(N_conn)(x^+,x^-)} ^-1× G_QCD(Q_sat^2,x^+,x^-), where R_deform^(N_conn) is the deformation of the distributions of these variables in case of a certain number N_conn of other Pomerons being connected to the same projectile and target nucleon (and competing for energy sharing). This deformation destroys factorization and binary scaling for high p_t inclusive particle production, unless one uses Eq. (<ref>), such that G does not depend on N_conn, but the saturation scale Q_sat^2 does. In that case, the full multiple scattering approach, done via Monte Carlo, agrees with the factorization formula for inclusive observables, as it should.[To be more precise, the collinear factorization and EPOS4 agree at large transverse momentum of the final state particle. For transverse momenta p_t∼ Q_sat and smaller, the effect of saturation implemented in EPOS amount to a reduction of the spectrum.] Finally, partons emitted during the initial DGLAP evolution and those produced in the 2→ 2 hard scattering undergo another DGLAP evolution, the timelike cascade, central for the present study. In general, we have multiple parton ladder configurations, where each one may look like those sketched in Fig. <ref>, where we indicate the different ways to produce heavy quarks (Q) and antiquarks (Q̅). In case of charm, to compute the J/ψ yield, we sum over all c-c̅ pairs, compute their masses m=√(p(c)· p(c̅)), and count them with probability F_J/ψ , in case of m_J/ψ<m<2m_D. §.§ Results The two main similarities between EPOS4 and our mg5+cascade3 calculations are the use of a timelike cascade, and the hadronization of the produced QQ̅ pairs based on the CEM. Fig. <ref> shows that EPOS4 results for ATLAS data are qualitatively the same as mg5+cascade3 calculations. There are, however, several relevant differences between the two formalisms: * EPOS4 uses LO cross sections, and the madgraph NLO contribution is taken into account by the timelike cascade. * It includes all LP contributions. Indeed, EPOS4 produces any kind of partons which evolve with the timelike cascade, and can split into a QQ̅ pair at any time. * EPOS4 works with a GM-variable-flavor scheme, with m_c = 1.27 GeV. It implies that contributions similar to graph (b) of Fig. <ref> are included, while absent for mg5+cascade3. * The details of the implementation of the timelike cascade may be different. * F_J/ψ=0.028. Despite these differences, both formalisms show similar results, in relatively good agreement with data, and far from the result obtained with NLO fixed-order calculations, Fig. <ref>. The key ingredient is the (perturbative) scale-dependence of the non-perturbative parameter, or equivalently, the QQ̅ evolution. Still, one may be surprised by the fact that both calculations give similar results for ATLAS data, where the LP contributions dominate. While EPOS4 allows a produced gluon to emit, in principle, any number n of partons before splitting into the heavy-quark pair, our calculations with MG5 include only the case n=0. The explanation is probably that the produced QQ̅ pair keeps emitting partons, mainly gluons. Then, the global picture is that a color object propagates, emitting partons, and we find a heavy-quark pair at the end of the process. Qualitatively, it does not really matter if the partons are emitted by a gluon or the heavy-quark pair[Of course, the rate of emissions is not exactly the same.]. The final result is energy loss making the spectrum softer. 
Finally, we remark that the probability for the propagating gluon to emit a large number n of gluons before producing the QQ̅ pair is suppressed because the virtuality Q^2 of timelike partons decreases after each emission. Creating a QQ̅ pair requires at least a virtuality of Q^2_min=4m_Q^2, limiting the number of emissions before its production. § COMPLEMENTARY FIGURES: COMPARISON WITH ATLAS DATA In this section, all figures are the same as Fig. <ref>, but for different rapidity ranges. § CONCLUSION We have shown that the evolution of the QQ̅ pair predicted by pQCD, and in particular real emissions, solves the issue of NLO calculations with the CEM. However, one should keep in mind that our calculations used several approximations discussed in Sec. <ref>. Having the CEM back in agreement with inclusive J/ψ data does not imply the CEM is correct from the point of view of QCD. However, this model is considered to be a useful tool, and our calculations strengthen this point. We expect that the evolution could also have a significant impact on the determination of NRQCD LDMEs, at least in some kinematical range, and on cross-section calculations. In Sec. <ref>, we showed that with the value F_J/ψ=0.014, we could describe data from √(s)=13 TeV to √(s)=200 GeV with satisfactory accuracy. In other words, F_J/ψ seems to be universal, as expected. Other calculations based on k_t-factorization did not reach this conclusion, and we speculated on the possible explanations. One of these explanations involves the definition of UPDFs, and we believe that the dependence of F_J/ψ on energy has the potential to constrain these functions. § ACKNOWLEDGMENTS BG and IS acknowledge support from ANID PIA/APOYO AFB220004. IS is supported by Fondecyt project 1230391. We would like to thank H. Jung for his help with CASCADE3. We also thank H. Jung and F. Hautmann for interesting discussions.
http://arxiv.org/abs/2306.02461v1
20230604203018
The Semi-implicit DLN Algorithm for the Navier Stokes Equations
[ "Wenlong Pei" ]
math.NA
[ "math.NA", "cs.NA" ]
The Semi-implicit DLN Algorithm for the Navier Stokes Equations Wenlong Pei Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA. Email: [email protected]. =============================================================================================================================================================== Dahlquist, Liniger, and Nevanlinna designed a family of one-leg, two-step methods (the DLN method) that is second-order, A- and G-stable for arbitrary, non-uniform time steps. Recently, it has been shown that the implementation of the DLN method can be simplified by the refactorization process (adding time filters to the backward Euler scheme). Due to these fine properties, the DLN method has strong potential for the numerical simulation of time-dependent fluid models. In this report, we propose a semi-implicit DLN algorithm for the Navier Stokes equations (avoiding a non-linear solver at each time step) and prove unconditional, long-term stability and second-order convergence under a moderate time step restriction. Moreover, adaptive DLN algorithms based on the required error or numerical dissipation criterion are presented to balance accuracy and computational cost. Numerical tests will be given to support the main conclusions. Refactorization, G-stability, second-order, time adaptivity, semi-implicit 65M12, 35Q30, 76D05 § INTRODUCTION In the simulation of time-dependent fluid models, various time-stepping schemes have been constructed based on stability and consistency. The backward Euler method, unconditionally stable and easily implemented, can only have first-order accuracy <cit.>. The trapezoidal rule and the two-step backward difference method (BDF2) are both second-order accurate and widely used in computational fluid dynamics <cit.>. However, the trapezoidal rule with some unfavorable combinations of time steps leads to instability of the numerical solutions <cit.>. The variable-step BDF2 method is only conditionally stable, requiring the time step ratio to be small enough <cit.>. Dahlquist, Liniger, and Nevanlinna propose a one-parameter family of one-leg, two-step methods (thus the DLN method herein) which is G-stable (non-linearly stable) <cit.> and second-order accurate under arbitrary time grids. To our knowledge, the DLN method is the only time-stepping algorithm possessing these two properties under an arbitrary time step sequence. Hence its essential properties of stability and consistency have been carefully studied and explored in <cit.>. Recently the variable step DLN method has been applied to the unsteady Stokes/Darcy model and Navier Stokes equations (NSE) and performs well in specific test problems <cit.>. Consider the initial value problem: y'(t) = f(t,y(t)), 0 ≤ t ≤ T, y(0) = y_0, for y: [0,T] →ℝ^d, f: [0,T] ×ℝ^d→ℝ^d and y_0∈ℝ^d. The family of one-leg, two-step DLN methods (with parameter θ∈ [0,1]), applied to (<ref>), is written ∑_ℓ =0^2α _ℓy_n-1+ℓ = k_n f ( ∑_ℓ =0^2β _ℓ^(n)t_n-1+ℓ , ∑_ℓ =0^2β _ℓ^(n)y_n-1+ℓ) , n=1,…,N-1. Here { 0 = t_0 < t_1 < ⋯ < t_N-1 <t_N=T }_n=0^N are the time grids on interval [0,T] and y_n is the DLN solution to y(t_n). The coefficients in (<ref>) are [ α _2; α _1; α _0 ] = [ 1/2(θ +1); -θ; 1/2(θ -1) ], [ β _2^(n); β _1^(n); β _0^(n) ] = [ 1/4(1+1-θ^2/(1+ε _nθ)^2+ε _n^2θ (1-θ^2)/(1+ε _nθ)^2+θ); 1/2(1-1-θ^2/(1+ε _nθ)^2); 1/4(1+1-θ^2/(1+ε _nθ)^2-ε _n^2θ (1-θ^2)/(1+ε _nθ)^2-θ) ]. The step variability ε _n = (k_n - k_n-1)/(k_n + k_n-1) is a function of the two consecutive step sizes. k_n = α _2k_n-α _0k_n-1 is the average time step.
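To make these coefficient formulas concrete, the short sketch below evaluates α_ℓ, β_ℓ^(n), ε_n and the averaged step for a given θ and step pair, and marches the one-leg DLN step on the scalar test problem y' = λ y with alternating step sizes. The values θ=0.7, λ=-2 and the step pattern are illustrative choices only; the second starting value is taken from the exact solution.

```python
# Minimal sketch (not from the paper): evaluate the DLN coefficients quoted above
# and apply the one-leg method to the scalar test problem y' = lam*y on a
# variable (alternating) step sequence, comparing against exp(lam*t).
import numpy as np

def dln_coefficients(theta, kn, knm1):
    """Return (alpha_0..2, beta_0..2, averaged step k_hat) for one variable step."""
    eps = (kn - knm1) / (kn + knm1)                      # step variability eps_n
    alpha = np.array([0.5 * (theta - 1.0), -theta, 0.5 * (theta + 1.0)])
    r = (1.0 - theta**2) / (1.0 + eps * theta) ** 2
    beta = np.array([0.25 * (1.0 + r - eps**2 * theta * r - theta),
                     0.5 * (1.0 - r),
                     0.25 * (1.0 + r + eps**2 * theta * r + theta)])
    k_hat = alpha[2] * kn - alpha[0] * knm1              # averaged time step
    return alpha, beta, k_hat

def dln_march(lam, theta, steps, y0):
    """One-leg DLN for y' = lam*y; the second starting value is taken exact."""
    t = [0.0, steps[0]]
    y = [y0, y0 * np.exp(lam * steps[0])]
    for n in range(1, len(steps)):
        a, b, k_hat = dln_coefficients(theta, steps[n], steps[n - 1])
        # a2*y_{n+1} + a1*y_n + a0*y_{n-1} = k_hat*lam*(b2*y_{n+1} + b1*y_n + b0*y_{n-1})
        rhs = k_hat * lam * (b[1] * y[n] + b[0] * y[n - 1]) - a[1] * y[n] - a[0] * y[n - 1]
        y.append(rhs / (a[2] - k_hat * lam * b[2]))
        t.append(t[-1] + steps[n])
    return np.array(t), np.array(y)

lam, theta, T = -2.0, 0.7, 4.0
for m in (40, 80, 160):                                  # m pairs of steps (k, 2k)
    k = T / (3 * m)
    steps = np.tile([k, 2.0 * k], m)
    t, y = dln_march(lam, theta, steps, 1.0)
    err = np.max(np.abs(y - np.exp(lam * t)))
    print(f"{len(steps):4d} steps, max error = {err:.3e}")
# The error should drop by roughly a factor of four per refinement (second order).
```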
Given a sequence { z_n}_n=0^∞, we denote z_n,β := β _2^(n) z_n+1 + β _1^(n) z_n + β _0^(n) z_n-1 = ∑_ℓ =0^2β _ℓ^(n) z_n-1+ℓ, for convenience in the remaining paragraphs. Then the DLN method in (<ref>) can be shortened to ∑_ℓ =0^2α _ℓy_n-1+ℓ = k_n f ( t_n,β , y_n,β ). Herein we propose the variable-step, semi-implicit DLN algorithm for NSE and present a detailed numerical analysis of stability and convergence. Let the open, connected and bounded set Ω⊂ℝ^d (d=2 or 3) be the domain; the fluid velocity u(x,t), pressure p(x,t) and the source f(x,t) in the NSE are governed by the following system u_t + u ·∇ u - νΔ u + ∇ p = f, x ∈Ω, 0 < t ≤ T, ∇· u = 0, x ∈Ω, 0 < t ≤ T, u(x,0) = u_0(x), x ∈Ω, u = 0 on ∂Ω, 0 < t ≤ T, ∫_Ω p dx = 0, 0 < t ≤ T. Let u_n^h and p_n^h be the numerical solutions to velocity u(x,t_n) and pressure p(x,t_n) respectively on certain finite element spaces with diameter h. The fully-implicit DLN algorithm for NSE approximates the non-linear term u ·∇ u at t_n+1 by u_n,β^h·∇ u_n,β^h <cit.>, which results in two main disadvantages of the algorithm: i. the severe time step restriction, like Δ t ≤𝒪(ν^3), for convergence, ii. the non-linear solver in each time step computation. The above time step restriction in the error analysis arises from the use of the discrete Grönwall inequality <cit.> and would be very strict even for moderate viscosity values (like ν = 1.e-2)[To our knowledge, the restriction cannot be avoided as long as fully-implicit time-stepping methods are applied.]. Fixed point iteration and Newton's iteration are two common choices for non-linear solvers. Fixed point iteration is easily implemented while Newton's iteration possesses fast convergence. However, they usually cost more than solving a linear system and carry the risk of divergence if the initial value for the iteration is poorly guessed. To address the two issues, we extend Baker's idea <cit.> and propose the semi-implicit DLN scheme for NSE. The essence of the idea is to replace the first u_n,β^h in the non-linear term u_n,β^h·∇ u_n,β^h by its second-order extrapolation in time (a linear combination of u_n^h and u_n-1^h). In return, the non-linear solver at each time step is replaced by a linear system. Meanwhile, the strict time step restriction for convergence can be relaxed. In addition, we simplify the DLN implementation by the refactorization process (pre- and post-processing of the backward Euler scheme) and improve the computational efficiency by the corresponding time adaptivity algorithms. The paper is organized as follows. Necessary notations and preliminaries are given in Section <ref>. In Section <ref>, we propose the semi-implicit DLN algorithm for NSE and its equivalent implementation by the refactorization process. In Section <ref>, we show that the numerical solution is long-term, unconditionally stable. The variable step error analysis with a loose time step restriction is given in Section <ref>. The time adaptive algorithms (using the error or numerical dissipation criterion) in Section <ref> are provided to reduce the computational cost. In Section <ref>, the Taylor-Green benchmark problem <cit.> is used to confirm the second-order convergence. The unconditional stability of the variable step DLN method and the advantage of time adaptivity are verified in the revised Taylor-Green problem and the 2D offset problem <cit.>. §.§ Related Work Semi-implicit schemes are effective ways to simulate non-linear, time-dependent fluid models.
In early work, Baker studied the semi-implicit Crank-Nicolson method and applied it to NSE <cit.>. Baker, Dougalis, and Karakashian increase the accuracy of numerical solutions by use of the three-step backward difference method coupled with extrapolation for non-linear terms <cit.>. Girault and Raviart prove the convergence of general linear two-step semi-implicit algorithms for NSE <cit.>. They solve the two-dimensional NSE by a fully discrete two-level finite element method: Crank-Nicolson extrapolation scheme on spatial-time coarse grids and backward Euler scheme on fine grids <cit.>. Labovsky, Layton, Manica, Neda, and Rebholz add artificial viscosity stabilization to the extrapolated trapezoidal finite-element method for NSE, leading to a more easily solvable linear system at each time step <cit.>. Ingram offers a detailed numerical analysis of the semi-implicit Crank-Nicolson scheme for NSE and proves the convergence of both velocity and pressure <cit.>. Time adaptivity based on certain criteria (required local truncation error, minimum numerical dissipation, etc.) is an effective way to balance time accuracy and efficiency. Inspired by the pioneering work of Gear <cit.>, Kay, Gresho, Griffiths, and Silvester implement the trapezoidal scheme for NSE in a memory-efficient way and estimate the error in time by the explicit two-step Adams-Bashforth method (AB2 method) in the time adaptivity <cit.>. John and Rang propose diagonally implicit RK methods (DIRK methods) with an embedded lower-order scheme for adaptivity and implement this algorithm for 2D laminar flow around a cylinder <cit.>. Guzel and Layton add a time filter to the backward Euler method to increase the accuracy of numerical solutions <cit.>. The corresponding BE-filter adaptivity is applied to various fluid models <cit.>. Recently Bukač, Burkardt, Seboldt, and Trenchea refactorize the midpoint rule and adjust time steps by the revised AB2 method in fluid-structure interaction problems <cit.>. The numerical dissipation criterion for adaptivity, proposed by Capuano, Sanderse, De Angelis, and Coppola in <cit.>, has been tested in the DLN simulations of the NSE and the coupled Stokes-Darcy model <cit.>. § PRELIMINARIES Let Ω⊂ℝ^d (d = 2 or 3) be the domain. For 1 ≤ p < ∞, L^p(Ω) is the normed linear space containing Lebesgue measurable functions f such that |f|^p is integrable. For r ∈ℕ, the Sobolev space W^r,p(Ω) with the usual norm ·_W^r,p contains all Lebesgue measurable functions whose weak derivatives up to order r belong to L^p(Ω), and W^0,p(Ω) is the usual L^p(Ω) space. · and (·, ·) denote the L^2-norm and L^2-inner product. The space H^r with norm ·_r and semi-norm | · |_r denotes the Sobolev space with p=2. The velocity space X and the pressure space Q for the NSE in (<ref>) are X := (H_0^1(Ω) )^d = { v ∈(H^1(Ω) )^d: v = 0 on ∂Ω}, Q := { q ∈ L^2(Ω): (q,1) = 0 }. The divergence-free space for the velocity is V := { v ∈ X: ∇· v = 0 }. For any function v ∈ X ∩( H^r)^d, the norm ·_r and semi-norm | · |_r are defined as v _r := ( ∑_i=1^d v_i_r^2)^1/2, | v |_r := ( ∑_i=1^d | v_i |_r^2)^1/2. X' is the dual space of X with the dual norm f _-1 := sup_v ∈ X, v ≠ 0(f,v)/∇ v , ∀ f ∈ X'.
We need the Bochner space on the time interval [0,T] L^p(0,T;(H^r )^d) = { f(·,t) ∈(H^r )^d: f _p,r = ( ∫_0^T f(·,t) _r^p dt )^1/p < ∞}, L^∞( 0,T;(H^r )^d) = { f(·,t) ∈(H^r )^d: f_∞,r = sup_0 < t <Tf(·, t) _r < ∞}, L^p(0,T;X') = { f(·,t) ∈ X' : f _p,-1 = ( ∫_0^T f(·,t) _-1^p dt )^1/p < ∞}, and the discrete Bochner space with the time grids {t_n}_n=0^N on the time interval [0,T] ℓ^∞( 0,N;(H^r )^d) = { f(·,t) ∈(H^r )^d: |f| _∞,r < ∞}, ℓ^∞,β( 0,N;(H^r )^d) = { f(·,t) ∈(H^r )^d: |f| _∞,r,β < ∞}, ℓ^p,β( 0,N;(H^r )^d) = { f(·,t) ∈(H^r )^d: |f| _p,r,β < ∞}, ℓ^p,β( 0,N; X' ) = { f(·,t) ∈ X': |f| _p,-1,β < ∞}, where the corresponding discrete norms are |f| _∞,r := max_0 ≤ n ≤ N f(·, t_n) _r, |f| _∞,r,β := max_1 ≤ n ≤ N-1 f(·, t_n,β) _r |f| _p,r,β := ( ∑_n=1^N-1 (k_n + k_n-1) f(·,t_n,β) _r^p)^1/p. |f| _p,-1,β := ( ∑_n=1^N-1 (k_n + k_n-1) f(·,t_n,β) _-1^p)^1/p The discrete norm | · | _p,r,β in (<ref>) is the form of Riemann sum in which the function f is evaluated at t_n,β∈ [t_n-1,t_n+1]. For any u,v,w ∈(H^1(Ω) )^d, we define the skew-symmetric, non-linear operator b(u,v,w) := 1/2 (u ·∇ v, w) - 1/2 (u ·∇ w, v). b(u,v,w) = (u ·∇ v, w) + 1/2( (∇· u) v, w ) - 1/2∫_∂Ω (u ·n ) v · w dS, where n is the outward unit normal vector of ∂Ω. We apply divergence theorem and integration to (<ref>) b(u,v,w) = (u ·∇ v, w) + 1/2( (∇· u) v, w ), for any u,v,w ∈ X. Thus if u ∈ V, b(u,v,w) = (u ·∇ v, w). We have the following lemma about the bounds of the operator b. For any u,v,w ∈ H^1, |(u·∇ v, w)| ≤ C { u _1 |v|_1 w _1 u v _2 w _1 ∀ v ∈ H^2 u _2 |v|_1 w ∀ u ∈ H^2. , |b(u,v,w)| ≤ C ( u u _1)^1/2 v _1 w _1, Moreover, if u,v,w ∈ X, |b(u,v,w)| ≤ C { u _1 v _1( w w _1)^1/2 u _1 v _2 w ∀ v ∈ H^2. . See <cit.>. For spatial discretization, X^h⊂ X and Q^h⊂ Q are certain finite element spaces for velocity and pressure respectively based on the edge-to-edge triangulation of domain Ω (with the maximum diameter of the triangles h>0). X_h and Q_h satisfy the discrete inf-sup condition, i.e. inf_q^h∈ Q^hsup_v^h∈ X^h(∇· v^h, q^h)/∇ v^h q^h≥ C, for some positive constant C. Typical examples of such finite element spaces are Taylor-Hood element spaces and Mini element spaces. The approximation theorem for X^h and Q^h is inf_v^h∈ X^h u - v^h_1≤ Ch^r u _r+1, r ≥ 0, u ∈ (H^r+1)^d∩ X, inf_q^h∈ Q^h p - q^h≤ Ch^s+1 p _s+1, s ≥ 0, p ∈ H^s+1∩ Q, where r and s are the polynomial degrees of X^h and Q^h respectively. The inverse inequality for X^h is |v^h|_1≤ Ch^-1v^h, ∀ v^h∈ X^h. The discrete divergence-free space is V^h := { v^h∈ X^h: (q^h, ∇· v^h) = 0, ∀ q^h∈ Q^h}. For any pair (u,p) ∈ V × Q, the Stokes projection (P_S^(u) u, P_S^(p)p) ∈ V^h× Q^h is defined as the solution to the problem ν( ∇ u, ∇ v^h) - (p, ∇· v^h) = ν( ∇ P_S^(u)u , ∇ v^h) - (P_S^(p)p, ∇· v^h), (q^h, ∇· P_S^(u)u) = 0, ∀ (v^h,q^h) ∈ X^h× Q^h. The Stokes projection has the following approximations (see <cit.> for proof) u - P_S^(u)u ≤ C h ( ν^-1inf_q^h∈ Q^hp - q^h + inf_v^h∈ X^h | u - v^h |_1), u - P_S^(u)u _1≤ C ( ν^-1inf_q^h∈ Q^hp - q^h + inf_v^h∈ X^h | u - v^h |_1). § THE ALGORITHM Let u_n^h∈ X^h and p_n^h∈ Q^h be the numerical solutions of velocity u(x,t_n) and pressure p(x,t_n) respectively. 
Then the semi-implicit DLN algorithm for the NSE in (<ref>) is: given two previous solutions u_n^h, u_n-1^h∈ X^h, p_n^h, p_n-1^h∈ Q^h, find u_n+1^h∈ X^h and p_n+1^h∈ Q^h such that for all (v^h,q^h) ∈ X^h× Q^h (α_2 u_n+1^h+α_1 u_n^h+α_0 u_n-1^h/k_n,v^h) + b ( u_n^h, u_n,β^h, v^h) +ν( ∇ u_n,β^h,∇ v^h) -( p_n,β^h,∇·v^h) =( f_n,β, v^h) , ( ∇· u_n,β^h, q^h) = 0, where the second-order, linear extrapolation u_n^h for u_n,β^h is u_n^h = β _2^(n)[(1 + k_n/k_n-1) u_n^h - k_n/k_n-1 u_n-1^h] + β _1^(n) u_n^h + β _0^(n) u_n-1^h≈ u_n,β^h. The above semi-implicit DLN algorithm in (<ref>) can be simplified by the following refactorization process (see <cit.> for the proof of equivalence): Step 1. Pre-process: u_n^h,old = a_1^(n) u_n^h + a_0^(n) u_n-1^h, k_n^BE = b^(n)k_n, u_n^h = β _2^(n)[(1 + k_n/k_n-1) u_n^h - k_n/k_n-1 u_n-1^h] + β _1^(n) u_n^h + β _0^(n) u_n-1^h, Step 2. Semi-implicit backward Euler solver: solve for u_n+1^h,temp and p_n+1^h,temp (u_n+1^h,temp- u_n^h,old/k_n^BE, v^h) + b (u_n^h, u_n+1^h,temp, v^h) +ν(∇ u_n+1^h,temp, ∇ v^h) -( p_n+1^h,temp, ∇· v^h) = ( f_n,β, v^h), ( ∇· u_n+1^h,temp, q^h ) = 0, Step 3. Post-process: u_n+1^h = c_2^(n) u_n+1^h,temp + c_1^(n) u_n^h + c_0^(n) u_n-1^h, p_n+1^h = c_2^(n) p_n+1^h,temp + c_1^(n) p_n^h + c_0^(n) p_n-1^h, where the coefficients in the refactorization process are a_1^(n) = β_1^(n) - α_1β_2^(n) / α_2, a_0^(n) = β_0^(n) - α_0β_2^(n) / α_2 , b^(n) = β_2^(n) / α_2 , c_2^(n) = 1 / β_2^(n), c_1^(n) = - β_1^(n) / β_2^(n) , c_0^(n) = - β_0^(n) / β_2^(n) . § NUMERICAL ANALYSIS For the numerical analysis, we need the following two lemmas about the stability and consistency of the DLN method. Let Y be an inner product space over ℝ with the inner product (·, ·)_Y and the induced norm ·_Y. For any sequence { y_n}_n=0^N in Y, θ∈ [0,1] and n ∈{ 1,2, ⋯, N-1 } (∑_ℓ=0^2α_ℓy_n-1+ℓ,∑_ℓ=0^2β_ℓ^(n)y_n-1+ℓ)_Y=y_n+1 y_n _G(θ)^2-y_n y_n-1 _G(θ)^2+∑_ℓ=0^2γ_ℓ^(n)y_n-1+ℓ_Y^2, where the ·_G(θ)-norm is u v_G(θ)^2 = 1/4 (1+θ) u _Y^2 + 1/4 (1 - θ ) v _Y^2 ∀ u,v∈ Y, and the coefficients γ_ℓ^(n) (ℓ =0,1,2) are γ_1^(n)=-√(θ( 1-θ^2) )/(√(2) (1+ε _nθ )), γ_2^(n)=-(1-ε _n)/2·γ_1^(n), γ_0^(n)=-(1+ε _n)/2·γ_1^(n). By the above identity in (<ref>), the whole family of variable step, one-leg methods (<ref>) is G-stable (see <cit.> for the definition). The proof of the identity in (<ref>) (implicit in <cit.>) is an algebraic calculation. Given a Banach space Y with the norm ·_Y, time grids { t_n}_n=0^N on the time interval [0,T] and a mapping u: [0,T] → Y, u_n denotes u(t_n) and u_n represents the second-order, linear extrapolation of u_n,β in time, i.e. u_n = β _2^(n)[(1 + k_n/k_n-1) u_n - k_n/k_n-1 u_n-1] + β _1^(n) u_n + β _0^(n) u_n-1. If the mapping u(t) is smooth enough with respect to t, then for any θ∈ [0,1] u_n,β - u ( t_n,β ) _Y^2≤ C(θ) ( k_n + k_n-1 )^3∫_t_n-1^t_n+1 u_tt_Y^2 dt, u_n - u ( t_n,β ) _Y^2≤ C(θ) ( k_n + k_n-1 )^3∫_t_n-1^t_n+1 u_tt_Y^2 dt. Moreover, if there exist constants C_L,C_U>0 such that the ratio of time steps satisfies 0 < C_L≤k_n/k_n-1≤ C_U, then 1/k_n∑_ℓ=0^2α_ℓ u_n-1+ℓ - u_t ( t_n,β ) _Y^2≤ C ( θ ) ( k_n + k_n-1 )^3∫_t_n-1^t_n+1 u_ttt_Y^2 dt. Use Taylor's theorem and expand u_n+1, u_n and u_n-1 at time t_n,β.
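To illustrate the equivalence claim behind this refactorization, the following sketch performs one step of the scalar linear test problem y' = λ y both by the direct one-leg DLN formula and by the pre-process/backward-Euler/post-process route, using the coefficient definitions above. For this linear problem the extrapolation in Step 1 plays no role, and θ, λ, the step sizes and the starting values are arbitrary test numbers, not data from the paper.

```python
# Check, on the scalar problem y' = lam*y, that one step of the refactorized
# process (pre-process -> backward Euler -> post-process) reproduces the direct
# one-leg DLN step.  All numerical values below are arbitrary test inputs.
import numpy as np

theta, lam = 0.6, -3.0
kn, knm1 = 0.05, 0.02                      # current and previous step sizes
y_nm1, y_n = 1.00, 0.87                    # two given previous values

eps = (kn - knm1) / (kn + knm1)
alpha = np.array([0.5 * (theta - 1.0), -theta, 0.5 * (theta + 1.0)])
r = (1.0 - theta**2) / (1.0 + eps * theta) ** 2
beta = np.array([0.25 * (1.0 + r - eps**2 * theta * r - theta),
                 0.5 * (1.0 - r),
                 0.25 * (1.0 + r + eps**2 * theta * r + theta)])
k_hat = alpha[2] * kn - alpha[0] * knm1

# --- direct one-leg DLN step ---
y_direct = ((k_hat * lam * (beta[1] * y_n + beta[0] * y_nm1)
             - alpha[1] * y_n - alpha[0] * y_nm1)
            / (alpha[2] - k_hat * lam * beta[2]))

# --- refactorized step ---
a1 = beta[1] - alpha[1] * beta[2] / alpha[2]          # Step 1: pre-process
a0 = beta[0] - alpha[0] * beta[2] / alpha[2]
b = beta[2] / alpha[2]
y_old, k_be = a1 * y_n + a0 * y_nm1, b * k_hat
y_temp = y_old / (1.0 - k_be * lam)                   # Step 2: backward Euler solve
c2, c1, c0 = 1.0 / beta[2], -beta[1] / beta[2], -beta[0] / beta[2]
y_refac = c2 * y_temp + c1 * y_n + c0 * y_nm1         # Step 3: post-process

print(f"direct DLN step       : {y_direct:.12f}")
print(f"refactorized DLN step : {y_refac:.12f}")
print(f"difference            : {abs(y_direct - y_refac):.2e}")   # ~ round-off
```

The two routes agree to round-off because the post-process simply inverts the relation u_n,β^h,temp = β_2^(n) u_n+1^h + β_1^(n) u_n^h + β_0^(n) u_n-1^h, so the backward Euler residual reproduces the one-leg DLN relation exactly.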
§.§ Stability Analysis If the body force f in NSE satisfies f ∈ L^2(0,T;X') ∩ℓ^2,β(0,N;X'), the semi-implicit DLN algorithm for NSE in (<ref>) satisfies the following unconditional, long-time energy bounds: for any integer N>1, 1/4 (1+θ) u_N^h^2 + 1/4(1-θ) u_N-1^h^2 +∑_n=1^N-1∑_ℓ =0^2γ_ℓ^(n) u_n-1+ℓ^h^2 +ν/2∑_n=1^N-1k_n∇u_n,β^h^2 ≤C(θ)/ν( |f| _2,-1,β^2 + f _2,-1^2) + 1/4(1+θ) u_1^h^2 + 1/4(1-θ) u_0^h^2, where {γ_i^(n)}_i=0,1,2 are defined in (<ref>). We set v^h=u_n,β^h in (<ref>) and use the skew-symmetry property of the operator b 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ^h , u_n,β^h) + ν∇ u_n,β^h^2 = ( f_n,β , u_n,β^h ) . We apply (<ref>) and Young's inequality to ( f_n,β , u_n,β^h ) ( f_n,β ,∇ u_n,β^h) ≤ f_n,β_-1∇ u_n,β^h ≤ν/2∇ u_n,β^h^2 + 1/2 ν f_n,β_-1^2 ≤ν/2∇ u_n,β^h^2 + 1/ν f(t_n,β) _-1^2 + 1/ν f_n,β - f(t_n,β) _-1^2. (∑_ℓ=0^2α_ℓu_n-1+ℓ^h ,u_n,β^h) +ν/2k_n ∇ u_n,β^h^2≤k_n/ν f(t_n,β) _-1^2+k_n/ν f_n,β- f(t_n,β) _-1^2 . f_n,β - f(t_n,β) _-1^2≤ C(θ) ( k_n + k_n-1 )^3∫_t_n-1^t_n+1 f_tt_-1^2 dt Then we use the G-stability identity in (<ref>) and Lemma <ref> u_n+1^h u_n^h _G(θ)^2 - u_n^h u_n-1^h _G(θ)^2 + ∑_ℓ=0^2γ_ℓ^(n) u_n-1+ℓ^h^2 + ν/2k_n∇ u_n,β^h^2 ≤C(θ)/ν ( k_n + k_n-1 ) f ( t_n,β) _-1^2 + C(θ)/ν ( k_n + k_n-1 )^4∫_t_n-1^t_n+1 f_tt_-1^2 dt. By the definition of G(θ)-norm in (<ref>) and the notations in (<ref>), we sum (<ref>) over n from 1 to N-1 to yield (<ref>). We define the numerical dissipation and viscosity dissipation of the semi-implicit DLN algorithm in (<ref>) at time t_n+1 Numerical dissipation ℰ_n+1^ ND := 1/k_n∑_ℓ=0^2γ_ℓ^(n) u_n-1+ℓ^h^2, Viscosity dissipation ℰ_n+1^ VD := ν∇ u_n,β^h^2. §.§ Error Analysis Let r and s be the polynomial degree of X^h and Q^h respectively and k_max = max_0 ≤ n ≤ N k_n. Let u_n and p_n be the exact velocity and pressure of the NSE in (<ref>) at time t_n. We need the following upper and lower bound for the ratio of time steps: there are positive constants C_L and C_U such that 0 < C_L≤k_n/k_n-1≤ C_U, ∀ n. Suppose the velocity u ∈ X, the pressure p ∈ Q and the body force f(x,t) of the NSE in (<ref>) satisfy u ∈ℓ^∞(0,N;H^r) ∩ℓ^∞(0,N;H^1) ∩ℓ^∞,β(0,N;H^1) ∩ℓ^2,β(0,N;H^r+1∩ H^2), u_t∈ L^2(0,T;H^r+1), u_tt∈ L^2(0,T;H^r+1∩ H^1), u_ttt∈ L^2(0,T;X'), p ∈ℓ^2,β(0,N;H^s+1), f ∈ L^2(0,T;X'). Under the time step bounds in (<ref>), the numerical solutions of the semi-implicit DLN scheme in (<ref>) satisfy max_0 ≤ n ≤ N u_n - u_n^h + ( ν∑_n=1^N-1k_n∇( u_n - u_n^h) ^2)^1/2≤𝒪( h^r,h^s+1, k_max^2). The exact solutions of NSE at time t_n,β satisfy: ( u_t(t_n,β), v^h) + b ( u(t_n,β), u(t_n,β), v^h) + ν(∇ u(t_n,β),v^h) - (p(t_n,β),∇· v^h) = (f(t_n,β),v^h). 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ, v^h) + b ( u_n, u_n,β, v^h) + ν( ∇ u_n,β, v^h) - ( p(t_n,β), ∇· v^h) = ( f_n,β, v^h) + τ_n(v^h), ∀ v^h∈ V^h where u_n is second-order, linear extrapolation of u_n,β in (<ref>) and the truncation error τ_n is τ_n(v^h) = ( 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ) - u(t_n,β), v^h) + ν( ∇ (u_n,β - u(t_n,β)), v^h) + b ( u_n, u_n,β, v^h) - b ( u(t_n,β), u(t_n,β), v^h) + (f(t_n,β) - f_n,β,v^h). Let P_S^(u)u_n be velocity component of the Stokes projection of (u_n,0) onto V^h× Q^h. We set ϕ_n^h = u_n^h - P_S^(u)u_n, η_n = u_n - P_S^(u)u_n, and decompose the error of velocity e_n^u to be e_n^u = u_n^h - u_n = ϕ_n^h - η_n. 
We restrict v^h∈ V^h in (<ref>) and subtract (<ref>) from the first equation of (<ref>) 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ^h, v^h) - 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ, v^h) + b ( u_n^h, u_n,β^h, v^h) - b ( u_n, u_n,β, v^h) + ν( ∇ u_n,β^h, ∇ v^h) - ν( ∇ u_n,β, ∇ v^h) = - (p(t_n,β) - q^h, ∇· v^h) - τ_n(v^h) 1/k_n( ∑_ℓ=0^2α_ℓϕ_n-1+ℓ^h, v^h) + ν( ∇ϕ_n,β^h, ∇ v^h) = 1/k_n( ∑_ℓ=0^2α_ℓη_n-1+ℓ, v^h) + ν( ∇η_n,β, ∇ v^h) + b ( u_n, u_n,β, v^h) - b ( u_n^h, u_n,β^h, v^h) - (p(t_n,β) - q^h, ∇· v^h) - τ_n(v^h), ∀ q^h∈ Q^h We set v^h = ϕ_n,β^h in (<ref>) and use the G-stability identity in (<ref>), 1/k_n(ϕ_n+1^h ϕ_n^h _G(θ)^2 - ϕ_n^h ϕ_n-1^h _G(θ)^2 + ∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h ^2) + ν∇ϕ_n,β^h^2 = 1/k_n( ∑_ℓ=0^2α_ℓη_n-1+ℓ, ϕ_n,β^h) + ν( ∇η_n,β, ∇ϕ_n,β^h) + b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) - (p(t_n,β) - q^h, ∇·ϕ_n,β^h) - τ_n(ϕ_n,β^h). By Cauchy-Schwarz inequality, Poincaré inequality, Young's equality, (<ref>) and (<ref>) (inf_q^h∈ Q^h p - q^h vanishes since p = 0 in the Stokes projection) 1/k_n( ∑_ℓ=0^2α_ℓη_n-1+ℓ, ϕ_n,β^h) ≤1/k_n∑_ℓ=0^2α_ℓη_n-1+ℓϕ_n,β^h≤C/k_n∑_ℓ=0^2α_ℓη_n-1+ℓ∇ϕ_n,β^h 1/k_n(∑_ℓ=0^2α_ℓη_n-1+ℓ, ϕ_n,β^h) ≤ C/νk^2_n∑_ℓ=0^2α_ℓη_n-1+ℓ^2 + ν/16∇ϕ_n,β^h^2 ≤ C h^2r+2/νk^2_n∑_ℓ=0^2α_ℓ u_n-1+ℓ_r+1^2 + ν/16∇ϕ_n,β^h^2 ≤ C(θ) h^2r+2/νk^2_n( u_n+1- u_n_r+1^2+u_n+1- u_n-1_r+1^2) +ν/16∇ϕ_n,β^h^2. By Holder's inequality ∑_ℓ=0^2α_ℓ u_n-1+ℓ_r+1^2 = -α_1 (u_n+1 - u_n) - α_0(u_n+1 - u_n-1) _r+1^2 ≤ C(θ) ( u_n+1 - u_n_r+1^2 + u_n+1 - u_n-1_r+1^2) u_n+1 - u_n-1_k+1^2 = ∫_t_n-1^t+1∂_t u(·,t) dt _r+1^2 = ∫_Ω∑_ℓ≤ r+1| D^ℓ( ∫_t_n-1^t+1∂_t u(x,t) dt ) |^2 dx = ∫_Ω∑_ℓ≤ r+1| ∫_t_n-1^t+1 D^ℓ∂_t u(x,t) dt |^2 dx ≤∫_Ω∑_ℓ≤ r+1( ∫_t_n-1^t+1| D^ℓ∂_t u(x,t) | dt )^2 dx ≤ ∫_Ω∑_ℓ≤ r+1[ ( ∫_t_n-1^t+1 1^2 dt ) ( ∫_t_n-1^t+1| D^ℓ∂_t u(x,t) |^2 dt ) ] dx = (k_n+k_n-1) ∫_Ω∫_t_n-1^t+1∑_ℓ≤ r+1| D^ℓ∂_t u(x,t) |^2 dt dx = (k_n+k_n-1) ∫_t_n-1^t+1∫_Ω∑_ℓ≤ r+1| D^ℓ∂_t u(x,t) |^2 dx dt = (k_n+k_n-1) ∫_t_n-1^t_n+1 u_t_r+1^2 dt. Similarly u_n+1 - u_n_r+1^2≤ k_n∫_t_n^t_n+1 u_t_r+1^2 dt ∑_ℓ=0^2α_ℓ u_n-1+ℓ_r+1^2≤ C(θ) ( u_n+1 - u_n_r+1^2 + u_n+1 - u_n-1_r+1^2) ≤ C(θ) (k_n+k_n-1) ∫_t_n-1^t_n+1 u_t_r+1^2 dt. u_n+1 - u_n_r+1^2 = ∫_t_n^t_n+1 u_t(·,t) dt _r+1^2≤ k_n∫_t_n^t_n+1 u_t_r+1^2 dt, u_n+1 - u_n-1_r+1^2 = ∫_t_n-1^t_n+1 u_t(·,t) dt _r+1^2≤ (k_n+k_n-1) ∫_t_n-1^t_n+1 u_t_r+1^2 dt. We combine (<ref>) and (<ref>), 1/k_n( ∑_ℓ=0^2α_ℓη_n-1+ℓ, ϕ_n,β^h) ≤C(θ) h^2r+2/νk_n∫_t_n-1^t_n+1 u_t_r+1^2 dt + ν/16∇ϕ_n,β^h^2. By the definition of the Stokes projection, ν( ∇η_n,β, ∇ϕ_n,β^h) = 0. We set η_n = β _2^(n)[(1 + k_n/k_n-1) η_n - k_n/k_n-1η_n-1] + β _1^(n)η_n + β _0^(n)η_n-1, ϕ_n^h = β _2^(n)[(1 + k_n/k_n-1) ϕ_n^h - k_n/k_n-1ϕ_n-1^h] + β _1^(n)ϕ_n^h + β _0^(n)ϕ_n-1^h. 
The non-linear terms in (<ref>) become b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) = b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) = b ( u_n - u_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h, u_n,β - u_n,β^h, ϕ_n,β^h) = b ( η_n - ϕ_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h, η_n,β - ϕ_n,β^h, ϕ_n,β^h) = b ( η_n, u_n,β, ϕ_n,β^h) - b ( ϕ_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h, η_n,β, ϕ_n,β^h) - b ( u_n^h, ϕ_n,β^h, ϕ_n,β^h) = b ( η_n, u_n,β, ϕ_n,β^h) - b ( ϕ_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h - u_n, η_n,β, ϕ_n,β^h) + b ( u_n, η_n,β, ϕ_n,β^h) = b ( η_n, u_n,β, ϕ_n,β^h) - b ( ϕ_n^h, u_n,β, ϕ_n,β^h) + b ( ϕ_n^h - η_n, η_n,β, ϕ_n,β^h) + b ( u_n, η_n,β, ϕ_n,β^h) b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) = b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β, ϕ_n,β^h) + b ( u_n^h, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) = b (η_n, u_n,β, ϕ_n,β^h) - b (ϕ_n^h, u_n,β, ϕ_n,β^h) + b (u_n^h, η_n,β, ϕ_n,β^h) = b (η_n, u_n,β, ϕ_n,β^h) - b (ϕ_n^h, u_n,β, ϕ_n,β^h) + b (ϕ_n^h, η_n,β, ϕ_n,β^h) - b (η_n, η_n,β, ϕ_n,β^h) + b (u_n, η_n,β, ϕ_n,β^h). By (<ref>), (<ref>), (<ref>) and (<ref>), Poincaré inequality and step requirement in (<ref>) b ( η_n, u_n,β, ϕ_n,β^h) ≤ C ∇η_n∇ u_n,β∇ϕ_n,β^h, b ( ϕ_n^h, u_n,β, ϕ_n,β^h) ≤ C(θ) u_n,β_2( ϕ_n^h + ϕ_n-1^h) ∇ϕ_n,β^h, b ( u_n, η_n,β, ϕ_n,β^h) ≤ C(θ) ( ∇ u_n + ∇ u_n-1) ∇η_n,β∇ϕ_n,β^h, b ( ϕ_n^h, η_n,β, ϕ_n,β^h) ≤ C ϕ_n^h^1/2∇ϕ_n^h^1/2∇η_n,β∇ϕ_n,β^h ≤ C h ϕ_n^h^1/2∇ϕ_n^h^1/2 u_n,β_2∇ϕ_n,β^h ≤ C(θ) h^1/2( ϕ_n^h + ϕ_n-1^h) u_n,β_2∇ϕ_n,β^h b ( η_n, η_n,β, ϕ_n,β^h) ≤ C(θ) ( ∇η_n + ∇η_n-1) ∇η_n,β∇ϕ_n,β^h ≤ C(θ) ( ∇ u_n + ∇ u_n-1) ∇η_n,β∇ϕ_n,β^h. We apply Young's inequality to all non-linear terms in (<ref>) b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) ≤ C(θ) ( u_n,β_2( ϕ_n^h + ϕ_n-1^h) + ∇η_n∇ u_n,β + ( ∇ u_n + ∇ u_n-1) ∇η_n,β) ∇ϕ_n,β^h ≤ C(θ)/ν( u_n,β_2^2( ϕ_n^h^2 + ϕ_n-1^h^2) + ∇η_n^2∇ u_n,β^2 + ( ∇ u_n^2 + ∇ u_n-1^2) ∇η_n,β^2) + ν/16∇ϕ_n,β^h^2 ≤ C(θ)/ν( u_n,β_2^2( ϕ_n^h^2 + ϕ_n-1^h^2) + | ∇ u | _∞,0^2∇η_n^2 + |∇ u| _∞,0^2∇η_n,β^2) + ν/16∇ϕ_n,β^h^2 b ( u_n, u_n,β, ϕ_n,β^h) - b ( u_n^h, u_n,β^h, ϕ_n,β^h) ≤ C(θ)/ν[ u_n,β_2^2(ϕ_n^h^2+ϕ_n-1^h^2) + | ∇ u | _∞,0^2∇η_n^2+ | ∇ u| _∞,0^2∇η_n,β^2] +ν/16∇ϕ_n,β^h^2. By (<ref>), triangle inequality and (<ref>) in Lemma <ref> ∇η_n^2≤ C h^2ru_n_r+1^2≤ C h^2r( u_n - u(t_n,β) _r+1^2 + u(t_n,β) _r+1^2) ≤ C(θ) h^2r (k_n+k_n-1)^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + C h^2r u(t_n,β) _r+1^2 ∇η_n,β^2≤ C h^2ru_n,β_r+1^2≤ C h^2r( u_n,β - u(t_n,β) _r+1^2 + u(t_n,β) _r+1^2) ≤ C(θ) h^2r (k_n+k_n-1)^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + C h^2r u(t_n,β) _r+1^2 ∇η_n^2≤ C h^2ru_n_r+1^2≤ C(θ) h^2r (k_n+k_n-1)^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + C h^2r u(t_n,β)_r+1^2, ∇η_n,β^2≤ C h^2ru_n,β_r+1^2≤ C(θ) h^2r (k_n+k_n-1)^3∫_t_n-1^t_n+1u_tt_r+1^2 dt + C h^2ru(t_n,β) _r+1^2. (<ref>) becomes b (u_n, u_n,β, ϕ_n,β^h) - b (u_n^h, u_n,β^h, ϕ_n,β^h) ≤ C(θ) u_n,β_2^2/ν(ϕ_n^h^2+ϕ_n-1^h^2) +ν/16∇ϕ_n,β^h^2 +C(θ) h^2r/ν | ∇ u | _∞,0^2( k_max^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + u(t_n,β) _r+1^2) We set q^h to be L^2 projection of p(t_n,β) onto Q^h in (<ref>) and use (<ref>) (p(t_n,β) - q^h, ∇·ϕ_n,β^h) ≤ √(d) p(t_n,β) - q^h∇ϕ_n,β^h≤Ch^2s+2/ν p(t_n,β) _s+1^2+ν/16∇ϕ_n,β^h^2 Now we treat τ_n(ϕ_n,β^h): by (<ref>) and (<ref>) in Lemma <ref> ( ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n - u(t_n,β), ϕ_n,β^h) ≤ ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n - u(t_n,β) _-1∇ϕ_n,β^h ≤ C(θ)/ν k_max^3∫_t_n-1^t_n+1 u_ttt_-1^2 dt + ν/16∇ϕ_n,β^h^2. ν( ∇ (u_n,β - u(t_n,β)), ∇ϕ_n,β^h) ≤ ν∇ (u_n,β - u(t_n,β)) ∇ϕ_n,β^h ≤ C(θ) ν k_max^3∫_t_n-1^t_n+1∇ u_tt^2 dt + ν/16∇ϕ_n,β^h^2. 
(f(t_n,β) - f_n,β,ϕ_n,β^h) ≤ f(t_n,β) - f_n,β_-1∇ϕ_n,β^h ≤ C(θ)/ν k_max^3∫_t_n-1^t_n+1 f_tt_-1^2 dt + ν/16∇ϕ_n,β^h^2. b ( u_n, u_n,β, ϕ_n,β^h) - b ( u(t_n,β), u(t_n,β), ϕ_n,β^h ) = b ( u_n, u_n,β, ϕ_n,β^h) - b ( u(t_n,β), u_n,β, ϕ_n,β^h) + b ( u(t_n,β), u_n,β, ϕ_n,β^h) - b ( u(t_n,β), u(t_n,β), ϕ_n,β^h ) By (<ref>) and (<ref>) in Lemma <ref> b ( u_n, u_n,β, ϕ_n,β^h) - b ( u(t_n,β), u(t_n,β), ϕ_n,β^h ) = b ( u_n - u(t_n,β), u_n,β, ϕ_n,β^h) + b ( u(t_n,β), u_n,β - u(t_n,β), ϕ_n,β^h) ≤ C ∇( u_n - u(t_n,β) ) ∇ u_n,β∇ϕ_n,β^h + C ∇ u(t_n,β) ∇( u_n,β - u(t_n,β) ) ∇ϕ_n,β^h ≤ C(θ)/ν k_max^3( |u| _∞,1^2 + |u| _∞,1,β^2) ∫_t_n-1^t_n+1∇ u_tt^2 dt + ν/16∇ϕ_n,β^h^2. C ∇( u_n - u(t_n,β) ) ∇ u_n,β∇ϕ_n,β^h≤ C ∇( u_n - u(t_n,β) ) |u| _∞,1∇ϕ_n,β^h ≤ ν/32∇ϕ_n,β^h^2 + C/ν∇( u_n - u(t_n,β) ) ^2 |u| _∞,1^2 ≤ ν/32∇ϕ_n,β^h^2 + C(θ)/ν (k_n + k_n-1)^3 |u| _∞,1^2∫_t_n-1^t_n+1∇ u_tt^2 dt ≤ ν/32∇ϕ_n,β^h^2 + C(θ)/ν (k_n + k_n-1)^3 |u| _∞,1^2∫_t_n-1^t_n+1∇ u_tt^2 dt C ∇ u(t_n,β) ∇( u_n,β - u(t_n,β) ) ∇ϕ_n,β^h≤ ν/32∇ϕ_n,β^h^2 + C/ν∇( u_n - u(t_n,β) ) ^2∇ u(t_n,β) ^2 ≤ ν/32∇ϕ_n,β^h^2 + C/ν |u| _∞,1,β^2∇( u_n - u(t_n,β) ) ^2 ≤ ν/32∇ϕ_n,β^h^2 + C(θ)/ν |u| _∞,1,β^2 (k_n+k_n-1)^3∫_t_n-1^t_n+1∇ u_tt^2 dt We combine (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and sum (<ref>) over n from 1 to N-1 ϕ_n+1^h ϕ_n^h _G(θ)^2 - ϕ_n^h ϕ_n-1^h _G(θ)^2 + ∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h ^2 + νk_n∇ϕ_n,β^h^2 ≤ C(θ) h^2r+2/ν∫_t_n-1^t_n+1 u_t_r+1^2 dt + νk_n/16∇ϕ_n,β^h^2 + C(θ)/νk_n u_n,β_2^2( ϕ_n^h^2 + ϕ_n-1^h^2) + νk_n/16∇ϕ_n,β^h^2 + C(θ) h^2r/ν | ∇ u | _∞,0^2( k_max^4∫_t_n-1^t_n+1 u_tt_r+1^2 dt + k_n u(t_n,β) _r+1^2) + Ch^2s+2/νk_n p(t_n,β) _s+1^2 + νk_n/16∇ϕ_n,β^h^2 + C(θ)/ν k_max^4∫_t_n-1^t_n+1 u_ttt_-1^2 dt + νk_n/16∇ϕ_n,β^h^2 + C(θ) ν k_max^4∫_t_n-1^t_n+1∇ u_tt^2 dt + νk_n/16∇ϕ_n,β^h^2 + C(θ)/ν k_max^4∫_t_n-1^t_n+1 f_tt_-1^2 dt + νk_n/16∇ϕ_n,β^h^2 + C(θ)/ν k_max^4( |u| _∞,1^2 + |u| _∞,1,β^2) ∫_t_n-1^t_n+1∇ u_tt^2 dt + νk_n/16∇ϕ_n,β^h^2 ϕ_N^h ϕ_N-1^h _G(θ)^2 - ϕ_1^h ϕ_0^h _G(θ)^2 + ∑_n=1^N-1∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h ^2 + ν/2∑_n=1^N-1k_n∇ϕ_n,β^h^2 ≤ C(θ) h^2r+2/ν∑_n=1^N-1∫_t_n-1^t_n+1 u_t_r+1^2 dt + C(θ)/ν∑_n=1^N-1k_n u_n,β_2^2( ϕ_n^h^2 + ϕ_n-1^h^2) + C(θ) h^2r/ν |∇ u| _∞,0^2( k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_tt_r+1^2 dt + ∑_n=1^N-1 (k_n + k_n-1) u(t_n,β) _r+1^2) + C(θ) h^2s+2/ν∑_n=1^N-1 (k_n + k_n-1) p(t_n,β) _s+1^2 + C(θ)/ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_ttt_-1^2 dt + C(θ) ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1∇ u_tt^2 dt + C(θ)/ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 f_tt_-1^2 dt + C(θ)/ν k_max^4( |u| _∞,1^2 + |u| _∞,1,β^2) ∑_n=1^N-1∫_t_n-1^t_n+1∇ u_tt^2 dt ϕ_N^h ϕ_N-1^h _G(θ)^2 - ϕ_1^h ϕ_0^h _G(θ)^2 + ∑_n=1^N-1∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h ^2 + ν/2∑_n=1^N-1k_n∇ϕ_n,β^h^2 ≤ C(θ) h^2r+2/ν∑_n=1^N-1∫_t_n-1^t_n+1 u_t_r+1^2 dt + C(θ)/ν∑_n=1^N-1k_n u_n,β_2^2( ϕ_n^h^2 + ϕ_n-1^h^2) + C(θ) h^2r/ν |∇ u| _∞,0^2( k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_tt_r+1^2 dt + ∑_n=1^N-1 (k_n + k_n-1) u(t_n,β) _r+1^2) + C(θ)h^2s+2/ν∑_n=1^N-1 (k_n + k_n-1) p(t_n,β) _s+1^2 + C(θ)k_max^4/ν∑_n=1^N-1∫_t_n-1^t_n+1 u_ttt_-1^2 dt + C(θ) ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1∇ u_tt^2 dt + C(θ) k_max^4/ν∑_n=1^N-1∫_t_n-1^t_n+1 f_tt_-1^2 dt + C(θ) k_max^4/ν( |u| _∞,1^2 + |u| _∞,1,β^2) ∑_n=1^N-1∫_t_n-1^t_n+1∇ u_tt^2dt. 
By the definition of the ·_G(θ)-norm in (<ref>), (<ref>) becomes ϕ_N^h^2 + C(θ) ν∑_n=1^N-1k_n∇ϕ_n,β^h^2 ≤ C(θ)/ν[k_N-1 u_N-1,β_2^2ϕ_N-1^h^2 +∑_n=1^N-2(k_n+1 u_n+1,β_2^2 +k_n u_n,β_2^2)ϕ_n^h^2+k_1 u_1,β_2^2ϕ_0^h^2] + C(θ) h^2r+2/ν u_t_2,r+1^2 + C(θ) h^2r/ν |∇ u| _∞,0^2( k_max^4 u_tt_2,r+1^2 + |u|_2,r+1,β^2) + C(θ)h^2s+2/ν |p| _2,s+1,β^2+C(θ)k_max^4/ν u_ttt_2,-1^2+ C(θ) ν k_max^4∇ u_tt_2,0^2+C(θ) k_max^4/ν f_tt_2,-1^2 + C(θ) k_max^4/ν( |u| _∞,1^2+ |u| _∞,1,β^2) ∇ u_tt_2,0^2 + C(θ) ( ϕ_1^h^2 + ϕ_0^h^2 ). By the discrete Grönwall inequality without restrictions (<cit.>), (<ref>) becomes The coefficient is ∑_n=1^N-1k_n u_n,β_2^2≤ ∑_n=1^N-1 (k_n + k_n-1) u_n,β - u(t_n,β) _2^2 + ∑_n=1^N-1 (k_n + k_n-1) u(t_n,β) _2^2 ≤ C(θ) ∑_n=1^M-1 (k_n + k_n-1)^4∫_t_n-1^t_n+1 u_tt_2^2 dt + ∑_n=1^M-1 (k_n + k_n-1) u(t_n,β) _2^2 ≤ C(θ) k_max^4 u_tt_2,2^2 + |u| _2,2,β^2 ϕ_N^h^2 + C(θ) ν∑_n=1^N-1k_n∇ϕ_n,β^h^2 ≤exp( C(θ)/ν∑_n=1^N-1k_n u_n,β_2^2) F_1, where F_1 = C(θ) h^2r+2/ν u_t_2,r+1^2 + C(θ) h^2r/ν |∇ u| _∞,0^2( k_max^4 u_tt_2,r+1^2 + |u|_2,r+1,β^2) +C(θ) h^2s+2/ν |p| _2,s+1,β^2+C(θ)k_max^4/ν u_ttt_2,-1^2+ C(θ) ν k_max^4∇ u_tt_2,0^2 +C(θ) k_max^4/ν f_tt_2,-1^2+C(θ) k_max^4/ν( |u| _∞,1^2+ |u| _∞,1,β^2) ∇ u_tt_2,0^2 + C(θ) ( ϕ_1^h^2 + ϕ_0^h^2 ). By triangle inequality and (<ref>) in Lemma <ref>, (<ref>) can be simplifed ϕ_N^h^2 + C(θ) ν∑_n=1^N-1k_n∇ϕ_n,β^h^2 ≤exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] F_1. By triangle inequality, (<ref>), (<ref>) in Lemma <ref> and (<ref>) max_0 ≤ n ≤ N e_n^u≤max_0 ≤ n ≤ Nϕ_n^h + max_0 ≤ n ≤ Nη_n ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + Ch^r |u| _∞,r. ( ν∑_n=1^N-1k_n∇ e_n,β^u^2)^1/2 ≤ ( 2 ν∑_n=1^N-1k_n∇ϕ_n,β^h^2)^1/2 + ( 2 ν∑_n=1^N-1k_n∇η_n,β^2)^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + Ch^r( ν∑_n=1^N-1k_n u_n,β_r+1^2 )^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + C√(ν) h^r( ∑_n=1^N-1k_n u_n,β - u(t_n,β) _r+1^2 + ∑_n=1^N-1k_n u(t_n,β) _r+1^2 )^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + C√(ν) h^r( C(θ) ∑_n=1^M-1 (k_n+k_n-1)^4∫_t_n-1^t_n+1 u_tt_r+1^2 dt + ∑_n=1^M-1 (k_n+k_n-1) u(t_n,β) _r+1^2 )^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + C(θ) √(ν) h^r( k_max^4 u_tt_2,r+1^2 + |u| _2,r+1,β^2 )^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + C(θ) √(ν) h^r( k_max^2 u_tt_2,r+1 + |u| _2,r+1,β), max_0 ≤ n ≤ N e_n^u≤ max_0 ≤ n ≤ Nϕ_n^h + max_0 ≤ n ≤ Nη_n ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + Ch^r |u| _∞,r. (ν∑_n=1^N-1k_n∇ e_n,β^u^2)^1/2≤ ( 2 ν∑_n=1^N-1k_n∇ϕ_n,β^h^2)^1/2+( 2 ν∑_n=1^N-1k_n∇η_n,β^2)^1/2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + C(θ) √(ν) h^r( k_max^2 u_tt_2,r+1 + |u| _2,r+1,β). We combine (<ref>) and (<ref>) max_0 ≤n ≤ N e_n^u+(ν∑_n=1^N-1k_n∇ e_n,β^u^2)^1/2 ≤exp[C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + Ch^r |u| _∞,r+ C(θ) √(ν) h^r( k_max^2 u_tt_2,r+1+ |u| _2,r+1,β), which implies (<ref>). Suppose the velocity u ∈ X and the pressure p ∈ Q of the NSE in (<ref>) satisfy u ∈ℓ^∞(0,N;H^r) ∩ℓ^∞(0,N;H^2) ∩ℓ^∞,β(0,N;H^2) ∩ℓ^2,β(0,N;H^r+1∩ H^2), u_t∈ L^2(0,T;H^r+1), u_tt∈ L^2(0,T;H^r+1∩ H^2), u_ttt∈ L^2(0,T;X' ∩ L^2), p ∈ℓ^∞(0,N;H^s+1) ∩ℓ^2,β(0,N;H^s+1), p_t∈ L^2(0,T;H^s+1), p_tt∈ L^2(0,T;H^s+1∩ H^1), and body force f ∈ L^2(0,T;X'∩ L^2). 
Under the time step bounds in (<ref>) and the time-diameter condition k_max≤ h^1/4, the numerical solutions by the semi-implicit DLN algorithm in (<ref>) satisfy max_0 ≤ n ≤ M u_n - u_n^h_1≤𝒪( h^r,h^s+1, k_max^2), ∑_n=1^M-1k_n/ν∑_ℓ=0^2α_ℓ(u_n-1+ℓ^h- u_n-1+ℓ)/k_n^2≤𝒪( h^r,h^s+1, k_max^2). Let (P_S^(u) u_n, P_S^(p) p_n) be Stokes projection of (u_n,p_n) onto V^h× Q^h. We set ϕ_n^h = u_n^h - P_S^(u) u_n, η_n = u_n - P_S^(u) u_n, e_n^u = ϕ_n^h - η_n, e_n^u = u_n^h - u_n, ϕ_n,α^h=∑_ℓ=0^2α_ℓϕ_n-1+ℓ^h, η_n,α=∑_ℓ=0^2α_ℓη_n-1+ℓ, e_n,α^u=∑_ℓ=0^2α_ℓ(u_n-1+ℓ^h- u_n-1+ℓ) =η_n,α-ϕ_n,α^h. We let v^h = k_n^-1ϕ_n,α^h in (<ref>). By the G-stability identity in (<ref>), (<ref>) becomes k_n^-1ϕ_n,α^h^2 + ν/k_n(∇ϕ_n+1^h ∇ϕ_n^h _G(θ)^2 - ∇ϕ_n^h ∇ϕ_n-1^h _G(θ)^2 + ∇(∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h) ^2) = (k_n^-1∑_ℓ=0^2α_ℓη_n-1+ℓ, k_n^-1ϕ_n,α^h) + ν( ∇η_n,β, ∇k_n^-1ϕ_n,α^h) -( p_n,β, ∇·k_n^-1ϕ_n,α^h) -τ_n(k_n^-1ϕ_n,α^h) + b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) - b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) + (p_n,β - p(t_n,β), ∇·k_n^-1ϕ_n,α^h). By Cauchy Schwarz inequality, Young's inequality, (<ref>), (<ref>) and Hölder's inequality ( k_n^-1∑_ℓ=0^2α_ℓη_n-1+ℓ, k_n^-1ϕ_n,α^h) ≤k_n^-1∑_ℓ=0^2α_ℓη_n-1+ℓk_n^-1ϕ_n,α^h ≤ C/k_n^2∑_ℓ=0^2α_ℓη_n-1+ℓ^2 + 1/16k_n^-1ϕ_n,α^h^2 ≤ C/k_n^2( h^2r+2∑_ℓ=0^2α_ℓ u_n-1+ℓ_r+1^2 + h^2s+4/ν^2∑_ℓ=0^2α_ℓ p_n-1+ℓ_s+1^2) + 1/16k_n^-1ϕ_n,α^h^2 ≤ C/k_n^2( h^2r+2 C(θ) (k_n+k_n-1) ∫_t_n-1^t_n+1 u_t_r+1^2 dt + h^2s+4/ν^2 C(θ) (k_n+k_n-1) ∫_t_n-1^t_n+1 p_t_s+1^2 dt ) + 1/16k_n^-1ϕ_n,α^h^2 ( k_n^-1∑_ℓ=0^2α_ℓη_n-1+ℓ, k_n^-1ϕ_n,α^h) ≤ C/k_n^2( h^2r+2∑_ℓ=0^2α_ℓ u_n-1+ℓ_r+1^2 + h^2s+4/ν^2∑_ℓ=0^2α_ℓ p_n-1+ℓ_s+1^2) + 1/16k_n^-1ϕ_n,α^h^2 ≤ C(θ)/k_n( h^2r+2∫_t_n-1^t_n+1 u_t_r+1^2 dt + h^2s+4/ν^2∫_t_n-1^t_n+1 p_t_s+1^2 dt ) + 1/16k_n^-1ϕ_n,α^h^2. By the definition of Stokes projection and the fact that k_n^-1ϕ_n,α^h∈ V^h⊂ X^h ν( ∇η_n,β, ∇k_n^-1ϕ_n,α^h) -( p_n,β, ∇·k_n^-1ϕ_n,α^h) = ν( ∇η_n,β, ∇k_n^-1ϕ_n,α^h) -( p_n,β - P_S^(p)p_n,β , ∇·k_n^-1ϕ_n,α^h) = 0. ν( ∇η_n,β, ∇k_n^-1ϕ_n,α^h) -( p_n,β, ∇·k_n^-1ϕ_n,α^h) = 0. For τ(k_n^-1ϕ_n,α^h): ( ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n- u(t_n,β), k_n^-1ϕ_n,α^h) ≤ ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n- u(t_n,β) k_n^-1ϕ_n,α^h ≤ C(θ) k_max^3∫_t_n-1^t_n+1 u_ttt^2 dt + 1/16k_n^-1ϕ_n,α^h^2. ν( ∇ (u_n,β - u(t_n,β)), ∇k_n^-1ϕ_n,α^h) = ν( Δ (u_n,β - u(t_n,β)), k_n^-1ϕ_n,α^h) ≤ C(θ) ν^2 k_max^3∫_t_n-1^t_n+1 u_tt_2^2 dt + 1/16k_n^-1ϕ_n,α^h^2. (f(t_n,β) - f_n,β,k_n^-1ϕ_n,α^h) ≤ f(t_n,β) - f_n,βk_n^-1ϕ_n,α^h ≤ C(θ) k_max^3∫_t_n-1^t_n+1 f_tt^2 dt + 1/16k_n^-1ϕ_n,α^h^2. b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) - b ( u(t_n,β), u(t_n,β), k_n^-1ϕ_n,α^h ) = b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) - b ( u(t_n,β), u_n,β, k_n^-1ϕ_n,α^h) + b ( u(t_n,β), u_n,β, k_n^-1ϕ_n,α^h) - b ( u(t_n,β), u(t_n,β), k_n^-1ϕ_n,α^h ) By (<ref>), (<ref>) and (<ref>) b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) - b ( u(t_n,β), u(t_n,β), k_n^-1ϕ_n,α^h ) = b ( u_n - u(t_n,β), u_n,β, k_n^-1ϕ_n,α^h) + b ( u(t_n,β), u_n,β - u(t_n,β), k_n^-1ϕ_n,α^h) ≤ C u_n - u(t_n,β) _1 u_n,β_2k_n^-1ϕ_n,α^h + C u(t_n,β) _2 u_n,β - u(t_n,β) _1k_n^-1ϕ_n,α^h ≤ C(θ) k_max^3( |u| _∞,2^2 + |u| _∞,2,β^2) ∫_t_n-1^t_n+1 u_tt_1^2 dt + 1/16k_n^-1ϕ_n,α^h^2. 
b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) = b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β^h, k_n^-1ϕ_n,α^h) + b ( u_n, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) = b ( e_n, u_n,β^h, k_n^-1ϕ_n,α^h) + b ( u_n, e_n,β, k_n^-1ϕ_n,α^h) = b ( e_n, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( e_n, u_n,β, k_n^-1ϕ_n,α^h) + b ( e_n, u_n,β, k_n^-1ϕ_n,α^h) + b ( u_n, e_n,β, k_n^-1ϕ_n,α^h) = b ( e_n, u_n,β, k_n^-1ϕ_n,α^h) + b ( u_n, e_n,β, k_n^-1ϕ_n,α^h) + b ( e_n, e_n,β, k_n^-1ϕ_n,α^h) For non-linear terms b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) = b ( e_n^u, u_n,β, k_n^-1ϕ_n,α^h) + b ( u_n, e_n,β^u, k_n^-1ϕ_n,α^h) + b ( e_n^u, e_n,β^u, k_n^-1ϕ_n,α^h). By (<ref>), (<ref>) and inverse inequality in (<ref>) b ( e_n^u, u_n,β, k_n^-1ϕ_n,α^h) = b ( ϕ_n^h, u_n,β, k_n^-1ϕ_n,α^h) - b ( η_n, u_n,β, k_n^-1ϕ_n,α^h) ≤ C ϕ_n^h_1 u_n,β_2k_n^-1ϕ_n,α^h + C η_n_1 u_n,β_2k_n^-1ϕ_n,α^h, b ( u_n, e_n,β^u, k_n^-1ϕ_n,α^h) ≤ C u_n_2 e_n,β^u_1k_n^-1ϕ_n,α^h, b ( e_n, e_n,β^u, k_n^-1ϕ_n,α^h) = b ( ϕ_n^h, e_n,β^u, k_n^-1ϕ_n,α^h) - b ( η_n, e_n,β^u, k_n^-1ϕ_n,α^h) ≤ C h^-1/2ϕ_n^h_1 e_n,β^u_1k_n^-1ϕ_n,α^h + C h^-1/2η_n_1 e_n,β^u_1k_n^-1ϕ_n,α^h. Thus b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) ≤ C ( u_n,β_2 + h^-1/2 e_n,β^u_1) ϕ_n^h_1k_n^-1ϕ_n,α^h + C η_n_1 u_n,β_2k_n^-1ϕ_n,α^h + C u_n_2 e_n,β^u_1k_n^-1ϕ_n,α^h + C h^-1/2η_n_1 e_n,β^u_1k_n^-1ϕ_n,α^h. By Cauchy-Schwarz inequality, Young's inequality, Poincaré inequality, (<ref>), (<ref>) and (<ref>) C ( u_n,β_2 + h^-1/2 e_n,β^u_1) ϕ_n^h_1k_n^-1ϕ_n,α^h ≤ C(θ) ( u_n,β_2+ h^-1/2 e_n,β^u_1) ( ∇ϕ_n^h + ∇ϕ_n-1^h) + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) ( u_n,β_2^2 + h^-1 e_n,β_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + 1/64k_n^-1ϕ_n,α^h^2, C η_n_1 u_n,β_2k_n^-1ϕ_n,α^h ≤ C(θ) |u| _∞,2^2( h^2s+2/ν^2p_n_s+1^2 + h^2ru_n_r+1^2) + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) |u| _∞,2^2[ h^2s+2/ν^2( p_n - p(t_n,β) _s+1^2 + p(t_n,β)_s+1^2) + h^2r( u_n - u(t_n,β) _r+1^2 + u(t_n,β) _r+1^2) ] + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) |u| _∞,2^2[ h^2s+2/ν^2( C(θ) k_max^3∫_t_n-1^t_n+1 p_tt_s+1^2 dt + p(t_n,β)_s+1^2) + h^2r( C(θ) k_max^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + u(t_n,β)_r+1^2) ] + 1/64k_n^-1ϕ_n,α^h^2, C u_n_2 e_n,β^u_1k_n^-1ϕ_n,α^h≤ C u_n_2^2 e_n,β^u_1^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C |u| _∞,2^2 e_n,β^u_1^2 + 1/64k_n^-1ϕ_n,α^h^2, C h^-1/2η_n_1 e_n,β^u_1k_n^-1ϕ_n,α^h ≤ C h^-1η_n_1^2 e_n,β^u_1^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C ( h^2s+1/ν^2 |p| _∞,s+1^2 + h^2r-1 |u| _∞,r+1^2) e_n,β^u_1^2 + 1/64k_n^-1ϕ_n,α^h^2. C ( ϕ_n^h_1 u_n,β_2 + h^-1/2ϕ_n^h_1 e_n,β_1) k_n^-1ϕ_n,α^h ≤ C ( ϕ_n^h_1^2 u_n,β_2^2 + h^-1ϕ_n^h_1^2 e_n,β_1^2) + 1/64k_n^-1ϕ_n,α^h^2 ≤ C ( ∇ϕ_n^h^2 u_n,β_2^2 + h^-1∇ϕ_n^h^2 e_n,β_1^2) + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) ( u_n,β_2^2 + h^-1 e_n,β_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + 1/64k_n^-1ϕ_n,α^h^2. C η_n_1 u_n,β_2k_n^-1ϕ_n,α^h≤ C η_n_1^2 u_n,β_2^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) |u| _∞,2^2( h^2s+2/ν^2p_n_s+1^2 + h^2ru_n_r+1^2) + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) |u| _∞,2^2[ h^2s+2/ν^2( p_n - p(t_n,β) _s+1^2 + p(t_n,β)_s+1^2) + h^2r( u_n - u(t_n,β) _r+1^2 + u(t_n,β) _r+1^2) ] + 1/64k_n^-1ϕ_n,α^h^2 ≤ C(θ) |u| _∞,2^2[ h^2s+2/ν^2( C(θ) k_max^3∫_t_n-1^t_n+1 p_tt_s+1^2 dt + p(t_n,β)_s+1^2) + h^2r( C(θ) k_max^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + u(t_n,β)_r+1^2) ] + 1/64k_n^-1ϕ_n,α^h^2 C u_n_2 e_n,β_1k_n^-1ϕ_n,α^h≤ C u_n_2^2 e_n,β_1^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C |u| _∞,2^2 e_n,β_1^2 + 1/64k_n^-1ϕ_n,α^h^2. 
C h^-1/2η_n_1 e_n,β_1k_n^-1ϕ_n,α^h≤ C h^-1η_n_1^2 e_n,β_1^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C h^-1( h^2s+2/ν^2p_n_s+1^2 + h^2ru_n_r+1^2) e_n,β_1^2 + 1/64k_n^-1ϕ_n,α^h^2 ≤ C ( h^2s+1/ν^2 |p| _∞,s+1^2 + h^2r-1 |u| _∞,r+1^2) e_n,β_1^2 + 1/64k_n^-1ϕ_n,α^h^2. By (<ref>), (<ref>), (<ref>) and (<ref>), (<ref>) becomes b ( u_n^h, u_n,β^h, k_n^-1ϕ_n,α^h) - b ( u_n, u_n,β, k_n^-1ϕ_n,α^h) ≤ C(θ) ( u_n,β_2^2 + h^-1 e_n,β^u_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + C |u| _∞,2^2 e_n,β^u_1^2 + C ( h^2s+1/ν^2 |p| _∞,s+1^2 + h^2r-1 |u| _∞,r+1^2) e_n,β_1^2 + 1/16k_n^-1ϕ_n,α^h^2 + C(θ) |u| _∞,2^2[ h^2s+2/ν^2( k_max^3∫_t_n-1^t_n+1 p_tt_s+1^2 dt + p(t_n,β)_s+1^2) + h^2r( k_max^3∫_t_n-1^t_n+1 u_tt_r+1^2 dt + u(t_n,β)_r+1^2) ]. We use integration by parts and (<ref>) in Lemma <ref> (p_n,β - p(t_n,β), ∇·k_n^-1ϕ_n,α^h) = - (∇ (p_n,β - p(t_n,β)), k_n^-1ϕ_n,α^h) ≤ C ∇ (p_n,β - p(t_n,β)) ^2 + 1/16k_n^-1ϕ_n,α^h^2 ≤ C(θ) (k_n + k_n-1)^3 ∫_t_n-1^t_n+1∇ p_tt^2 dt + 1/16k_n^-1ϕ_n,α^h^2. (p_n,β - p(t_n,β), ∇·k_n^-1ϕ_n,α^h) ≤ C ∇ (p_n,β - p(t_n,β)) ^2 + 1/16k_n^-1ϕ_n,α^h^2 ≤ C(θ) k_max^3 ∫_t_n-1^t_n+1∇ p_tt^2 dt + 1/16k_n^-1ϕ_n,α^h^2. We combine (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and sum (<ref>) over n from 1 to N-1 ν(∇ϕ_n+1^h ∇ϕ_n^h _G(θ)^2 - ∇ϕ_n^h ∇ϕ_n-1^h _G(θ)^2 + ∇(∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h) ^2) + k_n/2k_n^-1ϕ_n,α^h^2 ≤ C(θ) ( k_n u_n,β_2^2 + 1/h ννk_n e_n,β_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + C/ν |u| _∞,2^2νk_n e_n,β_1^2 + C ( h^2s+1/ν^3 |p| _∞,s+1^2 + h^2r-1/ν |u| _∞,r+1^2) νk_n e_n,β_1^2 + C(θ) |u| _∞,2^2[ h^2s+2/ν^2( k_max^4∫_t_n-1^t_n+1 p_tt_s+1^2 dt + (k_n + k_n-1) p(t_n,β)_s+1^2) + h^2r( k_max^4∫_t_n-1^t_n+1 u_tt_r+1^2 dt + (k_n + k_n-1) u(t_n,β)_r+1^2) ] + C(θ) ( h^2r+2∫_t_n-1^t_n+1 u_t_r+1^2 dt + h^2s+4/ν^2∫_t_n-1^t_n+1 p_t_s+1^2 dt ) + C(θ) k_max^4∫_t_n-1^t_n+1 u_ttt^2 dt + C(θ) ν^2 k_max^4∫_t_n-1^t_n+1 u_tt_2^2 dt + C(θ) k_max^4∫_t_n-1^t_n+1 f_tt^2 dt + C(θ) k_max^4 |u| _∞,2^2∫_t_n-1^t_n+1 u_tt_1^2 dt + C(θ) k_max^4∫_t_n-1^t_n+1∇ p_tt^2 dt. ∇ϕ_N^h ∇ϕ_N-1^h _G(θ)^2 - ∇ϕ_1^h ∇ϕ_0^h _G(θ)^2 + ∑_n=1^N-1∇(∑_ℓ=0^2γ_ℓ^(n)ϕ_n-1+ℓ^h) ^2 + ∑_n=1^N-1k_n/2νk_n^-1ϕ_n,α^h^2 ≤ C(θ)/ν∑_n=1^N-1( k_n u_n,β_2^2 + 1/h ννk_n e_n,β_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + C/ν^2 |u| _∞,2^2∑_n=1^N-1νk_n e_n,β_1^2 + C ( h^2s+1/ν^4 |p| _∞,s+1^2 + h^2r-1/ν^2 |u| _∞,r+1^2) ∑_n=1^N-1νk_n e_n,β_1^2 + C(θ) |u| _∞,2^2[ h^2s+2/ν^3( k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 p_tt_s+1^2 dt + ∑_n=1^N-1 (k_n + k_n-1) p(t_n,β)_s+1^2) + h^2r/ν( k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_tt_r+1^2 dt + ∑_n=1^N-1 (k_n + k_n-1) u(t_n,β)_r+1^2) ] + C(θ) ( h^2r+2/ν∑_n=1^N-1∫_t_n-1^t_n+1 u_t_r+1^2 dt + h^2s+4/ν^3∑_n=1^N-1∫_t_n-1^t_n+1 p_t_s+1^2 dt ) + C(θ)/ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_ttt^2 dt + C(θ) ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 u_tt_2^2 dt + C(θ)/ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 f_tt^2 dt + C(θ)/ν k_max^4 |u| _∞,2^2∑_n=1^N-1∫_t_n-1^t_n+1 u_tt_1^2 dt + C(θ)/ν k_max^4∑_n=1^N-1∫_t_n-1^t_n+1∇ p_tt^2 dt. ∇ϕ_N^h^2 + C(θ)/ν∑_n=1^N-1k_nk_n^-1ϕ_n,α^h^2 ≤ C(θ)/ν∑_n=1^N-1( k_n u_n,β_2^2 + 1/h ννk_n e_n,β_1^2) ( ∇ϕ_n^h^2 + ∇ϕ_n-1^h^2) + C(θ)/ν^2 |u| _∞,2^2∑_n=1^N-1νk_n e_n,β_1^2 + C(θ) ( h^2s+1/ν^4 |p| _∞,s+1^2 + h^2r-1/ν^2 |u| _∞,r+1^2) ∑_n=1^N-1νk_n e_n,β_1^2 + C(θ) |u| _∞,2^2[h^2s+2/ν^3( k_max^4 p_tt_2,s+1^2+ |p| _2,s+1,β^2) +h^2r/ν( k_max^4 u_tt_2,r+1^2+ |u| _2,r+1,β^2) ] + C(θ) ( h^2r+2/ν u_t_2,r+1^2 + h^2s+4/ν^3 p_t_2,s+1^2) + C(θ)/ν k_max^4 u_ttt_2,0^2 + C(θ) ν k_max^4 u_tt_2,2^2 + C(θ)/ν k_max^4 f_tt_2,0^2 + C(θ)/ν k_max^4 |u| _∞,2^2 u_tt_2,1^2 + C(θ)/ν k_max^4∇ p_tt_2,0^2 + C(θ) ( ∇ϕ_1^h^2 + ∇ϕ_0^h^2). 
Since ∑_n=1^N-1νk_n e_n,β^u_1^2 = ν∑_n=1^N-1k_n e_n,β^u^2 + ν∑_n=1^N-1k_n∇ e_n,β^u^2 ≤ C(θ) ν T max_0 ≤ n ≤ M e_n^u^2 + ν∑_n=1^N-1k_n∇ e_n,β^u^2, we use (<ref>) in the proof of Theorem <ref> to obtain ∑_n=1^N-1νk_n e_n,β^u_1^2≤ ( C(θ) ν T + 1 ) F_2^2, where F_2 = exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(F_1) + Ch^r |u| _∞,r + C(θ) √(ν) h^r( k_max^2 u_tt_2,r+1 + |u| _2,r+1,β), and F_1 is in (<ref>). We apply (<ref>) and discrete Gronwall inequality to (<ref>) ∇ϕ_N^h^2 + C(θ)/ν∑_n=1^N-1k_nk_n^-1ϕ_n,α^h^2 ≤ C(θ)/ν[ ( k_N-1 u_N-1,β_2^2 + 1/h ννk_N-1 e_N-1,β^u_1^2) ∇ϕ_N-1^h^2 + ∑_n=2^N-2(k_n+1 u_n+1,β_2^2 + k_n u_n,β_2^2 + 1/h ννk_n+1 e_n+1,β^u_1^2 + 1/h ννk_n e_n,β^u_1^2) ∇ϕ_n^h^2 + ( k_1 u_1,β_2^2 + 1/h ννk_1 e_1,β^u_1^2) ∇ϕ_0^h^2] + C(θ)(1 + ν T)/ν^2 |u| _∞,2^2 F_2^2 + C(θ)(1 + ν T) ( h^2s+1/ν^4 |p| _∞,s+1^2 + h^2r-1/ν^2 |u| _∞,r+1^2) F_2^2 + C(θ) |u| _∞,2^2[h^2s+2/ν^3( k_max^4 p_tt_2,s+1^2+ |p| _2,s+1,β^2) +h^2r/ν( k_max^4 u_tt_2,r+1^2+ |u| _2,r+1,β^2) ] + C(θ) ( h^2r+2/ν u_t_2,r+1^2 + h^2s+4/ν^3 p_t_2,s+1^2) + C(θ)/ν k_max^4 u_ttt_2,0^2 + C(θ) ν k_max^4 u_tt_2,2^2 + C(θ)/ν k_max^4 f_tt_2,0^2 + C(θ)/ν k_max^4 |u| _∞,2^2 u_tt_2,1^2 + C(θ)/ν k_max^4∇ p_tt_2,0^2 + C(θ) ( ∇ϕ_1^h^2 + ∇ϕ_0^h^2). ∇ϕ_N^h^2 + C(θ)/ν∑_n=1^N-1k_nk_n^-1ϕ_n,α^h^2 ≤ exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u|_2,2,β^2 + C(θ) ν T + 1 /h ν F_2^2) ] F_3, where F_3 = C(θ)(1 +ν T)/ν^2 |u| _∞,2^2 F_2^2+ C(θ)(1 +ν T) ( h^2s+1/ν^4 |p| _∞,s+1^2 + h^2r-1/ν^2 |u| _∞,r+1^2) F_2^2 + C(θ) |u| _∞,2^2[h^2s+2/ν^3( k_max^4 p_tt_2,s+1^2+ |p| _2,s+1,β^2) +h^2r/ν( k_max^4 u_tt_2,r+1^2+ |u| _2,r+1,β^2) ] + C(θ) ( h^2r+2/ν u_t_2,r+1^2 + h^2s+4/ν^3 p_t_2,s+1^2) + C(θ)/ν k_max^4 u_ttt_2,0^2 + C(θ) ν k_max^4 u_tt_2,2^2 + C(θ)/ν k_max^4 f_tt_2,0^2 + C(θ)/ν k_max^4 |u| _∞,2^2 u_tt_2,1^2 + C(θ)/ν k_max^4∇ p_tt_2,0^2 + C(θ) ( ∇ϕ_1^h^2 + ∇ϕ_0^h^2). By the time-diameter condition in (<ref>), h^-1 F_2^2 is bounded. Thus max_0 ≤ n ≤ N∇ e_n^u≤ max_0 ≤ n ≤ N∇η_n + max_0 ≤ n ≤ N∇ϕ_n^h ≤ Ch^r |u| _∞,r+1+Ch^s+1/ν |p| _∞,s+1 + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u|_2,2,β^2 + C(θ) ν T + 1 /h ν F_2^2) ] √(F_3). Combining Theorem <ref> and (<ref>), we have (<ref>). By (<ref>), (<ref>) and Hölder's inequality ∑_n=1^N-1k_n/νk_n^-1η_n,α^2 = ∑_n=1^N-11/νk_nη_n,α^2≤∑_n=1^N-1C/νk_n( h^2s+2/ν^2 p_n,α_s+1^2 + h^2r u_n,α_r^2) ≤ ∑_n=1^N-1C/νk_n[ h^2s+2/ν^2 C(θ)(k_n + k_n-1) ∫_t_n-1^t_n+1 p_t_s+1^2 dt + h^2r C(θ)(k_n + k_n-1) ∫_t_n-1^t_n+1 u_t_r^2 dt ] ≤ ∑_n=1^N-1C(θ)/ν[ h^2s+2/ν^2∫_t_n-1^t_n+1 p_t_s+1^2 dt + h^2r∫_t_n-1^t_n+1 u_t_r^2 dt ] = C(θ)/ν( h^2s+2/ν^2 p_t_2,s+1^2 + h^2r u_t_2,r^2) ∑_n=1^N-1k_n/νk_n^-1 e_n,α^2≤ 2 ∑_n=1^N-1k_n/νk_n^-1η_n,α^2 + 2 ∑_n=1^N-1k_n/νk_n^-1ϕ_n,α^h^2 ≤ C(θ)/ν( h^2s+2/ν^2 p_t_2,s+1^2 + h^2r u_t_2,r^2) + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2+ |u|_2,2,β^2+ C(θ) ν T + 1 /h ν F_2^2) ] · F_3 ∑_n=1^N-1k_n/νk_n^-1η_n,α^2≤ ∑_n=1^N-1C/νk_n( h^2s+2/ν^2∑_ℓ=0^2α_ℓ p_n-1+ℓ_s+1^2 + h^2r∑_ℓ=0^2α_ℓ u_n-1+ℓ_r^2) ≤ ∑_n=1^N-1C(θ)/ν[ h^2s+2/ν^2∫_t_n-1^t_n+1 p_t_s+1^2 dt + h^2r∫_t_n-1^t_n+1 u_t_r^2 dt ] = C(θ)/ν( h^2s+2/ν^2 p_t_2,s+1^2 + h^2r u_t_2,r^2). By (<ref>) and (<ref>) and triangle inequality ∑_n=1^N-1k_n/νk_n^-1 e_n,α^2≤ C(θ)/ν(h^2s+2/ν^2 p_t_2,s+1^2+ h^2r u_t_2,r^2) + exp[C(θ)/ν( k_max^4 u_tt_2,2^2+ |u|_2,2,β^2+ C(θ) ν T + 1 /h ν F_2^2) ] F_3, which implies (<ref>). 
Suppose the velocity u ∈ X and pressure p ∈ Q of the NSE in (<ref>) satisfy u ∈ℓ^∞(0,N;H^r+1) ∩ℓ^∞(0,N;H^2) ∩ℓ^∞,β(0,N;H^1) ∩ℓ^2,β(0,N;H^r+1∩ H^2), u_t∈ L^2(0,T;H^r+1), u_tt∈ L^2(0,T;H^r+1∩ H^2), u_ttt∈ L^2(0,T;X' ∩ L^2), p ∈ℓ^∞(0,N;H^s+1) ∩ℓ^2,β(0,N;H^s+1), p_t∈ L^2(0,T;H^s+1), p_tt∈ L^2(0,T;H^s+1∩ H^1), and the body force f ∈ L^2(0,T;X'∩ L^2), then under the time step bounds in (<ref>) and the time-diameter condition in (<ref>), the pressure component by the algorithm in (<ref>) satisfy ( ∑_n=1^N-1k_n p_n,β - p_n,β^h^2)^1/2≤𝒪( h^r,h^s+1, k_max^2). Moreover for constant time-stepping DLN algorithm with parameter θ∈ (0,1) and constant time step k, we have ∑_n=0^N k p_n - p_n^h≤𝒪( h^r,h^s+1, k^2). Let (P_S^(u) u_n, P_S^(p) p_n) be Stokes projection of (u_n,p_n) onto V^h× Q^h. We set ϕ_n^h = u_n^h - P_S^(u) u_n, η_n = u_n - P_S^(u) u_n, e_n^u = ϕ_n^h - η_n e_n,α^u = ∑_ℓ=0^2α_ℓ e_n-1+ℓ^u, e_n^u = u_n^h - u_n. We let v^h∈ X^h in (<ref>) and subtract (<ref>) from the first equation of (<ref>) 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ^h, v^h) - 1/k_n( ∑_ℓ=0^2α_ℓ u_n-1+ℓ, v^h) + b ( u_n^h, u_n,β^h, v^h) - b ( u_n, u_n,β, v^h) + ν( ∇ u_n,β^h, ∇ v^h) - ν( ∇ u_n,β, ∇ v^h) = (p_n,β^h - p_n,β + p_n,β -p(t_n,β), ∇· v^h) - τ_n(v^h) ( k_n^-1 e_n,α^u, v^h) + ν( ∇ e_n,β^u, ∇ v^h) + b ( u_n^h, u_n,β^h, v^h) - b ( u_n, u_n,β, v^h) = - (p(t_n,β) - p_n,β + p_n,β -q_n^h + q_n^h - p_n,β^h , ∇· v^h) - τ_n(v^h), ( p_n,β^h - q_n^h, ∇· v^h ) = ( k_n^-1 e_n,α^u , v^h) + ν( ∇ e_n,β^u, ∇ v^h) + b ( u_n^h, u_n,β^h, v^h) - b ( u_n, u_n,β, v^h) + (p(t_n,β) - p_n,β, ∇· v^h) + ( p_n,β - q_n^h, ∇· v^h ) + τ_n(v^h), where q^h is the L^2 projection of p_n,β onto Q^h. By (<ref>) and Poincaré inequality b ( u_n^h, u_n,β^h, v^h) - b ( u_n, u_n,β, v^h) = b ( e_n^u, u_n,β, v^h) + b ( u_n, e_n,β^u, v^h) + b ( e_n^u, e_n,β^u, v^h) ≤ C ( e_n^u_1∇ u_n,β +u_n_1∇ e_n,β^u+e_n^u_1∇ e_n,β^u) ∇ v^h. By Cauchy-Schwarz inequality, (<ref>) and (<ref>) in Lemma <ref> ( ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n - u(t_n,β), v^h) ≤ ∑_ℓ=0^2α_ℓ u_n-1+ℓ/k_n - u(t_n,β) _-1∇ v^h ≤ C(θ) ( k_max^3∫_t_n-1^t_n+1 u_ttt_-1^2 dt )^1/2∇ v^h, ν( ∇ (u_n,β - u(t_n,β)), ∇ v^h) ≤ ν∇ (u_n,β - u(t_n,β)) ∇ v^h ≤ C(θ)ν( k_max^3∫_t_n-1^t_n+1∇ u_tt dt )^1/2∇ v^h, (f(t_n,β) - f_n,β,v^h) ≤ f(t_n,β) - f_n,β_-1∇ v^h≤ C(θ) ( k_max^3∫_t_n-1^t_n+1 f_tt_-1 dt )^1/2∇ v^h By (<ref>), (<ref>) in Lemma <ref> and Poincaré inequality b ( u_n, u_n,β, v^h) - b ( u(t_n,β), u(t_n,β), v^h ) = b ( u_n - u(t_n,β), u_n,β, v^h) + b ( u(t_n,β), u_n,β - u(t_n,β), v^h) ≤ C ∇( u_n - u(t_n,β) ) ∇ u_n,β∇ v^h + C ∇ u(t_n,β) ∇( u_n,β - u(t_n,β) ) ∇ v^h ≤ C(θ) ( |u| _∞,1 + |u| _∞,1,β) ( k_max^3∫_t_n-1^t_n+1∇ u_tt dt )^1/2∇ v^h. We combine (<ref>), (<ref>), (<ref>) and (<ref>) τ_n(v^h) ≤ C(θ) k_max^3/2[ (∫_t_n-1^t_n+1 u_ttt_-1^2 dt )^1/2 + ( ∫_t_n-1^t_n+1∇ u_tt dt )^1/2 + ( ∫_t_n-1^t_n+1 f_tt_-1 dt )^1/2 + ( |u| _∞,1 + |u| _∞,1,β) ( ∫_t_n-1^t_n+1∇ u_tt dt )^1/2] ∇ v^h. By (<ref>) and (<ref>) (p(t_n,β) - p_n,β, ∇· v^h) ≤ √(d) p(t_n,β) - p_n,β∇ v^h≤ C(θ) ( k_max^3∫_t_n-1^t_n+1 p_tt dt )^1/2∇ v^h, ( p_n,β - q_n^h, ∇· v^h ) ≤ √(d) p_n,β - q^h∇ v^h≤Ch^s+1/ν p_n,β_s+1∇ v^h. By (<ref>),(<ref>), (<ref>) and (<ref>) p_n,β^h - q_n^h≤ k_n^-1 e_n,α^u+ν∇ e_n,β^u+ C(θ) | u| _∞,1 |e_n^u| _∞,1+ C(θ) |e_n^u| _∞,1^2 + C (θ) k_max^3/2[( ∫_t_n-1^t_n+1 u_ttt_-1^2 dt )^1/2+(∫_t_n-1^t_n+1∇ u_tt^2 dt )^1/2+(∫_t_n-1^t_n+1 f_tt_-1^2 dt )^1/2 + ( |u| _∞,1 + |u| _∞,1,β) ( ∫_t_n-1^t_n+1∇ u_tt^2 dt )^1/2] + C (θ) ( k_max^3∫_t_n-1^t_n+1 p_tt dt )^1/2 + C h^s+1/ν p_n,β_s+1. 
|(p_n,β^h- q_n^h, ∇· v^h)|/∇ v^h ≤ k_n^-1 e_n,α^u + ν∇ e_n,β^u + C(θ) | u| _∞,1 |e_n^u| _∞,1 + C(θ) |e_n^u| _∞,1^2 + C(θ) k_max^3/2[ (∫_t_n-1^t_n+1 u_ttt_-1^2 dt )^1/2 + ( ∫_t_n-1^t_n+1∇ u_tt^2 dt )^1/2 + ( ∫_t_n-1^t_n+1 f_tt_-1^2 dt )^1/2 + ( |u| _∞,1 + |u| _∞,1,β) ( ∫_t_n-1^t_n+1∇ u_tt^2 dt )^1/2] + C(θ) ( k_max^3∫_t_n-1^t_n+1 p_tt dt )^1/2 + C h^s+1/ν p_n,β_s+1. By triangle inequality, (<ref>) and (<ref>) ∑_n=1^N-1k_n p_n,β - p_n,β^h^2 ≤ 2 ∑_n=1^N-1k_n p_n,β^h - q_n^h^2 + 2 ∑_n=1^N-1k_n p_n,β - q_n^h^2 ≤ C h^2s+2∑_n=1^N-1k_n p_n,β_s+1^2 + 2 ∑_n=1^N-1k_nk_n^-1 e_n,α^u^2 + 2 ∑_n=1^N-1νk_n∇ e_n,β^u^2 + C(θ) T | u| _∞,1^2 |e_n^u| _∞,1^2 + C(θ) T |e_n^u| _∞,1^2 + C(θ) k_max^4[ u_ttt_2,-1^2 + ∇ u_tt_2,0^2 + f_tt_2,-1^2 + ( |u| _∞,1^2 + |u| _∞,1,β^2) ∇ u_tt_2,0^2] + C(θ) k_max^4 p_tt_2,0^2 + Ch^2s+2/ν^2∑_n=1^N-1k_n p_n,β_s+1^2. By (<ref>) in Lemma <ref>, (<ref>), (<ref>), (<ref>) and (<ref>) ∑_n=1^N-1k_n p_n,β - p_n,β^h^2 ≤ C(θ) h^2s+2 k_max^4∑_n=1^N-1∫_t_n-1^t_n+1 p_tt_s+1^2 dt + C(θ) h^2s+2∑_n=1^N-1 (k_n+k_n-1) p(t_n,β) _s+1^2 + C(θ) (h^2s+2/ν^2 p_t_2,s+1^2+ h^2r u_t_2,r^2) + νexp[C(θ)/ν( k_max^4 u_tt_2,2^2+ |u|_2,2,β^2+ C(θ) ν T + 1 /h ν F_2^2) ] F_3 + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] F_1 + Ch^2r |u| _∞,r^2+ C(θ) ν h^2r( k_max^4 u_tt_2,r+1^2+ |u| _2,r+1,β^2) + C(θ) T ( | u| _∞,1^2 + 1 ) { Ch^2r |u| _∞,r+1^2+Ch^2s+2/ν^2 |p| _∞,s+1^2 + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u|_2,2,β^2 + C(θ) ν T + 1 /h ν F_2^2) ] F_3} + C(θ) k_max^4[ u_ttt_2,-1^2 + ∇ u_tt_2,0^2 + f_tt_2,-1^2 + ( |u| _∞,1 + |u| _∞,1,β) ∇ u_tt_2,0^2] + C(θ) k_max^4 p_tt_2,0^2 + Ch^2s+2/ν^2∑_n=1^M-1k_n p_n,β_s+1^2. ∑_n=1^N-1k_n p_n,β - p_n,β^h^2 ≤ C(θ) h^2s+2 k_max^4 p_tt_2,s+1^2 + C(θ) h^2s+2 |p| _2,s+1,β^2+ C(θ) (h^2s+2/ν^2 p_t_2,s+1^2+ h^2r u_t_2,r^2) + νexp[C(θ)/ν( k_max^4 u_tt_2,2^2+ |u|_2,2,β^2+ C(θ) ν T + 1 /h ν F_2^2) ] F_3 + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u| _2,2,β^2) ] F_1+ Ch^2r |u| _∞,r^2 + C(θ) ν h^2r( k_max^4 u_tt_2,r+1^2+ |u| _2,r+1,β^2) + C(θ) T ( | u| _∞,1^2 + 1 ) { Ch^2r |u| _∞,r+1^2 + Ch^2s+2/ν^2 |p| _∞,s+1^2 + exp[ C(θ)/ν( k_max^4 u_tt_2,2^2 + |u|_2,2,β^2 + C(θ) ν T + 1 /h ν F_2^2) ] F_3} + C(θ) k_max^4[ u_ttt_2,-1^2 + ∇ u_tt_2,0^2 + f_tt_2,-1^2 + ( |u| _∞,1 + |u| _∞,1,β) ∇ u_tt_2,0^2] + C(θ) k_max^4 p_tt_2,0^2 + C(θ)h^2s+2/ν^2( k_max^4 p_tt_2,s+1^2 + |p| _2,s+1,β^2), which implies (<ref>). For constant step case, ε_n = 0 and the coefficients {β_ℓ^(n)}_ℓ=0^2 are β _2 = 1/4 (2 + θ - θ^2 ), β _1 = 1/2θ^2, β _0 = 1/4 (2 -θ - θ^2 ). p_n,β - p_n,β^h = β_2 (p_n+1 - p_n+1^h) + β_1 (p_n - p_n^h) + β_0 (p_n-1 - p_n-1^h) ≥ β_2 p_n+1 - p_n+1^h - β_1 p_n - p_n^h - β_0 p_n-1 - p_n-1^h = (β_2 - β_1 - β_0) p_n+1 - p_n+1^h + β_1( p_n+1 - p_n+1^h - p_n - p_n^h) + β_0( p_n+1 - p_n+1^h - p_n-1 - p_n-1^h) = θ/2(1-θ) p_n+1 - p_n+1^h + β_1( p_n+1 - p_n+1^h - p_n - p_n^h) + β_0( p_n+1 - p_n+1^h - p_n - p_n^h + p_n - p_n^h - p_n-1 - p_n-1^h) = θ/2(1-θ) p_n+1 - p_n+1^h + (β_1 + β_0) ( p_n+1 - p_n+1^h - p_n - p_n^h) + β_0( p_n - p_n^h - p_n-1 - p_n-1^h) p_n,β - p_n,β^h≥ β_2 p_n+1 - p_n+1^h - β_1 p_n - p_n^h - β_0 p_n-1 - p_n-1^h = θ/2(1-θ) p_n+1 - p_n+1^h + (β_1 + β_0) ( p_n+1 - p_n+1^h - p_n - p_n^h) + β_0( p_n - p_n^h - p_n-1 - p_n-1^h). By triangle inequality and (<ref>) ∑_n=0^N k p_n - p_n^h≤ ∑_n=1^N-1 k p_n,β - p_n,β^h + C(θ) k ( p_1 - p_1^h + p_0 - p_0^h) ≤ ∑_n=1^N-1 k ( C h^s+1 p_n,β_s+1+ p_n,β^h - q_n^h) + C(θ) k ( p_1- p_1^h+ p_0- p_0^h). 
C h^s+1∑_n=1^N-1 k p_n,β_s+1≤ C h^s+1( ∑_n=1^N-1 k p_n,β - p(t_n,β) _s+1 + ∑_n=1^N-1 k p(t_n,β) _s+1) ≤ C h^s+1{ C(θ) ∑_n=1^N-1 k^5/2( ∫_t_n-1^t_n+1 p_tt_s+1^2 dt )^1/2 + [ (N-1)k ( ∑_n=1^N-1 k p(t_n,β) _s+1^2) ]^1/2} ≤ C h^s+1( C(θ) [ (N-1)k ( ∑_n=1^N-1 k^2∫_t_n-1^t_n+1 p_tt_s+1^2 dt ) ]^1/2 + √(T) |p|_2,s+1,β) ≤ C(θ) √(T) h^s+1( k^2 p_tt_2,s+1 + |p|_2,s+1,β) By Cauchy-Schwarz inequality and (<ref>) in Lemma <ref> C h^s+1∑_n=1^N-1 k p_n,β_s+1≤ C(θ) √(T) h^s+1( k^2 p_tt_2,s+1 + |p|_2,s+1,β), ∑_n=1^N-1 k p_n,β^h - q_n^h≤ √(T)( ∑_n=1^N-1 k p_n,β^h - q_n^h^2)^1/2. The bound of term ∑_n=1^N-1 k p_n,β^h - q_n^h^2 is given in (<ref>). Thus we combine (<ref>), (<ref>) and (<ref>) to obtain ∑_n=0^N k p_n - p_n^h ≤ C(θ) √(T) h^s+1( k^2 p_tt_2,s+1+ |p|_2,s+1,β) + C(θ) √(T)(h^s+1/ν p_t_2,s+1+ h^r u_t_2,r) + √(ν T)exp[C(θ)/ν( k^4 u_tt_2,2^2+ |u|_2,2,β^2+ C(θ) ν T + 1 /h ν F_2^2) ] √(F_3) + exp[ C(θ)/ν( k^4 u_tt_2,2^2 + |u| _2,2,β^2) ] √(T F_1)+ C √(T) h^r |u| _∞,r + C(θ) √(ν T ) h^r( k^2 u_tt_2,r+1+ |u| _2,r+1,β) + C(θ) T ( | u| _∞,1 + 1 ) { Ch^r |u| _∞,r+1 + Ch^s+1/ν |p| _∞,s+1 + exp[ C(θ)/ν( k^4 u_tt_2,2^2 + |u|_2,2,β^2 + C(θ) ν T + 1 /h ν F_2^2) ] √(F_3)} + C(θ) √(T) k^2[ u_ttt_2,-1 + ∇ u_tt_2,0 + f_tt_2,-1 + √( |u| _∞,1+ |u| _∞,1,β)∇ u_tt_2,0] + C(θ) √(T) k^2 p_tt_2,0 + C(θ)√(T) h^s+1/ν( k^2 p_tt_2,s+1 + |p| _2,s+1,β) + C(θ) k ( p_1- p_1^h+ p_0- p_0^h), which implies (<ref>). § IMPLEMENTATION OF ADAPTIVE DLN ALGORITHM We present two ways of time adaptivity for the whole family of DLN methods. The first way is to use the local truncation error (LTE) criterion: we apply the revised AB2 method (herein AB2-like method) to estimate the error of the DLN scheme for NSE and adjust the time step according to the ratio of the required tolerance and the estimator. The second way is to adapt the time step to control the numerical dissipation. §.§ Local Truncation Error Criterion Given four previous solutions u_n^h,u_n-1^h,u_n-2^h,u_n-3^h, the AB2-like solution for NSE at time t_n+1 is u_n+1^h, AB2 = [ 1 + α_2 (t_n+1 - t_n) ( t_n+1 + t_n - 2 t_n-1,β )/2 ( t_n,β - t_n-1,β) k_n-1] u_n^h + (t_n+1 - t_n) /2 ( t_n,β - t_n-1,β)[ α_1 ( t_n+1 + t_n - 2 t_n-1,β )/k_n-1 - α_2 ( t_n+1 + t_n - 2t_n,β ) /k_n-2] u_n-1^h + (t_n+1 - t_n)/2 ( t_n,β - t_n-1,β)[ α_0 ( t_n+1 + t_n - 2 t_n-1,β )/k_n-1 - α_1 ( t_n+1 + t_n - 2t_n,β ) /k_n-2] u_n-2^h - α_0 (t_n+1 - t_n ) ( t_n+1 + t_n - 2t_n,β ) /2 ( t_n,β - t_n-1,β) k_n-2 u_n-3^h, and the estimators are T_n+1 = |G^(n)|/ | G^(n) + R^(n) | u_n+1^h, DLN-u_n+1^h, AB2, Absolute estimator T_n+1 = |G^(n)|/ | G^(n) + R^(n) | u_n+1^h, DLN-u_n+1^h, AB2/ u_n+1^h, DLN, Relative estimator where G^(n) =(1/2-α _0/2α _21-ε _n/1+ε _n)(β _2^(n)-β _0^(n) 1-ε _n/1+ε _n)^2+α _0/6α _2(1-ε _n/1+ε _n)^3-1/6, R^(n) = 1/12[ 2 + 3(1 - ε_n)/1 + ε_n( 1 - β_2^(n-2)1 - ε_n-1/1 + ε_n-1 + β_0^(n-2)1 - ε_n-2/1 + ε_n-21 - ε_n-1/1 + ε_n-1) × ×( 1 - β_2^(n-1)1 - ε_n/1 + ε_n + β_0^(n-1)1 - ε_n-1/1 + ε_n-11 - ε_n/1 + ε_n) +3(1 -ε_n)/1 +ε_n( 2/1 +ε_n-β_2^(n-2)1 -ε_n-1/1 +ε_n-11 -ε_n/1 +ε_n+β_0^(n-2)1 -ε_n-2/1 +ε_n-21 -ε_n-1/1 +ε_n-11 -ε_n/1 +ε_n) × ×( - β_2^(n-1) + β_0^(n-1)1 - ε_n-1/1 + ε_n-1) ]. R^(n) = 1/12[ 2 + 3/τ_n( 1 - β_2^(n-2)1/τ_n-1 + β_0^(n-2)1/τ_n-21/τ_n-1) × ×( 1 - β_2^(n-1)1/τ_n + β_0^(n-1)1/τ_n-11/τ_n) + 3/τ_n( 1 + 1/τ_n - β_2^(n-2)1/τ_n-11/τ_n + β_0^(n-2)1/τ_n-21/τ_n-11/τ_n) × ×( - β_2^(n-1) + β_0^(n-1)1/τ_n-1) ]. We refer to <cit.> for the derivation of the AB2-like method in (<ref>) and the estimator in (<ref>) and (<ref>). 
We use the step controller proposed by Hairer and Wanner in <cit.> to adjust the next time step k_n+1=k_n·min{1.5,max{0.2,κ(Tol/T_n+1)^1/3}}, where Tol is the required tolerance and the safety factor κ∈ (0,1] is selected to minimize the number of step rejections. If T_n+1 >Tol, then the DLN solution is rejected and the current step k_n is adjusted by (<ref>) for recomputing. We summarize the above adaptive DLN algorithm in Algorithm <ref> §.§ Numerical Dissipation Criterion The algorithm proposed by Capuano, Sanderse, De Angelis and Coppola <cit.> calibrates the step size to ensure the ratio of numerical dissipation and viscosity under the required value and its effect on the fully-implicit DLN scheme has been tested in <cit.>. Given the tolerance Tol, the maximum time step k_max, the minimum time step k_min, we compute the DLN solution and the ratio of numerical dissipation and viscosity χ_n+1 = ℰ_n+1^ ND / ℰ_n+1^ VD. If χ_n+1≤Tol, we accept the current solutions and double the time step. Otherwise, we halve the time step for recomputing. We summarize the algorithm in Algorithm <ref>. § NUMERICAL TESTS We apply the semi-implicit DLN algorithm in (<ref>) with θ = 2/3, 2/√(5), 1 for all numerical tests. θ = 2/3 is suggested in <cit.> to balance the stability and local truncation error. θ = 2/√(5) is recommended in <cit.> for stability at infinity (a property similar to L-stability). θ = 1 reduces to the midpoint rule. We use software FreeFem++ and Taylor-Hood (P2-P1) finite element space for programming. §.§ Convergence Test We use the Taylor-Green benchmark problem <cit.> to verify that the semi-implicit DLN algorithm is second-order accurate. In addition, its efficiency over the fully-implicit algorithm can be observed. The exact solutions in the domain Ω = [0,1]×[0,1] are u_1(x,y,t) = -cos(ωπ x) sin(ωπ y) exp (- 2 ω^2 π^2 t/τ ), u_2(x,y,t) = sin(ωπ x) cos(ωπ y) exp (- 2 ω^2 π^2 t/τ ), p(x,y,t) = - 1/4( cos(2 ωπ x) + cos(2 ωπ y) ) exp (- 4 ω^2 π^2 t/τ ). We set the parameters ω = 1, τ = 1/ν = 100. The initial value, boundary condition and source function f are determined by the exact solutions in (<ref>). We require that the constant time step k and mesh diameter h are the same to satisfy the time-diameter condition in (<ref>). We simulate the problem over the time interval [0,1]. The convergence rate R is calculated by R = log(error(k,h) / error(k/2,h/2))/log(2). The results of the semi-implicit DLN algorithm (with constant time step) are given in <ref>. We see that the semi-implicit DLN has third-order convergence in velocity and second-order convergence in pressure for all three θ values. Hence the semi-implicit DLN scheme has much better performance in the Taylor-Green problem than the theories in Subsection <ref> suggest. Then we apply the fully-implicit DLN scheme to the same problem and use fixed point iteration to solve the non-linear system at each time step. The error and convergence rate are given in <ref>. From the above tables, we observe that the semi-implicit DLN algorithm outperforms the fully-implicit algorithm for all three θ values in this test problem: The two schemes have almost the same error magnitude but the fully-implicit scheme takes twice the time to finish the simulation since the fully-implicit scheme takes two iterations on average at each time step. §.§ Adaptive DLN Algorithms for Revised Taylor-Green Problem We apply adaptive semi-implicit DLN algorithms in Section <ref> to the revised Taylor-Green problem in the domain Ω = [0,1] × [0,1]. 
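For reference, the LTE-based accept/reject step control summarized in Algorithm <ref> can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' FreeFem++ implementation: dln_step and lte_estimator are hypothetical placeholders for the semi-implicit DLN solve and the AB2-like estimator T_n+1, and the clamping to [k_min, k_max] follows the bounds used in the experiments below.

def propose_step(k_n, T_est, Tol, kappa=0.95, k_min=5e-4, k_max=5e-2):
    # Hairer-Wanner controller: k_{n+1} = k_n * min(1.5, max(0.2, kappa*(Tol/T)^(1/3))),
    # additionally clamped to the [k_min, k_max] range used in the experiments.
    factor = min(1.5, max(0.2, kappa * (Tol / T_est) ** (1.0 / 3.0)))
    return min(k_max, max(k_min, k_n * factor))

def adaptive_dln_lte(u_hist, k_hist, t, T_final, Tol, dln_step, lte_estimator):
    # Accept/reject loop of the LTE-based adaptivity (schematic).
    while t < T_final:
        u_new = dln_step(u_hist, k_hist)              # candidate semi-implicit DLN solution
        T_est = lte_estimator(u_new, u_hist, k_hist)  # T_{n+1} from the DLN and AB2-like solutions
        if T_est > Tol:                               # reject: shrink the current step and recompute
            k_hist[-1] = propose_step(k_hist[-1], T_est, Tol)
            continue
        t += k_hist[-1]                               # accept: advance in time
        u_hist.append(u_new)
        k_hist.append(propose_step(k_hist[-1], T_est, Tol))  # next step size k_{n+1}
    return u_hist, k_hist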
The exact solutions are u_1(x,y,t) = -cos(ωπ x) sin(ωπ y) exp ( 2 ω^2 π^2 t/τ ), u_2(x,y,t) = sin(ωπ x) cos(ωπ y) exp ( 2 ω^2 π^2 t/τ ), p(x,y,t) = - 1/4( cos(2 ωπ x) + cos(2 ωπ y) ) exp ( 4 ω^2 π^2 t/τ ). We set ω = 1 and τ = 1/ν = 2500. The exact solutions in (<ref>) make the problem more difficult since the Reynolds number is much larger and the energy has an increasing pattern. We use both Algorithm <ref> and Algorithm <ref> to solve the problem over the time interval [0,60]. For Algorithm <ref>, we use the relative estimator in (<ref>) and set tolerance Tol = 1.e-7 and the safety factor κ = 0.95. For Algorithm <ref>, we set Tol = 1.e-14 for χ. The value of Tol is chosen to balance accuracy and efficiency. For both algorithms, we set the minimum time step k_min = 0.0005, the maximum time step k_max = 0.05, the initial time step k_0 = 0.0005 and the mesh diameter h = 1/180. The initial value, boundary value and body force are decided by the exact solutions. We measure the performance of two algorithms by evaluating energy, error of energy, numerical dissipation ℰ_n+1^ ND and viscosity ℰ_n+1^ VD. Since ℰ_n+1^ ND vanishes for the DLN method with θ = 1, we test the two adaptive algorithms with θ = 2/3 and θ = 2/√(5). <ref> shows the performance of two adaptive DLN algorithms and <ref> tells us the number of steps. Algorithm <ref> surpasses Algorithm <ref> in terms of accuracy and efficiency: <ref> shows that the error magnitude of energy is much smaller for Algorithm <ref> while Algorithm <ref> takes more number of time steps for both θ values. Two algorithms have similar patterns for numerical dissipation and viscosity in <ref>. For both algorithms, T_n+1 and χ_n+1 are kept below the required tolerance after the first few steps in <ref> and time steps oscillate between k_max and k_min in <ref>. §.§ 2D Offset Circles Problem We use the 2D offset circles problem proposed by Jiang and Layton <cit.> to verify the stability of the DLN scheme under any arbitrary sequence of time steps and the efficiency of the adaptive algorithms in Section <ref>. The domain Ω⊂ℝ^2 is Ω = { (x,y): x^2 + y^2 ≤ 1 and (x - 0.5)^2 + y^2 ≥ 0.01 }. The flow in the domain Ω is driven by the rotational body force f(x,y,t) = [ f_1(x,y); f_2(x,y) ] = [ -4y(1 - x^2 - y^2); 4x(1 - x^2 - y^2) ]. with the no-slip boundary condition on both circles. We set the Reynolds number Re = 1/ν = 200 and simulate the problem over time interval [0,60]. We use the relative estimator of LTE in (<ref>) for Algorithm <ref> and set the tolerance Tol = 0.001 and safety factor κ = 0.95. For Algorithm <ref>, we set Tol = 0.01 for ratio χ_n+1. For both adaptive algorithms, the initial time step k_0 = 0.005, the maximum time step k_max = 0.05 and the minimum time step k_min = 0.0005. The domain triangulation is generated by 80 nodes on the boundary of the inner circle and 320 nodes on the boundary of the outer circle. Since the exact solutions are unknown, we use the constant step DLN algorithm (ε_n = 0) with a small time step (k=k_min) and refined mesh (100 nodes on the boundary of inner circle and 400 nodes on the boundary of outer circle) for reference. <ref> show two domain triangulations. The number of time steps is presented in <ref>. <ref> shows that the energy of all algorithms is increasing at the start and then come to the steady level 23 at time t = 8. We deduce that Algorithm <ref>(θ = 2/√(5)) has worse performance for this problem because the energy level of this algorithm is low compared to that of other adaptive algorithms. 
In addition, the number of time steps is the smallest for Algorithm <ref> (θ = 2/√(5)), while the ratio χ exceeds the required tolerance value (1.e-2) many times during the simulation. Then we compare <ref> and observe that the energy of Algorithm <ref>(θ = 2/√(5)) is closer to the energy of the reference algorithms with fewer time steps. From <ref>, we can see that the numerical dissipation of Algorithm <ref> is at a level as low as that of the reference algorithms, while that of Algorithm <ref> is much larger. All the algorithms have similar viscosity patterns in <ref>. From <ref>, T is always below the required tolerance (1.e-3), so the time steps of Algorithm <ref> never reach k_min. However, the ratio χ exceeds the required tolerance 1.e-2 frequently, and k_n = k_min occurs very often. The primitive time step controller (doubling and halving the time step) in Algorithm <ref> reduces the number of time steps but may cause inaccuracy.
§ CONCLUSIONS
We propose the semi-implicit DLN scheme for the NSE and avoid non-linear solvers at each time step. G-stability of the DLN method results in the long-term, unconditional stability of the numerical solutions. In the error analysis, we prove that both the velocity and pressure of the variable time-stepping, semi-implicit scheme converge at second order under very moderate time-step conditions. Two adaptive algorithms, based on local truncation error and numerical dissipation criteria, are presented to improve time efficiency in practice. The advantage of the semi-implicit DLN scheme is observed in the numerical tests in Subsection <ref>: the semi-implicit scheme obtains the same accuracy as the fully-implicit scheme and reduces the simulation time by half. Subsection <ref> shows that the two adaptive DLN algorithms obtain sufficient accuracy in energy and negligible numerical dissipation, even though the problem, with its large Reynolds number, has an increasing energy pattern. We verify in the 2D offset circles problem that the semi-implicit DLN scheme is unconditionally, long-time stable in energy with any arbitrary sequence of time steps, and that the adaptive DLN algorithm is much more efficient than the constant time-stepping DLN scheme (taking fewer time steps while attaining a similar magnitude of energy, numerical dissipation, and viscosity).
§ ACKNOWLEDGEMENT
The author thanks Professor Catalin Trenchea (Department of Mathematics, University of Pittsburgh) for very helpful suggestions and discussions.
http://arxiv.org/abs/2306.02903v1
20230605141028
Instruct-Video2Avatar: Video-to-Avatar Generation with Instructions
[ "Shaoxu Li" ]
cs.CV
[ "cs.CV" ]
[Figure: Given a short monocular RGB video and text instructions, our method optimizes a deformable neural radiance field to synthesize edited photo-realistic animatable 3D neural head avatars. The resulting head avatar can be viewed under novel views and animated with novel expressions.]
We propose a method for synthesizing edited photo-realistic digital avatars with text instructions. Given a short monocular RGB video and text instructions, our method uses an image-conditioned diffusion model to edit one head image and uses a video stylization method to accomplish the editing of the other head images. Through iterative training and dataset updates (three times or more), our method synthesizes edited photo-realistic animatable 3D neural head avatars with a deformable neural radiance field head synthesis method. In quantitative and qualitative studies on various subjects, our method outperforms state-of-the-art methods.
§ INTRODUCTION
With the development of neural 3D head reconstruction techniques, the synthesis and driving of head avatars have become readily applicable. Given a monocular RGB video, typical solutions reconstruct the head poses and camera parameters, and use the head images to optimize a neural radiance field (NeRF)<cit.>. Abundant research on photo-realistic animatable 3D neural head avatars<cit.> makes it easy to apply one's photorealistic avatar in VR applications, and there is a huge demand for personalized stylization. However, the techniques for creating one's stylized avatar remain underexplored. Creating a stylized 3D avatar typically requires a modeling artist working with professional software. Some user-friendly methods have emerged for automatic creation<cit.> or interactive creation<cit.>. Compared with traditional explicit models, implicit 3D representations can achieve higher rendering quality. However, unlike mesh surfaces, implicit representations lack the explicit operating nodes that make manipulation easy. Although some methods try to enable manipulation of NeRFs<cit.>, editing NeRFs through user-friendly interaction remains a tough task. Image creation has become popular with the development of diffusion models<cit.>, and there is an increased demand for creating high-quality 3D scenes easily. To this end, we propose Instruct-Video2Avatar, a method for synthesizing edited photo-realistic digital avatars that requires as input only a short monocular RGB video and text instructions. For a head video in the wild, our method can synthesize stylized avatars that integrate the subject's identity with the desired edit instructions, such as "Make him more happy" or "Turn him into the Tolkien Elf", as shown in Figure <ref>. Some text-to-3D methods optimize NeRFs directly with pre-trained 2D diffusion models<cit.>, but the quality and stability of the resulting scenes are far from commercial use. Other methods optimize existing NeRFs for editing<cit.>, which can achieve the expected stylization or transformation. Instruct-NeRF2NeRF<cit.> is a NeRF-to-NeRF editing method driven by text instructions. It uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene. However, we find this editing process time-consuming and inefficient.
Inspired by the dataset update of Instruct-NeRF2NeRF, our method gradually updates the radiance field with edited head images. We only use InstructPix2Pix one time and use the video stylization method to edit other images, which is more convenient and faster. In summary, our contributions are listed as follows: * We propose an approach for creating edited photo-realistic animatable 3D neural head avatars, enabling convenient avatar creation with video and text instructions. * We propose to update NeRF with video stylization methods, which can be extended to the editing of arbitrarily static or dynamic NeRFs, even for arbitrary video editing. * We demonstrate remarkable results of our method through extensive experiments. § RELATED WORK Image Editing. Image editing includes style transfer between artistic styles and translation between image domains. Starting from Gatys et al.<cit.>, image style transfer has been deeply studied. Plenty of extended works handled the task of high-quality style transfer<cit.>, fast style transfer<cit.>, geometry aware style transfer<cit.>, and applications<cit.>. With generative models, many methods manipulate the image by encoding the image into latent space, such as StyleGAN<cit.>. With pre-trained CLIP<cit.> model, text-to-image models can guide the editing of images with instructions. Recently, diffusion models<cit.> on image editing have been popular. Latent diffusion models<cit.> offer a convenient way to edit images in latent space. With multiple conditioned diffusion models<cit.>, methods that create images from multiple inputs emerge. For example, InstructPix2Pix<cit.> can edit an image with an image input and a text instruction. Video Editing. Similar to image stylization, video stylization aims at reproducing the content of a given video using style characteristics extracted from style images. Compared with image stylization, maintaining consistency between different frames is the core task. To deal with this task, some methods propose novel loss functions and networks modules<cit.> to style a video to the style of an image. Others<cit.> style the video with an exampler stylized video frame image. For some specific domains, motion retargeting is a hot topic, such as talking head video synthesis. Plentiful works aim to synthesize realistic or artistic talking videos through 2D or 3D methods<cit.>. Compared with text-to-image, text-to-video is a little tough due to the shortage of high-quality video datasets. Combining pre-trained text-to-image models and attention modules, some methods try to synthesize videos of realistic scenes<cit.>. Video diffusion<cit.> uses a space-time factorized U-Net with joint image and video data training. Some following works improve the design to generate high-definition videos<cit.>. But the quality and consistency of synthesis are far from commercial use. Deformable Neural Radiance Fields. NeRF(neural radiance field)<cit.> can synthesize high-quality novel view images with differentiable volumetric rendering. As an implicit method, the manipulation of the radiance field is not that convenient. A delicate way to edit the radiance field is to warp the NeRF in a canonical space to a deformed space<cit.>. With deformation, NeRF can represent dynamic scenes with temporal and spatial variation. For human facial modeling, a series of approaches have been proposed<cit.>, which leverage 3DMM facial expression code or audio features. 
NeRFace<cit.> models the implicit facial model with a low-dimensional 3DMM morphable model which provides explicit control over pose and expressions. In the optimization, the neural radiance field adds the embedded 3DMM expression code to the regular NeRF input. IMAvatar<cit.> represents the expression and pose related deformations via learned blendshapes and skinning fields. It employs ray marching and iterative root-finding to locate the canonical surface intersection for each pixel with a novel analytical gradient formulation. NHA<cit.> proposes a hybrid representation consisting of a morphable model for the coarse shape and expressions of the face, and two feed-forward networks, predicting vertex offsets of the underlying mesh as well as a view- and expression-dependent texture. And NHA provides a disentangled shape and appearance model of the complete human head (including hair) that is compatible with the standard graphics pipeline. INSTA<cit.> models a dynamic neural radiance field based on neural graphics primitives embedded around a parametric face model. INSTA employs a nearest triangle search in deformed space to compute the deformation gradient for mapping between canonical and deformed space. Editing of Neural Radiance Fields. For texture editing of NeRF, many methods emerge leveraging the 2D stylization methods<cit.>. Artistic or realistic texture style transfer can be accomplished with these methods, maintaining the 3D consistency of the NeRF scenes. For geometry-aware NeRF editing, manipulation and generation are the two main tasks. Manipulation mainly focuses on the interactive editing of a NeRF scene, leveraging the property of deformable neural radiance fields. These methods operate the scene leveraging on extracting explicit meshes<cit.> or manipulate the space by some space proxy<cit.>. The scene generation focuses on creating new scenes with or without an exampler. EditNeRF<cit.> edits a NeRF scene by encoding the scene into latent space. ClipNeRF<cit.> introduces a disentangled conditional NeRF architecture that allows individual control over both shape and appearance. By leveraging the recent Contrastive Language-Image Pre-Training (CLIP) model, ClipNeRF allows users to manipulate NeRF using either a short text prompt or an exemplar image. RODIN<cit.> proposes the roll-out diffusion network, which represents a neural radiance field as multiple 2D feature maps and rolls out these maps into a single 2D feature plane within which they perform 3D-aware diffusion. DATID-3D<cit.> proposes a novel pipeline of text-guided domain adaptation tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. NeRF-Art<cit.> introduces a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, they adopt a weight regularization method to effectively suppress the cloudy artifacts and the geometry noises when transforming the density field for geometry stylization. Instruct-NeRF2NeRF<cit.> gradually updates a reconstructed NeRF scene by iteratively updating the dataset images while training the NeRF. Given a NeRF of a scene, Instruct-NeRF2NeRF uses an image-conditioned diffusion model (InstructPix2Pix<cit.>) to iteratively edit the input images while optimizing the underlying scene. 
Inspired by Instruct-NeRF2NeRF, we propose Instruct-Video2Avatar, which also updates the training images. However, instead of editing all images with a diffusion model, we edit only one exemplar image with InstructPix2Pix and edit the other images with a video stylization method, which is faster and yields higher quality.
§ METHOD
Our method takes as input a talking head video together with a natural-language editing instruction for avatar generation. As output, our method produces an edited version of a photo-realistic animatable 3D neural head avatar. Our method first segments the head and then iteratively updates the images to accomplish the task. As shown in Figure <ref>, our method edits one exemplar frame image with a natural-language editing instruction, e.g., "Turn him into the Tolkien Elf". For the remaining head images, the editing is accomplished by a video stylization method guided by the edited image. With all edited images and the parameters extracted from the original video images, our method produces a photo-realistic edited animatable 3D neural head avatar. Our method builds on three components: diffusion models for image editing, video stylization from one exemplar frame, and photo-realistic 3D neural head avatar synthesis from a video; specifically, InstructPix2Pix<cit.>, EbSynth<cit.>, and INSTA<cit.>.
§.§ Background
§.§.§ Instant Volumetric Head Avatars
Neural radiance field<cit.> based head synthesis has attracted much attention for its high quality. INSTA<cit.> is a neural radiance field based method for reconstructing photo-realistic digital avatars. In practice, INSTA is trained on a single monocular RGB portrait video and can reconstruct a digital avatar that extrapolates to unseen expressions and poses. For a given video, the inputs for avatar optimization include the head images I_i, the intrinsic camera parameters K∈R^3× 3, tracked FLAME<cit.> meshes {M_i}, facial expression coefficients {E_i}, and head poses {P_i}. INSTA constructs a neural radiance field in the canonical space to represent the head. The deformation of the radiance field is trained to correspond with facial expressions. Differentiable volumetric rendering is used to optimize the radiance field. With the optimized deformable radiance field, the resulting head avatar can be viewed under novel views and expressions.
§.§.§ InstructPix2Pix
Denoising diffusion models have become popular recently. As generative models, they gradually form an image from a noisy sample. InstructPix2Pix<cit.> is a diffusion-based method for editing images with text instructions. To edit images with instructions, InstructPix2Pix trains a conditional diffusion model on a paired training dataset of unedited images, edited images, and text instructions. The model is based on Stable Diffusion, the popular text-to-image latent diffusion model. Conditioned on an image c_I and a text editing instruction c_T, the edited image z_0 can be recovered from a noisy image z_t using the denoising U-Net ϵ_θ: ϵ̂ = ϵ_θ (z_t;t,c_I,c_T) The denoising process predicts the noise added to the noisy image z_t, which can be used to estimate the edited image ẑ_0 over timesteps t∈ T. The variance of the edited images increases with the timestep t. In practice, InstructPix2Pix is a latent diffusion based method, which performs the diffusion process in latent space. The overall process of editing an image is encoder-diffusion-decoder.
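As an illustration only, a single-frame instruction-based edit of this kind can be run with the publicly available diffusers implementation of InstructPix2Pix; the checkpoint name, file names, and parameter values below are our assumptions based on that public release, not the authors' code. The two guidance weights exposed by the pipeline correspond to the image and text conditioning scales discussed next.

import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load the public InstructPix2Pix checkpoint (an assumption; the paper does not state the exact weights used).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Hypothetical exemplar head frame segmented from the input video.
frame = Image.open("exemplar_head_frame.png").convert("RGB")

edited = pipe(
    prompt="Turn him into the Tolkien Elf",  # text instruction c_T
    image=frame,                             # image condition c_I
    num_inference_steps=100,
    image_guidance_scale=1.5,                # image conditioning weight (s_I)
    guidance_scale=3.5,                      # text conditioning weight (s_T)
).images[0]
edited.save("exemplar_head_frame_edited.png")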
For image editing with two conditions, InstructPix2Pix introduces two guidance scales s_I and s_T to trade off the correspondence between the input image and the text instruction. For details, we direct the reader to the original paper<cit.>.
§.§.§ EbSynth
Video style transfer aims to perform style transfer on videos, maintaining both the stylization quality and the consistency between frames. Exemplar-based video style transfer methods alter the whole video using characteristics from an edited exemplar frame. EbSynth<cit.> is a synthesizer that can transform a full video based on a single edited frame. EbSynth uses a novel implementation of non-parametric texture synthesis algorithms. It uses a soft segmentation guide and a positional guide, which is essentially a dense warp field that maps every target pixel to its corresponding position in the source. For the task of video stylization, EbSynth can produce stylized frames from a single stylized frame. Edited images produced by EbSynth can achieve high quality. However, the pixel mapping is executed on images and ignores the 3D geometry, so obvious geometric inconsistency can be observed in consecutive video frames.
§.§ Instruct-Video2Avatar
Given a short monocular RGB video, neural radiance field based photo-realistic digital avatars can be reconstructed. Our method works by updating the training head images, which maintains consistency between frames. We use a diffusion model to edit one exemplar head image, and a video stylization method to edit the other images. In this section, we first illustrate the process of editing rendered images, then describe the iterative dataset update strategy, and finally discuss the implementation details of our method. All head images mentioned below are segmented from the original videos and do not include the background or body. Editing Rendered Images We use InstructPix2Pix<cit.> to achieve the editing target. As a conditional diffusion model, it takes an unedited image c_I, a text instruction c_T, and a noisy latent z_t as input. The text instruction c_T is chosen by the user, and the noise z_t is random. For a head image sequence, we select one exemplar frame image I_s as the input of InstructPix2Pix. I_s^e denotes the edited image. We then use EbSynth to edit the other images I_i^e, i∈ 1 ⋯ N, i≠ s, with I_s^e as a reference. One-time image editing with InstructPix2Pix costs about 5 minutes on an RTX 3090 GPU. For editing a NeRF scene, executing the editing with diffusion models is time-consuming. For video-to-avatar methods, a video consists of thousands of images, and it is hard to edit the images independently as in Instruct-NeRF2NeRF<cit.>. In addition, the details of the edited images vary with the input image under the same text prompt, even when using one seed. NeRF assumes that all the images come from one scene. Inconsistent training images make convergence hard, leading to more noise and more training iterations. Considering this, we propose to edit only one exemplar image, obtaining I_s^e, and to use EbSynth to obtain the other edited images I_i^e, i∈ 1 ⋯ N, i≠ s. Iterative Dataset Update Even though we use EbSynth to ensure consistency between different frames, unnatural distortion from EbSynth leads to noise in the neural radiance field. For high-quality synthesis, we propose an iterative dataset update. We edit the exemplar image only once and iterate on the other images. In the first training cycle, the editing is carried out on the head images from the original video.
In later training cycles, the editing is carried out on the rendered images from the optimized head avatar. The neural radiance field ensures the 3D consistency of the scene, while EbSynth helps with the rendering quality of the avatar. The overall quality increases with each iteration. After several iterations, we obtain a photo-realistic edited animatable 3D neural head avatar.
§.§ Implementation Details
For InstructPix2Pix, the guidance weights s_I and s_T, corresponding to the image and text conditioning signals, can be used to adjust the edit strength. In practice, the edited frame should preserve the expression of the unedited frame. For exemplar frame selection, we suggest choosing a head image with the mouth open; editing such a frame guides the editing of the other frames better. We suggest using s_I=1.5 and s_T=3.5, with 100 denoising steps. The whole process of avatar creation consists of InstructPix2Pix for exemplar editing, EbSynth for image editing, and INSTA for avatar optimization. One training takes less than 20 minutes, excluding the preprocessing of the video for facial parameters.
§ RESULTS
We conduct experiments on several real scenes preprocessed by INSTA. The size of each dataset (one video) ranges from 2000 to 3000 images. We execute EbSynth on Windows with the official executable file. Except for EbSynth, we run all other experiments on Ubuntu with an RTX 3090 GPU. First, we evaluate our method through qualitative evaluations. To evaluate the design of our approach, we compare our method with several baselines. We also conduct a perceptual study to illustrate the superiority of our method. For better evaluation, we recommend the reader evaluate the performance through the supplemental videos.
§.§ Qualitative Evaluation
Our qualitative results are shown in Figure <ref> and Figure <ref>. The edited images vary across different image2image methods. Our method mainly focuses on the consistency of the rendered images of edited avatars. To eliminate the influence of the image2image method, we use InstructPix2Pix as the basis of our experiments. For each edit, we compare our method with InstructPix2Pix+One Seed, InstructPix2Pix+DaGAN, and InstructPix2Pix+EbSynth in Figure <ref>. The first row shows the head images extracted from the videos, and the subsequent rows show the edited head images synthesized by the different methods. The edited results of InstructPix2Pix vary with the input image, the random seed, and the guidance weights. For the results of InstructPix2Pix+One Seed, we fix the random seed and guidance weights. Even so, the edited results show apparent inconsistency due to the varying poses and expressions of the head. With DaGAN, the consistency of the edited images increases considerably, but the image quality is inferior and there are significant inconsistencies between the original and edited frames. For example, "The Hulk" can hardly open his mouth, and the eyes of the "17 years old man" open unexpectedly. With EbSynth, the edited images are the sharpest and are consistent with the original images, though some noise remains in the edited results, e.g., in the mouth of the "anime man". Our method produces images of good quality. Some shadow artifacts caused by the radiance field exist around the avatar head. The mouth expressions vary among DaGAN, EbSynth, and our method. It is hard to compare the 3D consistency merely from images. We recommend the reader evaluate the performance through the supplemental videos.
§.§ Ablation Study
We validate the architecture of our method through an ablation study. The qualitative results are shown in Figure <ref>. Different from the results in Figure <ref>, the first column in Figure <ref> shows the images rendered with the training images from the original video. Subsequent columns show the corresponding images of the different variants. These variants all synthesize 3D neural head avatars, which can be used in virtual reality. The zoomed-in results in the red boxes show the details of image quality, where our method is superior. The consistency is hard to evaluate merely from images. One time Dataset Update with One Seed (One time Dataset Update with Modified Instruct-NeRF2NeRF). For editing the avatar, we perform a one time Dataset Update with one seed, in which all training images are edited by InstructPix2Pix with one seed, and the avatar is trained to fit those edited images. In this way, the optimization process can be regarded as a modified Instruct-NeRF2NeRF. Different from the original Instruct-NeRF2NeRF, which uses random noise from a constant range, we fix the noise to decrease the inconsistency. The consistency of the edited images determines the quality of the avatar. As a diffusion-based method, the editing results usually exhibit a lot of inconsistency caused by input inconsistency and the random seed. For this baseline, we use one seed to enhance the consistency of the per-frame editing results, but the inconsistency caused by discrepancies among the training images remains. Inconsistent training images can lead to blurry and artifact-filled results, as shown in Figure <ref>. One time Dataset Update with EbSynth. For editing the avatar, we perform a one time Dataset Update with EbSynth, in which one exemplar training image is edited by InstructPix2Pix and the editing is propagated to the other training images by EbSynth. The avatar is then trained to fit those edited images. Compared with the one time Dataset Update with one seed, EbSynth significantly enhances the consistency of the per-frame editing results. Compared with our pipeline, which updates the dataset three times, the quality of the rendered images is slightly worse, as shown in Figure <ref>.
§.§ Perceptual Study
The task of avatar editing is subjective, and there is no ground truth. For some editing results, the face shape is far from that of a normal human. Evaluating the quality of these methods is therefore a challenging task. We recommend the reader evaluate the performance through the supplemental videos. We conducted a perceptual study to evaluate our method. For frame consistency, some methods produce sharp and photo-realistic images that are hard to compare. We present videos generated by our method and the other methods to the raters. We ask the raters to rate each video on "High Definition" and "Temporal Consistency" from 0 (worst) to 5 (best). We recruit 20 participants to score each question over 10 edited videos. Table <ref> shows the results. For "High Definition", InstructPix2Pix+EbSynth gets the highest score and ours is the second-highest. For "Temporal Consistency", ours gets the highest score and One time Dataset Update with EbSynth is the second-highest.
§.§ Limitations
Our method inherits some limitations of InstructPix2Pix. For some expected editing results, large spatial manipulations may lead to expression inconsistency. INSTA optimizes a head avatar for humans and assumes that the surface is close to the face. For editing instructions that add objects, our method may produce poor results.
We demonstrate two failure cases in Figure <ref>: (1) InstructPix2Pix succeeds at editing for the instruction but fails to maintain the expression, and (2) our method succeeds at editing the texture, but the glasses are not independent of the deformable face.
§ CONCLUSION
In this paper, we propose Instruct-Video2Avatar, a novel approach that synthesizes edited photo-realistic digital avatars. Our method takes a short monocular RGB video and text instructions as input, uses an image-conditioned diffusion model (InstructPix2Pix) to edit one exemplar head image, and then uses a video stylization method (EbSynth) to accomplish the editing of the other head images. By iterative training and dataset updates of the avatar, our method synthesizes edited photo-realistic animatable 3D neural head avatars. In quantitative and qualitative studies on various subjects, our method outperforms state-of-the-art methods. Our method can be extended to the editing of arbitrary static or dynamic NeRFs, and even to arbitrary video editing.
http://arxiv.org/abs/2306.04953v1
20230608060113
Scenic Routes over Points in 2D Space
[ "Loay Rashid", "Lini Thomas", "Kamalakar Karlapalem" ]
cs.CG
[ "cs.CG" ]
Scenic Routes over Points in 2D Space Loay Rashid DSAC IIIT-Hyderabad Hyderabad, India [email protected] Lini Thomas DSAC IIIT-Hyderabad Hyderabad, India [email protected] Kamalakar Karlapalem DSAC IIIT-Hyderabad Hyderabad, India [email protected] July 31, 2023
Consider a 2D coordinate space with a set of red and a set of blue points. We define a scenic point as a point that is equidistant to a red point and a blue point. The set of contiguous scenic points forms a scenic path. The perpendicular bisector of the line joining a red point and a blue point forms a scenic path between the red point and the blue point. The scenic perpendicular bisectors between different red-blue pairs can intersect, forming a scenic graph of intersection points and edges. A scenic route is a traversal on a scenic graph. In this paper, we address this novel problem by (i) designing algorithms for scenic route generation, (ii) studying the algorithms' different properties and (iii) analysing the routes generated by these algorithms. Scenic routes have applications in geo-spatial visualizations and visual analytics. Scenic Routes, Graph Traversals, 2D point configurations, Scenic Points, Equidistant
§ INTRODUCTION
§.§ Our Motivation
Humans have an implicit awareness of scenic beauty when traveling over various routes. The question arises: how do we transfer our scenic awareness over to two-dimensional data? Here, scenic beauty is defined within the context of a user and a set of viewpoints. Given a context, one can define scenic beauty to be a uniform and balanced view of points of interest. In particular, we define a scenic point to be one that provides us with an equidistant view of at least two points of interest, with the purpose of achieving balanced views. Thus, the onus in this work is to show the nature of scenic routes in a two-dimensional environment. We consider points of interest to be either red or blue colored, with scenic beauty being an equidistant view of a red and a blue point. In particular, we introduce scenic routes to pairs of (red, blue) points and give illustrative situations where such routes are viable in the real world. Moreover, we consider the scenic quality of these routes and present different characteristics of the scenic beauty offered by a route. Consider the Giza Necropolis as shown in Fig. <ref>, specifically the pyramid of Khufu and the pyramid of Khafre. Treating the tops of the two pyramids as points of interest (shown in blue), we can draw a scenic route (shown in pink) through the necropolis. Traveling on this scenic route guarantees an equidistant view of both points of interest, giving us a balanced (scenic) view of both pyramids. Highlighted in green are two points on the scenic route. The city roads and the scenic route intersect at these points: such points are accessible to the viewer and give a scenic view of the pyramids.
§.§ Scenic Route Problem
Consider a rectangular region with N red and M blue points. We define a scenic point as a point that is equidistant to a red point and a blue point in a pair of red-blue points. A path on which each point is scenic is termed a scenic path. Likewise, a collection of scenic paths forms a scenic route.
Each point on the perpendicular bisector of a line joining a pair of red-blue points is scenic. Therefore, the entire perpendicular bisector is a scenic path. Scenic routes are generated by putting together segments of these bisectors/scenic paths. The intersection points of multiple bisectors offer multiple scenic views as they produce an equidistant view to multiple pairs of points. Moreover, these intersection points offer opportunities to change directions, i.e., move from one bisector to another while maintaining a scenic view. Sets of red and blue points can also be considered as two separate classes in a dataset. In a classification problem, the scenic routes across points from two classes may provide insight regarding the closeness and relation between the classes. In this paper, we design methods to find scenic routes in 2-dimensional space. Since the routes must be scenic, we can ignore the actual red-blue points and only focus on the bisectors and the points where the bisectors intersect. Moreover, we can transform this set of intersection points and scenic segments between them into a graph connected by the bisectors forming a "scenic graph." A traversal over such a scenic graph would give us a scenic route. §.§ Formulation Let the set of red points be R, the set of blue points be B, and the set of points where the bisectors intersect be I_P. Segments of the bisectors between these intersection points form the edge set E_P, which contribute as edges of the scenic paths. The weight of each edge is the Cartesian distance between its endpoints. The graph over all the intersection points is G(I_P, E_P). Moreover, for a selected route; let the set of points on a route be S ⊆ I_P, and the edges within the route be E ⊆ E_P. We represent a scenic route as a graph using the notation: G(S, E). We explain the problem further by using Figure <ref>. Consider the point configuration containing one red point {R_1} and three blue points {B_1, B_2, B_3}. {M_1, M_2, M_3} represent the midpoints of the lines joining them. The green lines are the perpendicular bisectors of these lines. The green lines represent scenic paths. The intersection points of these perpendicular bisectors are {IP_1, IP_2, IP_3}. Therefore, to obtain a scenic route, we consider the perpendicular bisectors. Further, to move from one bisector to another, we use their intersection points. In this case: I_P = {IP_1, IP_2, IP_3} E_P = {[IP_1, IP_2], [IP_2, IP_3], [IP_3, IP_1]} The corresponding scenic graph is G(I_P, E_P). Edges and points within a scenic route are in pink. §.§ Paper Contribution and Organization The contribution of this paper is: (1) articulation of the scenic routes problem, (2) introduction of scenic routes (based on the equidistant principle) in two-class point configurations in 2D spaces, (3) design scenic route generation algorithms fulfilling different requirements. Note that the problem is challenging in 2D space with two sets of points. Increasing the number of dimensions and the number of colors is beyond the scope of this paper. In Section <ref> we delve into the meaning of scenic routes, lay down requirements for a route to be considered as scenic, and deal with a possible spatial restriction that can be applied to I_P. Section <ref> contains objectives, explanations, and the motivations behind each scenic route generation algorithm. Section <ref> then discusses how well the routes generated using our algorithms perform on the scenic requirements. 
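Before turning to scenic traversals, the construction above can be made concrete with a short sketch. The following is not the authors' implementation; it is a minimal assembly of the scenic graph G(I_P, E_P) that forms the perpendicular bisector of every red-blue pair, intersects all bisector pairs, and connects consecutive intersection points along each bisector with Cartesian edge weights. It assumes general position (parallel or concurrent bisectors are not handled specially), and all function names are ours.

```python
# Sketch (not the authors' implementation): assembling the scenic graph G(I_P, E_P).
import itertools
import numpy as np

def bisectors(red, blue):
    """One (midpoint, direction) pair per red-blue pair; each perpendicular
    bisector is the scenic path of that pair."""
    lines = []
    for r, b in itertools.product(red, blue):
        r, b = np.asarray(r, float), np.asarray(b, float)
        d = b - r
        lines.append(((r + b) / 2.0, np.array([-d[1], d[0]])))  # direction perpendicular to rb
    return lines

def intersect(l1, l2, eps=1e-12):
    """Intersection of two infinite lines given as (point, direction), or None."""
    (p1, d1), (p2, d2) = l1, l2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < eps:                        # (nearly) parallel bisectors
        return None
    t = ((p2 - p1)[0] * d2[1] - (p2 - p1)[1] * d2[0]) / det
    return p1 + t * d1

def build_scenic_graph(red, blue):
    """Return the intersection points I_P and the weighted edge set E_P."""
    lines = bisectors(red, blue)
    pts, on_line = [], [[] for _ in lines]    # I_P and intersections per bisector
    for (i, l1), (j, l2) in itertools.combinations(enumerate(lines), 2):
        x = intersect(l1, l2)
        if x is None:
            continue                          # a bounding-box filter could also go here
        pts.append(x)
        on_line[i].append(len(pts) - 1)
        on_line[j].append(len(pts) - 1)
    edges = {}                                # E_P, weighted by Cartesian length
    for k, (p0, d) in enumerate(lines):
        order = sorted(on_line[k], key=lambda i: np.dot(pts[i] - p0, d))
        for a, b in zip(order, order[1:]):    # consecutive points share an edge
            edges[(a, b)] = float(np.linalg.norm(pts[a] - pts[b]))
    return np.array(pts), edges, lines, on_line

# e.g. one red and three blue points in general position:
# pts, edges, lines, on_line = build_scenic_graph([(0.0, 0.0)],
#                                                 [(1.0, 0.1), (0.2, 1.0), (1.3, 0.9)])
```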
§ SCENIC ROUTES/TRAVERSALS A scenic route is defined as a route over I_P via the bisectors (since it is over I_P and we only use bisectors, it is guaranteed to consist of scenic paths). Many such routes with different characteristics are possible. Therefore, there needs to be a characterization of the properties of a scenic route. A scenic route has the following requirements, in decreasing order of importance: * Only Scenic: The route must consist of scenic paths. Any non-scenic path is distracting and must be avoided. * Completeness: Travelling on the route must allow one to have a view of a large number of red-blue pairs. It is preferable for a route to give a view of all red-blue pairs (that is, all |R|·|B| pairs). The ability to view a larger number of red-blue pairs on a scenic route would add to the scenic beauty offered by the route. Ideally, all scenic views for all pairs of red-blue points must be covered by a scenic route. * Minimal Edges: A route must not have a large number of edges. Traveling on a route must allow for long, uninterrupted stretches of scenic points. In other words, there should not be a large number of direction changes within a route. * Minimal Repeated Edges: A route must minimize the number of repeated edges. Repeated edges are defined as stretches of bisectors that must be traveled multiple times (repeated) to complete the entire route. Repeated edges come into play in order to produce a closed path to return to the starting point. Repeated edges within scenic routes offer the same view multiple times. These edges unnecessarily increase the total length of the scenic route without offering any novel scenic views. Hence they should be minimized. In Fig. <ref>, the edge [α, β] is an example of a long repeated edge. If a user was to travel on this edge, the user would also need to travel back on the same edge, i.e., repeat the edge. To better understand why the above requirements and the ordering of these requirements generates a preferred scenic route, consider a person walking on a scenic route. * At all points during travel, the person should have a scenic view available (Req. <ref>). * Given that the red-blue points are points of interest, a person would want to view as many (preferably all) red-blue points as possible (Req. <ref>). * A person walking on a route would dislike a large number of direction changes, and would prefer longer paths to not distract from the scenic beauty available. (Req. <ref>). * Finally, a person would not want to traverse on the same path repeatedly because repeated edges do not offer any additional novel views, therefore unnecessarily increasing the total distance that needs to be traversed (Req. <ref>). A route that fulfills all four requirements listed above is a preferred scenic route. A route that fulfills fewer than four requirements is still scenic; however, such a route would not be preferred. For instance, a route may fulfill the first three requirements but include some repeated edges. Moreover, the importance of each of these requirements and their tradeoffs must be taken into account when generating scenic routes. Having no repeated edges but viewing a small number of red-blue pairs is worse than having a number of repeated edges but viewing a large number of red-blue pairs. §.§ Spatial Restrictions The perpendicular bisectors between two pairs of red-blue points can be almost parallel, generating intersection points far from the red-blue pairs themselves. 
While the view given by such points is still scenic according to our definition, it is not a practical view. In such cases, it makes sense to remove such distant intersection points from the set I_P. We remove these intersection points by using a bounding box around the set of red and blue points. The bounding box does not need to be a minimum bounding box; it can be made larger or smaller by some quantity δ. Refer to Fig. <ref> for an example of a configuration restricted by a bounding box. § SCENIC ROUTE ALGORITHMS As mentioned in Section <ref>, the problem is to generate scenic routes over the set of bisectors and intersection points. In this section, we design two scenic route generation algorithms and describe their underlying motivations and objectives. Each algorithm needs the graph G(I_P, E_P), where I_P is the set of intersection points of the bisectors and E_P is the set of edges (scenic paths) between these intersection points. The algorithms designed require knowing the shortest path that connects a pair of points. This shortest path is computed using the Floyd-Warshall all-pairs shortest path algorithm. We use the notation d(a,b) to denote the distance between two points taken from the precomputed results of the Floyd-Warshall algorithm. The algorithms use the pre-calculated output of the All-Pairs Shortest Path (APSP) algorithm on G(I_P, E_P). We present below the algorithms that determine scenic routes. We will later analyze the presented algorithms to understand whether they satisfy the requirements of a preferred scenic route. §.§ Min-Max Hull The Min-Max Hull algorithm has two parts. First, a Min-Max function selects the intersection points to be considered. Next, the Min-Max Hull Algorithm connects the selected intersection points to create a closed scenic route. §.§.§ Min-Max Function: Given a set of intersection points I_P, the Min-Max function chooses the set of points that should be considered to create a scenic route. The algorithm itself is a refined version of the core idea of the CURE (Clustering Using REpresentatives) <cit.> algorithm. Let S be the set of points in a scenic route (initially ∅). The idea behind the Min-Max function is to find new points x ∈ I_P-S that are neither too close to S, nor too far away from it. If we choose points that are too close to S, we end up with cluttered routes and perform worse on Requirement <ref> of scenic routes. On the other hand, if we choose points that are too far from S, we end up with routes that go too far from the red-blue points to give meaningful scenic views. The concept of first maximizing the distance and then choosing the closest (minimal distance) point allows us to do precisely that. The function finds potential intersection points to be added to the set S, the set of intersection points in the output scenic route. Potential intersection points are selected using: x = arg min_{x_i ∈ (I_P - S)} ( max_{a ∈ S} d(x_i, a) ), where d(·,·) is the all-pairs shortest path distance between two nodes. §.§.§ Min-Max Hull Algorithm The objective of the Min-Max Hull algorithm is to obtain a convex scenic route with no repeated edges that satisfies the distance bound. Convex scenic routes perform well on two scenic requirements: they have fairly long, uninterrupted views (Req. <ref>) and have no repeated edges (Req. <ref>). Min-Max Initialization: Let x_1, x_2 be the two points in the set I_P which have the least distance between them. Then, S = [x_1, x_2].
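The Min-Max selection and initialization are straightforward to sketch once the all-pairs shortest-path distances on G(I_P, E_P) are available, e.g., from scipy's floyd_warshall. The helper names below are illustrative and not from the paper, and `edges` refers to the hypothetical weighted edge dictionary from the earlier graph-construction sketch.

```python
# Sketch: APSP distances on G(I_P, E_P) and the Min-Max selection rule.
import numpy as np
from scipy.sparse.csgraph import floyd_warshall

def apsp(n_points, edges):
    """Dense matrix of all-pairs shortest-path distances d(a, b)."""
    W = np.zeros((n_points, n_points))
    for (a, b), w in edges.items():
        W[a, b] = W[b, a] = w                  # zero entries mean "no edge"
    return floyd_warshall(W, directed=False)

def min_max_init(dist):
    """Initialization: the two intersection points with least pairwise distance."""
    d = dist + np.diag(np.full(dist.shape[0], np.inf))   # ignore self-distances
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return [int(i), int(j)]

def min_max_next(dist, S, candidates):
    """The candidate x_i in I_P - S minimizing max_{a in S} d(x_i, a)."""
    worst = dist[np.ix_(candidates, S)].max(axis=1)       # worst-case distance to S
    return int(np.asarray(candidates)[np.argmin(worst)])
```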
In this algorithm, we keep an upper bound B on the total route distance to control the length of the routes generated. The Min-Max Hull Algorithm first implements the Min-Max initialization described above. Then the algorithm repeatedly implements the following two steps (steps <ref>, <ref>) until the distance bound B is reached: We first generate the convex hull on the set of points S using the QuickHull algorithm <cit.>. We consider the points in the sequence that the QuickHull Algorithm provides us with. However, a bisector that connects two consecutive points computed by the hull may not exist. In such cases, the APSP algorithm is used to find the intermediary nodes (and the edges introduced by them) that connect the two intersection points. On including these extra edges, the final output may not be a convex hull but an altered version of a convex hull, as can be seen in Figure <ref>. One can also use the Graham scan algorithm instead of the QuickHull algorithm to incrementally generate the convex hull of S. §.§ Densest Line Algorithm An intersection point is a point where two bisectors meet. Hence, an intersection point gives scenic views of at least two pairs of red-blue points. A bisector can have several intersecting points on it and hence is a path that represents views to multiple red-blue pairs. The objective of the Densest Line algorithm is to identify such long, straight, uninterrupted scenic paths which reduce directional changes and maximise the number of views. However, a route solely consisting of long bisectors would perform very poorly on the repeated edges requirement (Req. <ref>). Therefore, there need to be some connecting paths between the long bisectors that allow users to view other red-blue pairs without traversing back on the same long bisector. It would make sense to connect the endpoints of the long bisectors using a hull so that once a user reaches the end of a bisector, the user can shift to another bisector using the hull instead of having to backtrack on the same bisector. In particular, we choose alpha shapes <cit.> as our hull algorithm. Alpha Shapes are a generalization of convex hulls over a set of points. While convex hulls are limited to being strictly convex, alpha shapes allow flexibility and can generate concave hulls on a set of points. This can be done by changing the value of the parameter α (α = 0 for convex hulls). Specifically, for a point set S, the alpha shape of S is the intersection of all closed discs with radius 1/α that contains all the points of S. The alpha shape is generated over the set of endpoints of the long bisectors, thereby connecting the bisectors. The scenic route is comprised of both the dense bisectors and the edges introduced as a result of the alpha shape hull. The user has a choice of whether to continue on the same dense line or shift to another dense line using the edges introduced due to the alpha shape. We consider the top K bisectors with the largest number of intersection points. K is a user input mentioning the number of bisectors that need to be considered for creating the scenic route. Increasing K will give more distinct scenic views (Requirement <ref>) but will also increase the possible directional changes and also the total route distance. The Densest Line Algorithm takes the following as input: K representing the number of bisectors to be considered. 
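Selecting the K densest bisectors can be sketched as follows, reusing the hypothetical bookkeeping (`lines`, `on_line`, `pts`) returned by the earlier graph-construction sketch; joining the collected endpoints by an alpha shape would rely on a separate library (e.g., the `alphashape` package), which is only indicated in a comment.

```python
# Sketch: the K bisectors carrying the most intersection points, plus their endpoints.
import numpy as np

def densest_bisectors(lines, on_line, pts, K):
    """Return the indices of the K densest bisectors and their endpoints
    (outermost intersection points along each bisector)."""
    ranked = sorted(range(len(lines)), key=lambda k: len(on_line[k]), reverse=True)
    top, endpoints = ranked[:K], []
    for k in top:
        if len(on_line[k]) < 2:
            continue
        p0, d = lines[k]
        order = sorted(on_line[k], key=lambda i: np.dot(pts[i] - p0, d))
        endpoints.extend([pts[order[0]], pts[order[-1]]])
    return top, np.array(endpoints)

# The endpoint set would then be joined by an alpha shape (e.g., the external
# `alphashape` package; a convex hull via scipy.spatial.ConvexHull corresponds
# to alpha = 0), with missing connections filled in by APSP paths.
```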
Figure <ref> shows an example of the densest line algorithm and the alpha shape hull where the pink lines represent the top K densest lines, and the blue lines are the lines generated on creating the alpha shape as described earlier. Repeated Edges: One drawback of using an alpha shape to draw the hull is that there is a possibility of repeated edges occurring. Consider the set F and sequence Λ from steps (<ref>) and (<ref>) of the algorithm above. Using an alpha shape to get the sequence Λ from the set F does not guarantee that each point in F will also be in Λ. As mentioned earlier, the scenic route is comprised of the points connected using the alpha shape and the K densest lines. The points in F - Λ will not be connected using APSP (since they are not in the alpha shape) and may become repeated edges. Refer to Fig. <ref>, where edge [α, β] is a repeated edge. Alpha shapes are preferred over convex hulls for this algorithm because generating a convex hull leads to higher chances of repeated edges occurring. Since we generate the set of points using convex hull algorithms but connect these points using the shortest scenic paths (APSP predecessors) between them, we may miss out on several bisector endpoints. Consider Fig. <ref>. Due to endpoints p, q, the endpoints α, β are not included in the convex hull leading them to become repeated edges, as a user traveling from β to α has no choice but to travel back to β from α. § ALGORITHM DISCUSSION As mentioned in Section <ref>, there are requirements for a route to be considered scenic. We now discuss the scenic nature of the routes generated by the algorithms in Section <ref>. Only Scenic (Refer requirement <ref>): Since both the algorithms generate routes on the bisectors of lines joining red-blue pairs, all the routes generated are compulsorily scenic. Therefore, both Densest Line and Min-Max Hull fulfill this requirement. Completeness (Refer requirement <ref>): A scenic route must give a view of many red-blue pairs (preferably all). For all the algorithms presented, the number of scenic views offered can be tweaked using the user inputs: B, the distance bound in the Min-Max Hull algorithm, and K, the number of dense bisectors in the Densest Line algorithm. However, because the number of scenic views is dependent on the user inputs (K and B), we cannot make any statements regarding how well scenic routes generated by each algorithm fulfill the Completeness requirement. The completeness requirement will be fulfilled if K and B are sufficiently large. When K is the total number of bisectors or when B is ∞, the scenic route will give views of all the red-blue pairs. Minimal Edges (Refer requirement <ref>): It is desirable that a scenic route should have few direction changes and should contain long uninterrupted stretches of scenic views. * Min-Max Hull: The convex hull contains a large number of fairly straight, long lines. Since the route comprises only the convex hull, the routes contain straight and long lines, leading to few direction changes. * Densest Line: While the algorithm is motivated by trying to make routes with minimal edges (it picks out long, straight paths in routes), the alpha shape that is plotted on top can give rise to unnecessary direction changes. While both algorithms perform well on this requirement, the Min-Max Hull algorithm outperforms Densest Line due to the reasons highlighted above. Minimal Repeated Edges (Refer requirement <ref>): A scenic route has a small number or no repeated edges. 
* Min-Max Hull: Because the outer hull is generated using a convex hull, this is a good algorithm for this metric. No repeated edges will exist due to the properties of the convex hull. * Densest Line: Because the alpha shape is used to generate the outer hull, there is a slight possibility of a repeated edge occurring (refer to Fig. <ref>, point 11 is connected to a repeated edge). In case no outer hull is generated, almost all the edges will be repeated edges (refer to Fig. <ref>). Therefore, the Min-Max Hull algorithm (<ref>) outperforms Densest Line on this requirement as well. Overall, the Densest Line algorithm is better if one wants to have routes that have long, view-dense stretches, while the Min-Max Hull algorithm is better if one wants to have a shorter route that gives a large number of views. § ILLUSTRATIONS We illustrate our algorithms by generating scenic routes on a real-world example, similar to the illustration in Fig. <ref>. For these illustrations, we remove the red-blue pairs and instead consider scenic points to be equidistant to any two points of interest. The example chosen is the Capitol Hill area in Washington DC, USA. The geographic area along with the chosen points of interest are highlighted (green) in Fig. <ref>. The points of interest in the dataset are naturally symmetric, reinforcing the concept of scenic routes based on the equidistant principle. We present scenic routes generated using the Densest Line (Section <ref>) algorithm. §.§ Capitol Hill, USA The seven points of interest chosen are the US Capitol, the Supreme Court, the Library of Congress, the National Museum of the American Indian, the National Gallery of Art, and the Garfield and Peace Monuments. There are 42 bisector intersection points within the bounding box. A scenic route generated using the Densest Line algorithm (Refer to Sec. <ref>) can be found at Fig. <ref>. §.§ Discussion The Capitol Hill contains city roads that provide routes to the points of interest. The scenic paths in the scenic routes generated by the densest line algorithm match many of the city roads and hence assure the "scenic" aspect of these roads. Other scenic paths intersect city roads at specific points; therefore, those roads give scenic views of points of interest at some locations. § CONCLUSION Given a set of red and blue points in 2D space, a scenic point is equidistant to a red point and a blue point. We develop the problem of scenic points and scenic paths to scenic routes and traversals. In this paper, we introduced the concept of scenic routes in two-class point configurations in 2D spaces and a characterization of the properties of a scenic route. We present two scenic route generation algorithms and analyze the routes generated by these algorithms. Finally, we generate scenic routes on the Capitol Hill area. Ours is a preliminary work on this problem that opens up exciting theory and practical implementation challenges to design and analyze scenic route algorithms. The notion of equidistant scenic points provides a balance and beauty to a scenic path. There can be other definitions of scenic points with corresponding scenic routes, but the properties and strategies of densest line and min-max may not change, making them universal approaches to address the problem. ACM-Reference-Format
http://arxiv.org/abs/2306.07986v1
20230609123334
A high-order fully Lagrangian particle level-set method for dynamic surfaces
[ "Lennart J. Schulze", "Sachin K. T. Veettill", "Ivo F. Sbalzarini" ]
cs.CE
[ "cs.CE", "cs.NA", "math.NA" ]
A high-order fully Lagrangian particle level-set method for dynamic surfaces Lennart J. Schulze Sachin K. T. Veettill Ivo F. Sbalzarini ================================================ We present a fully Lagrangian particle level-set method based on high-order polynomial regression. This enables closest-point redistancing without requiring a regular Cartesian mesh, relaxing the need for particle-mesh interpolation. Instead, we perform level-set redistancing directly on irregularly distributed particles by polynomial regression in a Newton-Lagrange basis on a set of unisolvent nodes. We demonstrate that the resulting particle closest-point (PCP) redistancing achieves high-order accuracy for 2D and 3D geometries discretized on highly irregular particle distributions and has better robustness against particle distortion than regression in a monomial basis. Further, we show convergence in a classic level-set benchmark case involving ill-conditioned particle distributions, and we present an application to an oscillating droplet simulation in multi-phase flow. § INTRODUCTION The numerical representation of non-parametric surfaces is a key part of many spatio-temporal simulations, e.g., in additive manufacturing <cit.>, geology <cit.>, and biology <cit.>. This has motivated research into geometric computing algorithms that can achieve high accuracy for representing non-parametric surfaces at low computational cost. Ideally, the algorithms should also be parallelizable in order to leverage high-performance-computing and GPU resources. Due to their geometric expressiveness and parallelizability, level-set methods <cit.> have emerged as a popular approach to geometric computing for non-parametric surfaces. They have been successfully applied to real-world problems involving complex geometries, such as the growth, division, and reconnection of tumors <cit.> and the diffusion of membrane proteins on the highly curved endoplasmic reticulum <cit.>. In level-set methods, a two-dimensional surface embedded in a three-dimensional space is described implicitly as the (usually zero) level-set of a scalar function over the embedding space. Often, the level-set function is chosen to be the signed distance function (SDF) to the surface. This guarantees that the level-set function is smooth and continuously differentiable near the surface <cit.>, and it simplifies surface computations as the level-set function value directly represents the shortest distance to the surface. Level-set methods also generalize well to dynamically moving and deforming surfaces, as changes in the shape of the surface amount to advecting the level-set function. When advecting the level-set function, however, the signed-distance property is in general not conserved. Indeed, for deformation velocity fields that do not describe rigid-body motion, the advection of the level-set function in the embedding space destroys the signed-distance property <cit.>. This can be avoided by correcting the advection velocity away from the surface, for example by variational penalties <cit.> or Lagrange multipliers <cit.>. These approaches, however, are often inaccurate or accumulate advection errors over time. The most popular approach, therefore, is to recompute the SDF whenever the surface has deformed. This is known as level-set redistancing, where the SDF in the embedding space is recomputed from the reconstructed current location of the surface. Several conceptually different methods for level-set redistancing are available: Sussman et al.
<cit.> performed redistancing by evolving an auxiliary Partial Differential Equation (PDE) in pseudo-time, which has the SDF as a steady-state solution. Their approach achieves high accuracy on regular Cartesian grids and avoids numerical instabilities of the pseudo-time evolution by using higher-order ENO/WENO finite-difference schemes <cit.>. The resulting algorithm, however, is computationally expensive as it amounts to evolving a PDE to steady state in the embedding space with sometimes stringent time-step limitations <cit.>. The point of computational efficiency has been addressed by fast marching or sweeping methods <cit.>, which propagate the level-set values from the surface outward as a moving front. While this is computationally more efficient, it is limited to lower-order finite-difference schemes for which the resulting algebraic equations can be analytically solved, and it creates data dependencies that hamper parallelization <cit.>. A third approach therefore aims to directly compute the distance to the surface independently for all points in the embedding space by finding for each query point the closest point on the surface <cit.>. This closest-point (CP) transform has mainly found application in the numerical solution of surface PDEs <cit.> and has since been extended to level-set resdistancing using higher-order polynomial regression <cit.>. The resulting method achieves high orders of accuracy, is computationally efficient, and parallelizable. While originally formulated for regular Cartesian grids, it has recently also been demonstrated on unstructured grids <cit.> and triangulations <cit.>, confirming the versatility of the CP approach. Despite the efficiency and versatility of CP redistancing, however, algorithms that achieve high orders of convergence are so far limited to connected meshes and discretize level-set advection in an Eulerian frame of reference by evolving the level-set function values at the mesh nodes. In such an Eulerian mesh-based approach, fulfilling conservation laws becomes nontrivial, and the numerical stability and adaptivity of the overall framework is limited by the CFL condition. Both limitations can be relaxed when discretizing level-set advection in a Lagrangian frame of reference, where the discretization points move with the local advection velocity and preserve their level-set function values. Lagrangian level-set methods have been shown to be highly geometry-adaptive with excellent numerical stability <cit.> while maintaining the general conservation properties <cit.> of Lagrangian particle methods. In Lagrangian particle methods, however, particle distributions become increasingly irregular as particles move with the advection velocity. This hampers level-set redistancing, for which so far only first-order methods exist that aim to regularize the level-set function by renormalization <cit.>. Previous higher-order approaches exclusively operate on grids, which requires interpolation of the level-set function values from the Lagrangian particles to the grid nodes before redistancing <cit.>. This introduces additional computational cost and interpolation errors. While high-order particle-to-mesh interpolation schemes exist <cit.>, they are based on conservation laws for the moments of the represented function, which do not apply to level-set functions. While again renormalization approaches have been proposed to address the problem <cit.>, their convergence is limited to linear order. 
Here, we present a fully Lagrangian particle CP level-set redistancing method that achieves higher-order convergence without requiring interpolation to a structured or unstructured intermediate mesh. The method directly operates on Lagrangian particles, maintaining their conservation properties and stability, while simplifying level-set advection for dynamic surfaces by inheriting the accuracy, computational efficiency, and parallelizability of CP redistancing. In the present method, particles in a narrow-band around the surface carry and advect the level-set function values. After advection, the SDF is recomputed directly on the irregularly distributed particles by finding their respective closest points on the surface. We do this using high-order polynomial regression in a Newton-Lagrange basis on unisolvent nodes. The analytical form of the local regression polynomials enables straightforward computation of derivative geometric quantities, such as surface normals and local curvatures. The regression nodes are a suitably chosen subset of particles in the narrow-band around the surface. We show that a clever choice of local unisolvent nodes maintains high-order convergence even on highly irregular particle distributions in the narrow-band. We discuss accuracy, stability, and performance of the method on benchmark geometries with analytically known SDF, a vortex flow problem, and a multi-phase hydrodynamics application. In the latter, we compare the present approach with a Smoothed Particle Hydrodynamics (SPH) method <cit.>. § LEVEL-SET METHOD Level-set methods describe an evolving surface Γ_t implicitly as the zero level-set of a scalar function ϕ: Γ_t={𝐱 : ϕ(𝐱,t)=0}, with 𝐱∈ℝ^n_d being the coordinates in n_d-dimensional space, and t being the time. Due to favourable properties in accuracy and volume conservation, as well as simplified computations of surface-geometric quantities such as normals and curvatures, a popular choice for the level-set function is a signed distance towards the surface: ϕ(𝐱)=±𝐜𝐩(𝐱)-𝐱_2, in which 𝐜𝐩(𝐱) is the closest-point function that yields the closest point of a given location on the surface, measured in L_2-distance. The sign of the level-set function (<ref>) is chosen as positive if outside, and negative if inside of a closed surface. A natural property of the SDF (<ref>) is that its gradient has unit length, ∇ϕ(𝐱)_2 = 1. Level-set functions also allow for a computation of derivative fields associated with surfaces, such as the surface normal 𝐧=∇ϕ(𝐱)/∇ϕ(𝐱)_2 and the local mean curvature κ=∇·𝐧 if n_d=2, and κ=1/2∇·𝐧 if n_d=3. The surface Γ_t can move and deform with velocity 𝐮(𝐱, t) over time t. After any movement, material points lying on the surface remain on the surface: Dϕ/Dt=0. Eq. (<ref>) is formulated using the material derivative D(·)/Dt=∂(·)/∂ t + 𝐮·∇(·) and generally only holds for material points on the surface, i.e. on the zero level-set. Material points surrounding the surface can either approach to or recede from the surface, which is not accounted for by Eq. (<ref>), as the velocity field 𝐮 is embedded in the 3D space. If changes in distance towards the surface are not taken into account, the level-set function ceases to be the SDF and the property (<ref>) is lost, hampering the computation of surface normals and local curvatures. Therefore, level-set redistancing is used to restore the SDF property. 
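A small grid-based illustration (deliberately not the particle method introduced in the next section) makes these definitions concrete: it checks the unit-gradient property of an analytic circle SDF and evaluates the normal and mean-curvature formulas numerically. All variable names are ours.

```python
# Grid-based illustration only: unit-gradient check, normals, and mean
# curvature for the SDF of a circle of radius 0.5.
import numpy as np

h = 1.0 / 256
x = np.arange(-1.0, 1.0, h)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5                 # SDF: positive outside, negative inside

gx, gy = np.gradient(phi, h, h)                  # grad(phi)
gnorm = np.sqrt(gx**2 + gy**2) + 1e-300
nx, ny = gx / gnorm, gy / gnorm                  # n = grad(phi) / |grad(phi)|
kappa = np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)   # kappa = div(n), n_d = 2

near = np.abs(phi) < h                           # cells next to the zero level-set
print(gnorm[near].mean())                        # ~1, the unit-gradient property
print(kappa[near].mean())                        # ~1/R = 2 for this circle
```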
§ REDISTANCING ON IRREGULAR PARTICLE DISTRIBUTIONS We represent the surface Γ_t using a finite set of n_p discrete Lagrangian particles P_i, i∈{1,… ,n_p}. These particles store and advect the level-set function values ϕ_i in addition to their position. At the beginning of a simulation, the particles are usually seeded uniformly or uniformly at random with inter-particle spacing h in a tubular neighborhood of diameter w around the surface, which we call narrow-band in accordance with the usual level-set terminology. Then, the particles move with the deformation velocity 𝐮 of the surface, and the particle distribution can become arbitrary. This Lagrangian formulation offers a number of benefits: rigid-body movements of the surface are simulated exactly <cit.>, aside from time integration errors, and the method has better time-integration stability as the Lagrangian CFL number is larger than the Eulerian CFL number <cit.>, and no grid data structure needs to be allocated and maintained. We extend the CP redistancing proposed by Saye <cit.> to irregularly distributed particles in a narrow-band of width w around the surface, as illustrated in Fig. <ref>. The set containing all particles of the narrow-band is 𝒩 = {P_i : |ϕ(𝐱_i)| < w/2}. Before redistancing, an additional set of sample particles 𝒮 that lie precisely on the surface is created. For this, particles immediately adjacent to the surface are used as starting points (set 𝒞 in Fig. <ref>). Using local polynomial regression over neighborhoods of such near-surface particles, the level-set function ϕ (𝐱) is locally approximated. The polynomial approximation is then used to project the location of the particle onto the zero level-set, yielding the corresponding sample particle. For redistancing, the following steps are then computed for all particles in 𝒩: First, we find the closest sample particle using a cell-list acceleration data structure <cit.>, which serves as an initial guess for solving a constrained optimization problem. This problem minimizes the distance between the query particle and another point under the constraint that the polynomially approximated level-set function at the location of the other point is zero. This is optimized over the same local regression polynomial used to generate the sample particles and yields the closest point on the zero level-set of the local regression polynomial. The distance between the so-computed closest point and the query particle is the corrected level-set value that restores the SDF. This procedure crucially hinges on the numerical properties of the polynomial regression scheme used. In the spirit of particle methods, we perform polynomial regression in a local radial neighborhood of a particle. The radius r_c of this neighborhood needs to include at least as many particles as are required to determine the coefficients of the regression polynomial for a given polynomial degree. Further, the spatial arrangement of the particles within the regression neighborhood cannot be dependent in a way that renders the Vandermonde matrix of the regression problem singular. In the next section, we therefore pay particular attention to how polynomial regression is done here. §.§ Local polynomial approximation of the level-set function The sample particles provide a coarse estimate of the surface and are obtained by first finding a set 𝒞⊂𝒩 containing particles in the narrow-band that are close to the surface. 
A particle is in 𝒞 if it has another particle with opposite sign of the level-set function value within a certain radius ξ. This radius is a parameter of the method. Larger thresholds create more sample particles, thus improving the sampling of the surface, but causing higher computational cost. In this paper, we use a threshold radius of ξ=1.5h throughout. Then, 𝒞 is a small subset of 𝒩. For each particle P_i∈𝒞, we approximate the level-set function as a continuous polynomial obtained by local least-squares regression. As we show in Sec. <ref>, this is simpler and more accurate than directly using a particle-function approximation of the level-set function. Using n_c monomials M_k(𝐱), the local approximation of the level-set function reads: ϕ(𝐱)≈ p_i(𝐱)=∑_k=0^n_cc_k M_k(𝐱) . The choice of the number and type of monomials can be made depending on the requirements of the application. We determine the coefficients c_k of p_i(𝐱) by iterating over the neighborhood of a particle P_i∈𝒞 that contains n_n particles including P_i itself. In this neighborhood, we assemble the regression matrix 𝐀∈ℝ^n_n× n_c, the unknown coefficient vector 𝐜∈ℝ^n_c, and the right-hand side vector containing the level-set function values ϕ∈ℝ^n_n. Then, the linear system of equations 𝐀𝐜=ϕ is solved using orthogonal decomposition. If n_n=n_c and 𝐀 has full rank, the polynomial p_i(𝐱) is the interpolation polynomial. If n_n>n_c, p_i(𝐱) is the least-squares solution for the polynomial regression problem. The condition number of the regression matrix depends on the distribution of particles and the choice of basis M_k(𝐱). We do not have control on the former, but we can choose the polynomial basis M_k(𝐱). As a basis, we use the Lagrange polynomials L_k(𝐱) determined on a Chebyshev-Lobatto grid G⊆ [-1,1]^n_d in the regression neighborhood, i.e., L_k(𝐪_l) = δ_k,l, 𝐪_l ∈ G, where δ_k,l is the Kronecker delta <cit.>. Therefore, the points at which the polynomial basis is computed (i.e., the nodes of a local Chebyshev-Lobatto grid) in general differ from the points p_i(𝐱) at which the regression polynomial is evaluated. We use basis polynomials of total degree (ℓ_1-degree) <cit.>. The regression matrix is constructed by evaluating the Newton form of the Lagrange basis polynomials on the regression nodes. This choice of basis has been shown to effectively regularize regression over randomly distributed points for a large class of analytic functions <cit.>. We refer to this regression approach as minter regression. Since Eq. (<ref>) holds on the surface after any movement, solving Eq. (<ref>) for each particle in 𝒞 yields a polynomial representation of the surface as the zero level-set of the local regression polynomials p_i(𝐱). Using this representation in the proximity of the “center” particle P_i∈𝒞, the locations of the sample particles 𝒮 can be determined. This is done by iterative projection onto the zero level-set of p_i(𝐱), using P_i∈𝒞 as a starting point and iterating 𝐱^k+1 = 𝐱^k-p_i(𝐱^k)∇ p_i(𝐱^k)/∇ p_i(𝐱^k)_2^2, k=0,1,2,… The iteration is stopped as soon as |p_i(𝐱^k)|<ε for a user-defined tolerance ε. Doing so for all particles in 𝒞 completes the sample particle set 𝒮, as illustrated in Fig. <ref>. The sample particles provide starting points for the subsequent search for the closest point of any query point (also between particles). They further act to store the regression polynomials and therefore the local geometry of the surface. 
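A sketch of the local regression and projection steps follows, with one deliberate simplification: it uses a plain monomial basis of total degree for readability, whereas the method described here uses a Newton-Lagrange ("minter") basis on Chebyshev-Lobatto nodes, which is far better conditioned on irregular particle neighborhoods. Function names are ours.

```python
# Sketch of the local level-set regression and the projection onto its zero
# level-set, simplified to a 2D monomial basis of total degree.
import numpy as np

def total_degree_terms(deg):
    """Exponent pairs (i, j) with i + j <= deg (2D, total degree)."""
    return [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]

def fit_local_poly(xy, phi, deg=4):
    """Least-squares fit of p(x, y) = sum_k c_k x^i y^j to neighborhood values;
    in practice the coordinates should be shifted/scaled to the neighborhood."""
    terms = total_degree_terms(deg)
    A = np.column_stack([xy[:, 0]**i * xy[:, 1]**j for i, j in terms])
    c, *_ = np.linalg.lstsq(A, phi, rcond=None)
    return terms, c

def p_and_grad(terms, c, x):
    """Evaluate the regression polynomial and its gradient at x."""
    p, gx, gy = 0.0, 0.0, 0.0
    for (i, j), ck in zip(terms, c):
        p += ck * x[0]**i * x[1]**j
        if i > 0:
            gx += ck * i * x[0]**(i - 1) * x[1]**j
        if j > 0:
            gy += ck * j * x[0]**i * x[1]**(j - 1)
    return p, np.array([gx, gy])

def project_to_zero_level(terms, c, x0, eps=1e-12, kmax=100):
    """Iterative projection x^{k+1} = x^k - p(x^k) grad p(x^k) / ||grad p(x^k)||^2."""
    x = np.array(x0, float)
    for _ in range(kmax):
        p, g = p_and_grad(terms, c, x)
        if abs(p) < eps:
            break
        x = x - p * g / np.dot(g, g)
    return x
```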
The resolution with which 𝒮 samples the surface results from the particle distribution around the surface: Each particle in 𝒞, from both sides of the surface, creates one sample particle. For example, a straight 1D surface of length l embedded in 2D space, in which the domain has been discretized using an inter-particle spacing of h, would contain 2l/h sample particles. §.§ Finding the closest point on the surface for a given particle From the sample particles 𝒮 and the associated polynomials, the distance of any given location towards the surface can be computed. The query point may also lie between particles or outside of the narrow-band. In level-set redistancing, however, typically the query points are the particles 𝒩 within the narrow-band. For each query particle P_q∈𝒩 at position 𝐱_q, we look for the closest sample particle P_s∈𝒮, located at position 𝐱_s. In a well-sampled surface, we know that the closest point of P_q lies in a neighborhood of its closest sample particle. Hence, the zero level-set of the regression polynomial p_s is used as a local approximation of the surface. To find the closest point 𝐱 of P_q on the approximated surface, we solve the constrained optimization problem _𝐱1/2𝐱-𝐱_q_2^2 , s.t. p_s(𝐱)=0 , minimizing the distance under the constraint that the solution lies on the zero level-set of the regression polynomial. We reformulate this problem using a Lagrange multiplier λ and the associated Lagrangian ℒ(𝐱,λ)=1/2𝐱-𝐱_q_2^2+λ p_s(𝐱) . Stationary points of the Lagrangian fulfill ∇_𝐱,λℒ=( 𝐱-𝐱_q+λ∇ p_s(𝐱) p_s(𝐱) )=0 and are found using the Newton method. The subscripts 𝐱 and λ indicate differential operators with respect to both variables. As an initial guess 𝐱^0 for the iterative Newton method, we use the location of the closest sample particle: 𝐱^0=𝐱_s. The initial Lagrange multiplier is λ^0=(𝐱_q-𝐱^0)·∇ p_s(𝐱^0)/∇ p_s(𝐱^0)_2^2. Subsequently, we iterate (𝐱^k+1 λ^k+1)=(𝐱^k λ^k )-(H_𝐱,λℒ(𝐱^k,λ^k))^-1∇_𝐱,λℒ(𝐱^k,λ^k), where H is the Hessian, computed as H_𝐱,λℒ(𝐱,λ)=( I+λH p_s(𝐱) ∇ p_s(𝐱) ∇ p_s(𝐱)^⊤ 0 ). We perform the iterations in Eq. (<ref>) until the L_2-norm of the gradient in Eq. (<ref>) falls below the tolerance ε, or a maximum number of iterations k_max is reached. If the Newton iterations stray out of the support neighborhood of the sample point P_s, this can be detected and a new iteration can be started from the corresponding neighboring sample point. In all benchmarks presented in this paper, however, we did not encounter such a case. Following the definition in Eq. (<ref>), we use the resulting approximation of the closest point 𝐱≈𝐜𝐩(𝐱_q) to update the level-set function value of the query particle P_q: ϕ_q=sgn(ϕ_q)𝐱-𝐱_q_2 . Here, sgn denotes the sign function that ensures that the sign of the new SDF value is the same as the sign of the old level-set function value. §.§ Derivative surface quantities From the closest point of a given query particle and the polynomial approximation of the level-set function in the vicinity of the closest point, it is straightforward to compute derivative surface quantities, such as the surface normal and the local mean curvature at the closest point. We compute the surface normal at the closest point of a query particle P_q as 𝐧_q = ∇ p_s(𝐜𝐩(𝐱_q))/∇ p_s(𝐜𝐩(𝐱_q))_2 , and the local mean curvature as κ_q=∇·𝐧_q in 2D simulation embedding domains and κ_q=1/2∇·𝐧_q in 3D. Note that as during the previously outlined redistancing method, the gradient and Hessian of the polynomial are known analytically. 
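The constrained closest-point solve can be sketched as follows, continuing the hypothetical helpers above; `p`, `grad_p`, and `hess_p` stand for the local regression polynomial and its analytically known gradient and Hessian.

```python
# Sketch of the constrained closest-point Newton solve (KKT system).
import numpy as np

def closest_point_newton(p, grad_p, hess_p, xq, x0, eps=1e-12, kmax=100):
    """Newton iterations on grad L = 0 with L(x, lam) = 0.5 ||x - xq||^2 + lam p(x)."""
    x = np.array(x0, float)
    g = grad_p(x)
    lam = np.dot(xq - x, g) / np.dot(g, g)              # initial multiplier
    n = x.size
    for _ in range(kmax):
        g = grad_p(x)
        r = np.concatenate([x - xq + lam * g, [p(x)]])  # grad_{x,lam} L
        if np.linalg.norm(r) < eps:
            break
        H = np.zeros((n + 1, n + 1))                    # bordered (KKT) Hessian
        H[:n, :n] = np.eye(n) + lam * hess_p(x)         # I + lam * Hess p
        H[:n, n] = H[n, :n] = g                         # grad p and its transpose
        step = np.linalg.solve(H, r)                    # assumes H is nonsingular
        x, lam = x - step[:n], lam - step[n]
    return x, float(np.linalg.norm(x - xq))             # closest point and distance
```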
Evaluating the derivative at the closest point on the surface also yields a constant normal extension of surface normals and curvature values into the narrow-band. §.§ Narrow-band updates During level-set advection, particles may enter or exit the narrow-band of width w. We assign particles to the narrow-band set 𝒩 or its complement 𝒩^c depending on their level-set function value: P_i∈𝒩, if |ϕ_i|≤w/2 𝒩^c, if |ϕ_i|>w/2 . Keeping track of particles leaving the narrow-band is trivial, since a particle in 𝒩 moving outside the narrow-band will be redistanced with a SDF value >w/2 and is removed from the set 𝒩. New particles entering the narrow-band, however, cannot be detected based on their level-set function value, as they have not yet been redistanced. We therefore periodically redistance with an enlarged band width, which we refer to as skin in analogy to Verlet lists <cit.>. The thickness of the skin and the frequency with which it is used both depend on the dynamics of the simulation. At the beginning of a simulation, the skin should cover all particles that could possibly enter the narrow-band during the course of the entire simulation. The frequency of skin redistancing should be chosen such that a particle entering the narrow-band is detected as a narrow-band particle before its geometric information is required in the regression vicinity of the surface. To determine the frequency of skin redistancing, we consider the minimum time a particle requires to travel w/2 - (r_c+ξ), since r_c+ξ is the maximum distance from the surface below which a particle can contribute to the local regression polynomials, as highlighted in Fig. <ref>. With the maximum advection velocity 𝐮_max anticipated in a simulation, this requires a time of w/2-(r_c+ξ)/𝐮_max_2. Hence, the frequency of skin redistancing is f_redist, skin≥𝐮_max_2/w/2-(r_c+ξ) to ensure detecting any particle relevant to surface dynamics. § RESULTS We implement the above Particle Closest-Point (PCP) method in the scalable scientific computing framework OpenFPM <cit.> using the minter package <cit.> and the Eigen library <cit.> for regression. We characterize the accuracy and efficiency of the method in benchmarks ranging from basic geometries with known analytical SDF over standard dynamic-surface test cases to a multi-phase flow problem with interfacial effects. §.§ Basic geometries In his work on mesh-based closest-point redistancing, Saye <cit.> presents results for elementary geometries such as 2D ellipses and 3D ellipsoids. To demonstrate the utility of a method tailored to data on irregularly distributed particles, we first highlight the problems of the apparent alternative of interpolating particle data to mesh nodes and subsequently applying a mesh-based method. Then, we compare it with the proposed PCP method directly on the particles. Irregular particle distributions for the benchmarks are obtained by randomly shifting the nodes 𝐦_ij of a regular Cartesian mesh with spacing h: 𝐱_p = 𝐦_ij + 𝐗 , with random shifts X_d = α hμ , for d={1, … , n_d}. The pseudo-random variables μ∼𝒰[-1, 1] are i.i.d. from the uniform distribution over the interval [-1,1]. The shift amplitude α is always chosen <0.5 to ensure that no two particles coincide. §.§.§ Remeshing followed by closest-point redistancing As a baseline, we first characterize the classic approach of interpolating the particle values to a grid, followed by CP redistancing <cit.>. 
For this, we consider an ellipse in the 2D domain [-1,1]×[-1,1] discretized by particles with spacing h and shift amplitude α=0.3 covering the entire domain with periodic boundary conditions. The level-set values at the particles are initialized to ϕ(x,y)=1-√(x^2/A^2+y^2/B^2) , for an ellipse with semi-major axis A=0.75 and semi-minor axis B=0.5. The zero level-set of Eq. (<ref>) coincides with the zero level-set of the SDF of the ellipse. Away from the surface, however, this is not a SDF. We compute a SDF approximation by remeshing followed by mesh-based CP redistancing according to Saye <cit.>. We compare a variety of remeshing schemes as described in Appendix <ref>. We compute mesh-node values using the Λ_4,4 kernel in both the basic formulation (<ref>) and the renormalized formulation (<ref>), as well as using the renormalized Wendland C2 and Gaussian kernel functions (<ref>). We compare the remeshing results with the analytically known exact values ϕ_exact obtained by evaluating Eq. (<ref>) at the mesh nodes. The absolute remeshing error on each mesh node is then computed as e(ϕ(𝐦_ij))=|ϕ(𝐦_ij)-ϕ_exact(𝐦_ij)|. To characterize the convergence behavior, we increase the number of particles by decreasing the spacing h. For the Wendland and Gaussian kernels, we simultaneously increase the fraction ϵ/h according to Table <ref> as required <cit.>. The maximum error over all mesh nodes in a narrow-band of radius w/2=1/16 around the zero level-set is plotted in Fig. <ref>. As expected, the Λ_4,4 kernel in the basic formulation (Eq (<ref>)) does not converge with its theoretical order of four, as it is derived from moment conservation laws that do not hold for level-set functions. In fact, it diverges slowly, which is expected since the maximum error occurs in the most irregular particle neighborhood with an irregularity proportional to the total number of randomly perturbed particles in the domain. Convergence is restored with order 0.81 (still far from the theoretical 4) when renormalizing function values by the amount of contributions they experienced. If in addition to renormalizing function values, the contributions from individual particles are weighted by respective particle volumes, and the smoothing lengths of the kernel functions are increased sufficiently, convergence of order 1.3 is achieved for the Wendland C2 and Gaussian kernels. For these two, however, there is a noticeable effect from the irregular particle distribution, as they are expected to converge with order two for particles distributed on a regular Cartesian grid. We next test if CP redistancing applied to the interpolated mesh values can restore an overall higher order of convergence. We therefore plot the maximum error in the approximated SDF over the same narrow-band in Fig. <ref>. The reference SDF values were obtained using the method outlined in Ref. <cit.>. CP redistancing is done using fourth-order polynomials with a monomial basis of total degree (referred to as “Taylor 4” in Ref. <cit.>), such that we expect fifth-order convergence overall <cit.>. Due to the intermediate remeshing step, however, the errors converge slower than if redistancing was based on error-free mesh data. When remeshing with the standard or renormalized Λ_4,4 method, the overall convergence is of order 0.95 and 0.96, respectively. When remeshing with the Wendland C2 and Gaussian kernels, overall convergence in the SDF error is of order 1.3. 
In all cases where the remeshing itself converges, the overall redistancing accuracy is therefore limited by the particle-mesh interpolation error. An interesting exception is the Λ_4,4 kernel without renormalization, which does not converge by itself but convergence is restored by CP redistancing. This is because CP redistancing uses a fixed number of grid layers around the surface, rather than a fixed narrow-band width. If the absolute remeshing error is evaluated in a fixed number of grid layers, rather than a fixed-size narrow-band, convergence of order 0.9 is also seen for the remeshing alone. This is because the level-set function has smaller absolute values closer to the surface. The relative error per mesh node nevertheless remains constant. In summary, first interpolating from Lagrangian particles to a regular Cartesian mesh prevents mesh-based CP redistancing from reaching its design order of convergence as particle-mesh interpolation limits the overall accuracy. §.§.§ Particle closest-point redistancing We compare the above results with the present PCP redistancing method for the same ellipse example with the same irregular particle distribution (α=0.3). We use a fourth-order minter basis and a cutoff radius of r_c=2.5h. We set the tolerance ε=10^-14 and the maximum number of iterations k_max=1000. For the simple ellipse geometry, both the iterative projections and the Newton algorithm converge quickly, i.e. in 2 to 4 iterations, such that k_max has no effect on the results. Fig. <ref> shows the convergence in the SDF. In absence of any grid, the errors are now evaluated directly on the query particles P_q in the entire narrow-band. Even for coarse resolutions the error is orders of magnitude smaller than any accuracy achieved by remeshed CP redistancing. The theoretical fifth-order convergence is approximately reached, with a measured order of 4.8, until the error plateaus near the set tolerance ε=10^-14. We next assess the convergence of the PCP method for computing derivative surface quantities in a 3D narrow-band around an ellipsoid with semi-major axis A=0.75 and both semi-minor axes B=C=0.5. We discretize the level-set function on irregularly distributed particles (α=0.3) within a narrow-band of width w=12h and use a tolerance of ε=10^-14. The maximum error is reported over the entire narrow-band in Fig. <ref> for fourth- and fifth-order minter regression. For the fourth-order polynomials a cutoff radius of r_c=2.4h is used, for the fifth-order polynomials r_c=2.6h. The theoretical convergence orders are almost achieved when using fourth-order minter regression: the SDF converges with order 4.9, the closest-point transform 𝐜𝐩(𝐱_p) with 4.8, the surface normals 𝐧(𝐱_p) with 3.8, and the local curvature κ(𝐱_p) with 3.0 until they plateau at the highest resolution. The SDF and closest-point function converge with the same order and are separated by a constant offset. Since the normal field and local mean curvatures are computed as the derivatives of the SDF, they are third- and second-order polynomials, respectively, with theoretical convergence orders of 4 and 3. Using fifth-order polynomials increases both accuracy and convergence order (to 5.9) especially for the lower resolutions between h=1/32 until h=1/128 where round-off errors do not dominate. The convergence order of the other quantities computed with fifth-order PCP are also close to optimal (not shown in the figure to avoid clutter): 6.4 for 𝐜𝐩(𝐱_p), and 5.2 for 𝐧(𝐱_p), and 4.0 for κ(𝐱_p). 
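The benchmark setup used in this subsection (randomly perturbed particles, the non-SDF ellipse initialization, narrow-band maximum errors, and empirical convergence orders) can be reproduced along the following lines. This is a sketch with our own naming; the exact reference SDF of the ellipse or ellipsoid is left abstract, since computing the true distance to an ellipse requires its own root-finding.

```python
# Sketch of the benchmark setup; the reference SDF phi_exact is assumed given.
import numpy as np

def perturbed_particles(h, alpha, lo=-1.0, hi=1.0, seed=0):
    """x_p = m_ij + X with X_d = alpha * h * mu and mu ~ U[-1, 1]."""
    rng = np.random.default_rng(seed)
    g = np.arange(lo, hi, h)
    mesh = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
    return mesh + alpha * h * rng.uniform(-1.0, 1.0, size=mesh.shape)

def ellipse_level_set(xy, A=0.75, B=0.5):
    """phi(x, y) = 1 - sqrt(x^2/A^2 + y^2/B^2): correct zero level-set, but not
    a signed distance function away from the surface."""
    return 1.0 - np.sqrt(xy[:, 0]**2 / A**2 + xy[:, 1]**2 / B**2)

def narrowband_max_error(phi, phi_exact, w):
    """Maximum absolute SDF error over the narrow-band of width w."""
    band = np.abs(phi_exact) < w / 2.0
    return float(np.max(np.abs(phi - phi_exact)[band]))

def convergence_order(hs, errors):
    """Empirical order: slope of the log-log fit error ~ C * h^order."""
    slope, _ = np.polyfit(np.log(hs), np.log(errors), 1)
    return slope
```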
To test the robustness of the PCP method to distortions in the particle distribution, we vary the shift amplitude α and compare the results computed using a minter basis with baseline results obtained using a monomial basis. For both, we use fourth-order polynomials of total degree and a cutoff radius of r_c=2.4h. We consider the same ellipsoid as before with resolution fixed to h=1/256. Fig. <ref> shows the maximum error of the SDF within the narrow-band for α from 0 (i.e., regular Cartesian grid) to 0.49. As expected, the errors grow for increasing irregularity of the particle distribution. Comparing the minter basis with the monomial basis, the errors start to differ at α=0.25. Beyond this point, error on the monomial basis grows by two orders of magnitude as the regression problem becomes ill-conditioned. This demonstrates that indeed the minter basis and Chebyshev-Lobatto subgrids regularize the regression problem sufficiently to allow convergent redistancing on highly distorted particle distributions. Finally, we confirm that the PCP method behaves as expected when applied to non-smooth surfaces. For this, we perform PCP redistancing for a rounded rectangle (C^1) and a square (C^0). On irregular particle distributions (α=0.3), the SDF computed using the PCP method converges with order 2 for the rounded rectangle and order 1 for the square. These are the same convergence orders as for the mesh-based CP method <cit.>, agnostic towards the chosen regression basis and order (>0), since a smooth polynomial cannot describe the jump in the curvature (rounded rectangle) or the normals (square). Hence, the leading error term is always limited by the smoothness of the shape itself. §.§.§ Parallel scaling We test how well our OpenFPM implementation of PCP redistancing scales to multiple processor (CPU) cores. For this, we again use the 3D ellipsoid test case with fixed h=1/512, w=12h, and ε=10^-14 with a fourth-order minter basis and r_c=2.4h. We measure the wall-clock time t_cpu(n_cpu) for different numbers of processor cores n_cpu on an AMD Ryzen Threadripper 3990X CPU with 64 cores and 256 GB of shared memory. We report the parallel speedup S(n_cpu)=t_cpu(1)/t_cpu(n_cpu) for this strong scaling (i.e., fixed problem size) in Fig. <ref>, showing near-optimal scalability. For n_cpu=32, the parallel efficiency E(n_cpu)=S(n_cpu)/n_cpu is about 80%. The 20% communication overhead are consistent with the volume of the ghost layers required to localize the closest sample particle. Another likely reason is an uneven distribution of load amongst the processors. While we ensured that all processors have the same amount of particles, this does not necessarily result in the same surface area, and therefore |𝒞|, per processor. Hence there is a possible load imbalance during local polynomial regression and surface sampling. §.§ Spiraling vortex Moving beyond basic geometric shapes, and into dynamically deforming surfaces, we next consider a classic level-set benchmark case in which a circle is stretched and spiraled by an advection velocity field 𝐮(𝐱). The spiraling vortex reaches a state of maximum distortion, and subsequently the velocity field is reversed. At the end, an ideal method would recover the initial circle. In practice, however, numerical errors in the advection of the level-set function and the periodic redistancing accumulate in a final geometry that differs from the initial circle. This difference can be used to compare methods. 
For Lagrangian particle methods without remeshing, this test case also introduces another challenge: the particle distribution becomes increasingly distorted and inhomogeneous. The PCP redistancing method is thus confronted with excess information along some directions, and a lack of information along others. This makes it an ideal test case to assess the robustness of the PCP method on ill-conditioned particle distributions. The initial circle of radius R=0.15 centered at (0.5,0.75) in the 2D square domain [0,1]×[0,1] is discretized with a narrow-band (w=40h) of particles. The particles then move with the advection velocity 𝐮(𝐱,t)=2cos(π t/8)( -sin^2(π x)sin(π y)cos(π y), sin^2(π y)sin(π x)cos(π x) ). Time integration uses the explicit fourth-order Runge-Kutta scheme with time-step size Δ t=1/30. The velocity field is reversed at t=4.0, such that the simulation ends at t_end=8.0. At t=0, the level-set function values on all particles are initialized to the exact analytical distance to the circle. PCP redistancing with fourth-order minter regression is done after each time step of the advection with ε=10^-10, r_c=15h, and k_max=100. The reduced maximum number of iterations is required in this case, as the solver is unable to reach the desired tolerance for some of the most distorted particle distributions. The large cutoff radius reduces the accuracy in the early and late time steps of this example, yet it is beneficial during steps involving extremely ill-conditioned particle distributions, where the particle density along one direction differs significantly from the density in the other direction. The flow field causing particles to align poses severe limitations to polynomial regression approaches and also causes the approximated zero level-set to lose its smoothness during the simulation. The most ill-conditioned particle distribution (at t=4.0) is visualized in Fig. <ref>. The zero level-set of the SDF is shown as an isocontour as determined by Paraview <cit.>. As can be seen in Fig. <ref>, the particle distribution is particularly sparse in the direction orthogonal to the surface. This is challenging for any redistancing scheme, since the SDF solely depends on information in the orthogonal direction. Another challenging aspect of this particle distribution is that multiple zero level-sets are present in each r_c-neighborhood, as shown for one example particle in the closeup in Fig. <ref> (dotted circle). Running the simulation until the final time t_end=8.0, we can observe how well the PCP method with redistancing at each time step is able to recover the initial circle. Fig. <ref> visualizes the final isocontours reconstructed by Paraview <cit.> for three different resolutions. As expected, simulations with fewer particles suffer more. For h=1/64, the final level-set resembles a circle on average, but has noticeable errors. These errors do, however, converge with increasing numbers of particles, and the final shape for h=1/2048 is visually indistinguishable from the original circle. This can also be seen in the convergence of the average and maximum errors in the SDF and the enclosed area A (e(A)=|A-A_exact|/A_exact) of the level-set as reported in Table <ref>. The average error in the SDF converges with order 1.9, the maximum error converges with order 1.5, and the area converges with order 1.8. This confirms that the PCP method converges (albeit with sub-optimal order) despite the severely ill-conditioned particle distributions occurring during the spiraling vortex dynamics.
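A sketch of the particle advection for this benchmark is given below: the vortex velocity field with the reversal factor cos(π t/8), and a classical explicit RK4 step. The call to PCP redistancing after each step is only indicated by a placeholder name, `pcp_redistance`, which is ours.

```python
# Sketch of the spiraling-vortex particle advection; pcp_redistance is a placeholder.
import numpy as np

def vortex_velocity(xy, t, T=8.0):
    """u(x, t) = 2 cos(pi t / T) (-sin^2(pi x) sin(pi y) cos(pi y),
                                   sin^2(pi y) sin(pi x) cos(pi x))."""
    x, y = xy[:, 0], xy[:, 1]
    g = 2.0 * np.cos(np.pi * t / T)
    u = -np.sin(np.pi * x)**2 * np.sin(np.pi * y) * np.cos(np.pi * y)
    v = np.sin(np.pi * y)**2 * np.sin(np.pi * x) * np.cos(np.pi * x)
    return g * np.column_stack([u, v])

def rk4_step(xy, t, dt, vel=vortex_velocity):
    """Explicit fourth-order Runge-Kutta step for the particle positions."""
    k1 = vel(xy, t)
    k2 = vel(xy + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = vel(xy + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = vel(xy + dt * k3, t + dt)
    return xy + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# dt, t_end = 1.0 / 30.0, 8.0
# for step in range(int(round(t_end / dt))):
#     xy = rk4_step(xy, step * dt, dt)
#     # phi = pcp_redistance(xy, phi)      # redistance after every time step
```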
§.§ Multi-phase hydrodynamics As a final test case, we consider the more “real-world” example of droplet dynamics in multi-phase hydrodynamics in both 2D and 3D. In this test case, a closed surface is coupled to two surrounding fluid phases, one inside and one outside. The curved interface between the two fluids has a surface tension and thus exerts forces on the fluids that depend on its curvature and cause dynamic flows, which in turn again advect and shape the interface. Both fluids are modeled using the incompressible Navier-Stokes equations Dρ/Dt=-ρ∇·𝐮 , D𝐮/Dt=-1/ρ∇ p+1/ρηΔ𝐮 + 𝐅^(s) , where density is denoted by ρ, pressure by p, η is the dynamic viscosity, and Δ is the Laplace operator. The surface tension effect is modeled through a continuum surface force <cit.>. Here, a volumetric surface force is active in the smoothed out interface, or transition region, which can be described with a smooth surface delta function δ. The surface force then reads 𝐅^(s)=-τ/ρκ𝐧δ and acts in the normal direction of the surface and is proportional to both the surface tension τ and the local mean curvature κ. The continuity Eq. (<ref>) and momentum Eq. (<ref>) are complemented by the Cole equation of state <cit.> p(ρ)=c^2ρ_0/γ((ρ/ρ_0)^γ-1), linking the pressure p to the density ρ via a reference density ρ_0, a speed of sound c and the polytropic index of the fluid γ. We choose the speed of sound c=100 at least one order of magnitude larger than the maximum flow velocity 𝐮_max_2 to ensure Mach numbers Ma:=𝐮_max_2/c≲ 0.3 remain in the region of incompressible flow. We further set γ=7 and consider two identical fluids with ρ_0=1.0 and η=0.5, separated by an interface with τ=50.0. We solve the Navier-Stokes equations inside and outside the droplet without the interfacial forces 𝐅^(s) using a weakly compressible Smoothed Particle Hydrodynamics (SPH) approach <cit.>. Further details of the numerical method and the test case are given in Appendix <ref>. The interfacial forces 𝐅^(s) in Eq. (<ref>) are computed in two ways for comparison: (1) using the present PCP method to compute the SDF, the surface normals, and the mean curvatures, resulting in Eq. (<ref>), and (2) using a colorfield function and SPH operators <cit.>, resulting in Eq. (<ref>). For the quantitative analysis we first consider a 2D model. Here, we perform simulations for three different resolutions, h=1/32, 1/64, and 1/128 with a smoothing factor of ϵ/h=3. The time steps for the three different resolutions are Δ t=2e-4, Δ t=1e-4, and Δ t=5e-5. Initially, at time t=0, both fluid phases are at rest (zero velocity everywhere) and the interface is an ellipse (Eq. (<ref>)) with A=0.75 and B=0.5 embedded in the 2D domain (-1.22,1.22)×(-1.22,1.22) with periodic boundary conditions in both directions. Interface regions of higher curvature exert greater force than those of lower curvature, causing oscillating deformation of the incompressible droplet, eventually subsiding with the shape of minimal surface area, a circle. For the 3D case, this is visualized in the Supplementary Video of the web version of this article. We run the simulations until t_end=1. Fig. <ref> visualizes the particle distribution, the computed mean curvature values, and the zero level-set reconstructed from the advected level-set values at t≈0.15, when the droplet assumes its maximum vertical elongation. Table <ref> compares the computational times of the entire 2D simulation using the present PCP method and using colorfield-SPH for different resolutions h with the same Δ t=5e-5. 
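For reference, the equation of state and the surface-force term used in this test case can be sketched as follows. This is a minimal illustration with the stated parameters (ρ_0=1, c=100, γ=7, τ=50); the array layout and function names are ours, and the mean curvature κ, unit normals 𝐧, and smoothed delta function δ are assumed to be provided by the geometric computation (e.g., by PCP).

```python
import numpy as np

# Fluid and interface parameters of this test case.
RHO0, C, GAMMA = 1.0, 100.0, 7.0   # reference density, speed of sound, polytropic index
TAU = 50.0                          # surface tension coefficient

def cole_pressure(rho):
    """Cole equation of state p(rho) = c^2 rho0 / gamma * ((rho/rho0)^gamma - 1)."""
    return C ** 2 * RHO0 / GAMMA * ((rho / RHO0) ** GAMMA - 1.0)

def surface_force(rho, kappa, normal, delta):
    """Continuum-surface-force term F = -tau/rho * kappa * n * delta, per particle.

    rho    : (N,)  particle densities
    kappa  : (N,)  mean curvature at the interface
    normal : (N,d) unit surface normals
    delta  : (N,)  smoothed surface delta function (zero outside the transition region)
    """
    return -(TAU / rho)[:, None] * (kappa * delta)[:, None] * normal

# Weakly compressible assumption: Mach number |u|_max / c stays well below 0.3.
u_max = 3.0
assert u_max / C < 0.3
```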
Computing geometric quantities involves more computations per particle in the PCP method than in the colorfield-SPH approach, which is why the computational time is not smaller for PCP at low spatial resolution, even though the computations are confined to the narrow band. However, as the fraction of particles that are inside the narrow band decreases with decreasing average inter-particle spacing h, the computational cost of PCP reduces for finer resolutions compared to SPH. Fig. <ref> shows the narrow band of width w=15h, to which geometric computations are confined, for h=1/32. A key advantage of a closest-point approach can be seen in Figs. <ref> and <ref> (<ref>): geometric quantities are computed at the closest point, i.e., on the surface itself, and then extended along the surface normal to particles in the embedding space. For finite h, this differs from the colorfield-SPH approach, which computes geometric quantities of the isocontours of the level-set function on which the particles lie. Hence, it is unsurprising that in Fig. <ref> in <ref> the standard deviation of the computed mean curvature values is smaller for the PCP method. The minimum standard deviation of the mean curvature values occurs when the interface becomes a circle. For the colorfield-SPH approach at the highest resolution, the smallest achieved standard deviation is 0.105, while the values range from 8e-4 to 3e-4 for the different resolutions in the PCP approach. This is orders of magnitude better than the colorfield approach and generally satisfying, given that the results are limited by the time resolution of the simulations, and the exact moment the geometry becomes circular may not be captured. A further limiting factor is the accuracy of the SPH approximation of the remaining terms in the Navier-Stokes equations for the surrounding fluids. Looking at the average mean curvature across particles in the interface region (Fig. <ref>, <ref>), we notice that the PCP method is slightly biased towards higher curvatures than the exact value an ideally incompressible fluid would yield. The volume reconstructed from the computed SDF is consistent with the mean curvature. For h=1/128, the volume reconstructed from the SDF computed by PCP has a relative error of -0.5% at the end of the simulation after a total of 40,000 redistancing steps (computing the volume using colorfield-SPH is not straightforward). Given that the results from the colorfield-SPH approach have a similar bias for lower resolutions, but are less accurate, the bias is probably due to the simulated fluid not being ideally incompressible in the weakly compressible SPH approach. In summary, PCP not only computes curvature values at the correct isocontour, but the values are also less noisy than those from colorfield-SPH. This is because it does not use a binary indicator function, which heavily depends on the particle distribution, to capture the geometry. Finally, we also test the PCP multi-phase SPH fluid simulation for a 3D oscillating droplet with the initial shape of an ellipsoid with semi-major axis A=0.75 and both semi-minor axes B=C=0.5. We use the same parameters as in the 2D case with h=1/32. To lessen the computational burden stemming from the SPH operators, we reduce the smoothing factor to ϵ/h=2, and to alleviate the effort of the geometric computing, we reduce the narrow-band width to w=10h. Fig.
<ref> visualizes the particles of the inner fluid phase with their computed mean curvature values during the maximum vertical elongation of the droplet at t≈0.19. We also compute results for a lower viscosity value of η=0.3, visualized in the Supplementary Video of the web version of this article. Similarly as in the 2D simulations, the PCP method computes smooth mean curvature values of the dynamically deforming surface in 3D. Also in 3D, the PCP results are more accurate and computationally less expensive than the colorfield-SPH approach. This suggests that the PCP method is well suited for simulations involving dynamically deforming shapes. § DISCUSSION AND OUTLOOK We presented a higher-order redistancing scheme for fully Lagrangian particle level-set methods, extending closest-point redistancing <cit.> to irregularly distributed points. Unlike previous particle level-set methods <cit.>, the proposed approach does not require any form of interpolation from particles to mesh nodes, which we have shown to limit convergence for non-conserved level-set functions. The presented Particle Closest-Point (PCP) method relies on minter regression on Chebyshev-Lobatto subgrids to achieve numerical robustness. We have shown that this renders the method more robust to distortion in the particle distribution than regression using monomial bases. In the PCP method, the particles act as sample points of the level-set function, which is in contrast to colorfield approaches where particles directly represent the (smoothed) presence of a certain phase <cit.>. Our approach is purely geometric and allows for arbitrarily placed query points to be redistanced. Hence, the approach is readily applicable to the initialization of geometric quantities of new particles, be it in multi-resolution methods <cit.> or as a part of remeshing. We showed that the PCP method provides high-order convergent geometric quantities for basic geometries without requiring any connected mesh. We tested the robustness of the approach by studying highly irregular particle distributions and found that the high-order polynomial regression with a Lagrange basis on unisolvent nodes outperforms monomial regression approaches for irregular particle distributions, and thus is well-suited for a Lagrangian particle level-set method. This was also reflected during the application of the PCP method to ill-conditioned particle distributions in the spiraling vortex case, in which convergence was maintained. Finally, a more complex problem was studied in which an oscillating droplet was simulated in conjunction with SPH multi-phase flow operators. Overall, we found that the PCP method copes well with irregular particle distributions, yet we still expect it to require a certain homogeneity in the distribution as it benefited from larger smoothing lengths in the SPH operators. We also observed that some particles approached the zero contour and remained there. To avoid volume loss, we prevented particles from taking on a true 0 as a level-set value and from changing the sign of their level-set values <cit.>. This clearly is a limitation of the proposed approach and some form of particle distribution regularization will eventually be necessary. Another limitation is the non-uniform load distribution when implementing PCP methods on parallel computers. 
This is because the computational cost on a given processor not only depends on the number of particles it handles (which can be evenly distributed), but also on the local geometry of the surface (which is impossible to predict in a dynamic simulation), through the local regression problems. Nevertheless, the narrow-band character of the PCP approach confines the computational effort of the geometric computing framework to the proximity of the surface. We found that this leads to an overall faster simulation time than a naive colorfield-SPH approach in highly resolved multi-phase flows. As with most level-set methods, the proposed PCP approach is suitable for continuous, orientable, closed surfaces that do not self-intersect. This is the case for any closed surface that possesses a tubular neighborhood. For non-smooth surfaces, the convergence order of the method is bounded by the smoothness class of the surface and the degree of the regression polynomials used, whichever is smaller. If the smoothness class of the surface is smaller than the used polynomial degree, the leading error term is then determined by the derivative order in which discontinuities first appear. Future work could further optimize the numerical robustness of the method by testing different approaches to choosing particles that lie close to the unisolvent nodes of a Chebyshev-Lobatto subgrid. Further research could consider how Lagrangian formulations could incorporate interface velocity extensions <cit.> to reduce the required frequency of redistancing steps. Future applications of the PCP method can take advantage of the polynomial minter regression that is the defining feature of the method. This can, for example, include numerically solving PDEs on surfaces, where constant orthogonal extension is a popular solution approach <cit.>. There, the PCP method solves two of the main challenges: It accurately computes the CP transform using an orthogonal decomposition of the regression problem, which can straightforwardly be reused to also approximate the values of other fields at the computed closest point. Due to its computational efficiency, parallel scalability, and robustness against distortion in the particle distribution, the PCP method could therefore by key to solving PDEs on dynamically deforming surfaces. The C++ implementation of the presented particle closest point method in OpenFPM is publicly available from . § ACKNOWLEDGEMENTS We thank Dr. Michael Hecht (Center for Advanced Systems Understanding, Görlitz), Alejandra Foggia, Johannes Pahlke, and Justina Stark (all Sbalzarini group) for discussions and proofreading, and Dr. Pietro Incardona (University of Bonn) for his helpful support in the OpenFPM implementation of the presented method. This work was supported by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) in the joint project “6G-life” (ID 16KISK001K). § DECLARATION OF COMPETING INTEREST The authors have no conflict of interest to declare. § PARTICLE-MESH INTERPOLATION A popular kernel for particle-mesh interpolation is the Λ_4,4 kernel <cit.>, which conserves the first 4 moments and produces C^4 regular results: Λ_4,4(q)= 1-5/4q^2+1/4q^4-100/3q^5+455/4q^6-295/2q^7+345/4q^8-115/6q^9, if 0≤q<1, -199+5485/4q-32975/8q^2+28425/4q^3-61953/8q^4+33175/6q^5 -20685/8q^6+3055/4q^7-1035/8q^8+115/12q^9, if 1≤q<2, 5913-89235/4q+297585/8q^2-143895/4q^3+177871/8q^4-54641/6q^5 +19775/8q^6-1715/4q^7+345/8q^8-23/12q^9, if 2≤q<3, 0, else. 
It can be used to evaluate mesh node values as ϕ_ij = ∑_p ϕ_pΛ_4,4(|x_i-x_p|/h)Λ_4,4(|y_j-y_p|/h), in which i and j are the grid indices for x and y directions, respectively, and p is the index for the particles. This classic approach to particle-mesh interpolation is designed for intensive fields of conserved quantities, such as density or concentration fields <cit.>. As level-set functions generally do not obey any conservation laws, the different amounts of contributions individual mesh nodes receive need to be accounted for as soon as the particle distribution becomes irregular. This is done by renormalizing the interpolated quantities according to ϕ_ij= (∑_pΛ_4,4(|x_i-x_p|/h)Λ_4,4(|y_j-y_p|/h))^-1∑_p ϕ_pΛ_4,4(|x_i-x_p|/h)Λ_4,4(|y_j-y_p|/h). Similarly, the particle contributions can be scaled by individual volumes to yield the classic particle representation of an arbitrary field as f(𝐱)≈∑_pf_pW_ϵ(𝐱 - 𝐱_p_2)V_p , where W is a local, symmetric, normalized kernel function with a smoothing length of ϵ. The smoothing length determines how many particles contribute to a field evaluation, and it strongly influences the convergence properties of particle methods. Generally, as the inter-particle spacing h becomes smaller, the smoothing length ϵ should also become smaller, yet the ratio ϵ/h should grow to ensure convergence of the scheme <cit.>. In Eq. (<ref>), V_p is the volume associated with particle p, which can be computed as V_q=(∑_pW_ϵ(𝐱_q - 𝐱_p_2))^-1. The particle function approximation in Eq. (<ref>) of a level-set function can then also be renormalized and subsequently evaluated at mesh nodes ij: ϕ_ij=(∑_pW(𝐦_ij - 𝐱_p_2,ϵ)V_p)^-1∑_pϕ_pW(𝐦_ij - 𝐱_p_2,ϵ)V_p . In the main text, we compare this classic particle-mesh interpolation scheme with two different kernel functions popular in the SPH community. The first is the Wendland C2 kernel <cit.> W_ϵ(q)= σ_2D(1-q/2)^4(1+2q) if 0≤ q<2, 0 else, where the variable q is defined as: q=𝐱-𝐱_p_2/ϵ, and σ_2D=7/4πϵ^2 is a normalization factor ensuring that the kernel integrates to one. The second popular SPH kernel is the Gaussian W_ϵ(q)= σ_2De^-q^2 if 0≤ q≤3, 0 else, with q defined as for the Wendland kernel, and σ_2D=1/πϵ^2 the normalization constant. § SPH FORMULATION FOR MULTI-PHASE FLOW We discretize both continuum fluids with a set of in total n_p particles, initialized on a regular Cartesian grid of spacing h covering the entirety of the simulation domain. The masses m_i=M/n_p of the particles are computed by considering the total mass of the fluids M, resulting from the reference density and the occupied volume. To estimate the density of a single particle i, density summation is performed as ρ_i=m_i∑_jW_ij , where the Wendland C2 kernel from Eq. (<ref>) is used as W_ij=W_ϵ(𝐱_i-𝐱_j_2). Simultaneously, the volume of each particle V_i is computed according to Eq. (<ref>). Having computed the densities, the pressures are obtained by evaluating Eq. (<ref>) for ρ_i. At t_0=0, all velocities are set to 𝐮_i=0. The change in velocity per particle is determined by the discrete momentum equation as <cit.>: D𝐮_i/Dt= -1/m_i∑_j(V_i^2+V_j^2)ρ_iP_j+ρ_jP_i/ρ_i+ρ_j∇ W_ij+1/m_i∑_jη(V_i^2+V_j^2)𝐮_ij/r_ij∂ W/∂ r_ij + 𝐅_i^(s), with r_ij=𝐫_i-𝐫_j_2 and 𝐮_ij=𝐮_i-𝐮_j. Note that despite the absence of large reference density fractions, we include the smoothing effect of the inter-particle averaged pressure term. 
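As an illustration of the kernel-based operations used throughout this appendix, the following NumPy sketch evaluates the 2D Wendland C2 kernel, the particle volumes V_q, and the renormalized particle-function evaluation of a (non-conserved) level-set field at arbitrary query points. It is a sketch only: for brevity it uses dense O(N^2) distance computations instead of the cell or Verlet lists used in practice, and all names are ours.

```python
import numpy as np

def wendland_c2_2d(r, eps):
    """2D Wendland C2 kernel W_eps(q), q = r/eps, with normalization sigma = 7/(4*pi*eps^2)."""
    q = r / eps
    sigma = 7.0 / (4.0 * np.pi * eps ** 2)
    return np.where(q < 2.0, sigma * (1.0 - 0.5 * q) ** 4 * (1.0 + 2.0 * q), 0.0)

def particle_volumes(xp, eps):
    """V_q = (sum_p W_eps(|x_q - x_p|))^-1, evaluated here with a dense pairwise loop."""
    r = np.linalg.norm(xp[:, None, :] - xp[None, :, :], axis=-1)
    return 1.0 / wendland_c2_2d(r, eps).sum(axis=1)

def renormalized_interpolation(xq, xp, phi_p, vol_p, eps):
    """Renormalized evaluation of the level-set field phi at arbitrary query points xq."""
    r = np.linalg.norm(xq[:, None, :] - xp[None, :, :], axis=-1)
    w = wendland_c2_2d(r, eps) * vol_p[None, :]
    return (w * phi_p[None, :]).sum(axis=1) / w.sum(axis=1)
```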
To determine the volumetric surface force acting on each particle, we consider two different approaches: (1) one based on the present PCP method and (2) one based on a colorfield function and SPH operators. For estimating mean curvature and surface normals using PCP, we use a fourth-order minter basis and threshold parameters of ε=10^-12 and k_max=1000, a cutoff radius of r_c=2.8h, a sample-particle threshold ξ=1.5h, a narrow-band width of w=15h, and a skin width of 3w. To compute the interfacial surface tension, we require a smooth surface representation that distributes the surface tension effect onto the particles surrounding the interface. For this, we use the Wendland C2 kernel in 1D, W_ϵ1D(q)=5/8ϵ(1-q/2)^3(1.5q+1) if 0≤ q<2, 0 else, in which q=|ϕ|/ϵ. The smoothing length can in principle be chosen independently from the rest of the SPH operators, but we choose it to be identical for convenience. To detect particles relevant for the surface tension, we tune the skin redistancing frequency as described in the main text: For the three different resolutions considered, h=1/32, h=1/64, and h=1/128, we use f=100, f=200, and f=400, respectively. These frequencies not only ensure that Eq. (<ref>) is fulfilled, but also that particles relevant to the continuum surface dynamics are kept track of. For this, the cutoff radius of the SPH operators, 2ϵ=6h, needs to be considered instead of the extent of the regression domain, r_c+ξ=2.8h+1.5h=4.3h. The resulting skin redistancing frequencies are then determined from the maximum velocity 𝐮_max_2=3 found in the simulation. With the level-set SDF, the surface normals, and the mean curvatures, the discrete surface force on particle i is computed as 𝐅_i^(s)=-τ/ρ_iκ_i𝐧_iW_ϵ1D(ϕ_i) . In the main text, we compare this with the popular colorfield-SPH approach <cit.>. Colorfield-SPH identifies the transition region of the interface and computes normals and curvatures from a color function c assigning a unique color to each of the fluid phases. Following <cit.>, a smooth color value of a particle is obtained as a convolution, or particle function representation, of the binary indicator field, c_i=∑_jc_j^(ab)W_ijV_j, where c_j^(ab) is the binary phase indicator taking values of either 1 on one side of the interface or 0 on the other side, W_ij here is the Wendland C2 kernel, and the volumes V_j are computed according to Eq. (<ref>). The remainder of the colorfield-based geometric computing framework is as outlined in Ref. <cit.>, where non-unit surface normals 𝐧̂ are obtained as 𝐧̂_i=∑_j(c_j-c_i)∇ W_ij, whose magnitude is used as an indicator of whether a particle is part of the smoothed interface or not, based on N_i= 1 if 𝐧̂_i_2>0.01/ϵ 0 else. Subsequently, narrow-banded unit normals can be obtained as 𝐧_i=𝐧̂_i/𝐧̂_i_2 if N_i=1, 0 else. Finally, mean curvature is approximated by the SPH divergence of the unit surface normals from all “interface particles”: κ_i=(∑_jmin(N_i,N_j)W_ijV_j)^-1∑_jmin(N_i,N_j)(𝐧_j-𝐧_i)·∇ W_ijV_j , where the pre-factor accounts for the differing contributions due to particles qualifying as interface particles, or not. In Eq. (<ref>), the gradient of the smoothed color function, 𝐧̂_i, is interpreted as the product of a regularized delta function and the surface normal, yielding the discrete surface force on particle i as computed using colorfield-SPH: 𝐅_i^(s)=-τ/ρ_iκ_i𝐧̂_i .
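For comparison, the colorfield-SPH geometric computing pipeline summarized above can be sketched as follows. It is a schematic loop over precomputed neighborhoods, not the OpenFPM implementation: neighbor lists, kernel values W_ij, kernel gradients ∇W_ij, and particle volumes are assumed to be computed elsewhere, and the data layout is an assumption of this sketch.

```python
import numpy as np

def colorfield_sph_surface_force(color, vol, rho, neigh, W, gradW, eps, tau):
    """Colorfield-SPH surface-tension force following the equations above.

    color : (N,)  binary phase indicator c^(ab) per particle (0 or 1)
    vol   : (N,)  particle volumes V_j
    rho   : (N,)  particle densities
    neigh : list of index arrays, neigh[i] = neighbors j of particle i
    W     : list of arrays, W[i][k] = W_ij for j = neigh[i][k]
    gradW : list of arrays, gradW[i][k] = kernel gradient (dim,) for j = neigh[i][k]
    """
    n_part, dim = len(color), gradW[0].shape[1]

    # Smooth color value c_i = sum_j c_j W_ij V_j.
    c = np.array([np.sum(color[neigh[i]] * W[i] * vol[neigh[i]]) for i in range(n_part)])

    # Non-unit normals n_hat_i = sum_j (c_j - c_i) grad W_ij.
    n_hat = np.zeros((n_part, dim))
    for i in range(n_part):
        n_hat[i] = np.sum((c[neigh[i]] - c[i])[:, None] * gradW[i], axis=0)

    # Interface indicator and narrow-banded unit normals.
    N = (np.linalg.norm(n_hat, axis=1) > 0.01 / eps).astype(float)
    norms = np.linalg.norm(n_hat, axis=1, keepdims=True) + 1e-30
    n_unit = np.where(N[:, None] > 0, n_hat / norms, 0.0)

    # Mean curvature from the SPH divergence of the unit normals over interface particles.
    kappa = np.zeros(n_part)
    for i in range(n_part):
        j = neigh[i]
        m = np.minimum(N[i], N[j])
        denom = np.sum(m * W[i] * vol[j])
        if denom > 0:
            kappa[i] = np.sum(m * np.einsum("kd,kd->k", n_unit[j] - n_unit[i], gradW[i]) * vol[j]) / denom

    # Discrete surface force F_i = -tau/rho_i * kappa_i * n_hat_i.
    return -(tau / rho)[:, None] * kappa[:, None] * n_hat
```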
Regardless of the approach chosen to determine the interfacial forces, PCP or colorfield-SPH, we integrate the positions of the particles in time with a second-order predictor-corrector scheme as in Ref. <cit.> and apply the geometric computations at every predictor and every corrector step. We determine the time-step size Δ t such that it fulfills the CFL-like conditions given in Refs. <cit.>: Δ t ≤ min(0.25ϵ/c+𝐮_max_2, 0.125ρϵ^2/η, 0.25√(ρ_0ϵ^3/2πτ)), where we again use 𝐮_max_2=3. § APPENDIX FIGURES We provide additional figures as referred to and discussed in the main text. 10 adami2010new S. Adami, X. Hu, and N. A. Adams, A new surface-tension formulation for multi-phase SPH using a reproducing divergence approximation, Journal of Computational Physics, 229 (2010), pp. 5011–5021. paraview J. Ahrens, B. Geveci, and C. Law, Visualization Handbook, Elsevier Inc., Burlington, MA, USA, 2005, ch. ParaView: An End-User Tool for Large Data Visualization, pp. 717–731. bergdorf2006lagrangian M. Bergdorf and P. Koumoutsakos, A Lagrangian particle-wavelet method, Multiscale Modeling & Simulation, 5 (2006), pp. 980–995. bergdorf2010lagrangian M. Bergdorf, I. F. Sbalzarini, and P. Koumoutsakos, A Lagrangian particle method for reaction–diffusion systems on deforming surfaces, Journal of Mathematical Biology, 61 (2010), pp. 649–663. bergdorf2007multiresolution M. E. Bergdorf, Multiresolution particle methods for the simulation of growth and flow, PhD thesis, ETH Zurich, 2007. brackbill1992continuum J. U. Brackbill, D. B. Kothe, and C. Zemach, A continuum method for modeling surface tension, Journal of Computational Physics, 100 (1992), pp. 335–354. chen2017three Q. Chen, G. Guillemot, C.-A. Gandin, and M. Bellet, Three-dimensional finite element thermomechanical modeling of additive manufacturing by selective laser melting for ceramic materials, Additive Manufacturing, 16 (2017), pp. 124–137. cole1948underwater R. H. Cole and R. Weller, Underwater explosions, Physics Today, 1 (1948), p. 35. cottet2014high G.-H. Cottet, J.-M. Etancelin, F. Pérignon, and C. Picard, High order semi-Lagrangian particle methods for transport equations: numerical analysis and implementation issues, ESAIM: Mathematical Modelling and Numerical Analysis, 48 (2014), pp. 1029–1060. cottet2006level G.-H. Cottet and E. Maitre, A level set method for fluid-structure interactions with immersed surfaces, Mathematical Models and Methods in Applied Sciences, 16 (2006), pp. 415–438. eberly2011distance D. Eberly, Distance from a point to an ellipse, an ellipsoid, or a hyperellipsoid, Geometric Tools, LLC, (2011). engquist2005discretization B. Engquist, A.-K. Tornberg, and R. Tsai, Discretization of dirac delta functions in level set methods, Journal of Computational Physics, 207 (2005), pp. 28–51. enright2002use D. Enright, Use of the particle level set method for enhanced resolution of free surface flows, Stanford University, 2002. enright2002hybrid D. Enright, R. Fedkiw, J. Ferziger, and I. Mitchell, A hybrid particle level set method for improved interface capturing, Journal of Computational Physics, 183 (2002), pp. 83–116. estellers2012efficient V. Estellers, D. Zosso, R. Lai, S. Osher, J.-P. Thiran, and X. Bresson, Efficient algorithm for level set method preserving distance function, IEEE Transactions on Image Processing, 21 (2012), pp. 4722–4734. eigenweb G. Guennebaud, B. Jacob, et al., Eigen v3, 2010, <http://eigen.tuxfamily.org>. hecht2020multivariate M. Hecht, K. Gonciarz, J. Michelfeit, V. Sivkin, and I. F. 
Sbalzarini, Multivariate interpolation in unisolvent nodes–lifting the curse of dimensionality, arXiv preprint arXiv:2010.10824, (2020). henneaux2021high D. Henneaux, P. Schrooyen, L. Arbaoui, P. Chatelain, and T. E. Magin, A High-Order Level-Set Method Coupled with an Extended Discontinuous Galerkin Method for Simulating Moving Interface Problems, in AIAA AVIATION 2021 FORUM, 2021, p. 2742. hieber2005lagrangian S. E. Hieber and P. Koumoutsakos, A Lagrangian particle level set method, Journal of Computational Physics, 210 (2005), pp. 342–367. hieber2004remeshed S. E. Hieber, J. H. Walther, and P. Koumoutsakos, Remeshed smoothed particle hydrodynamics simulation of the mechanical behavior of human organs, Technology and Health Care, 12 (2004), pp. 305–314. incardona2019openfpm P. Incardona, A. Leo, Y. Zaluzhnyi, R. Ramaswamy, and I. F. Sbalzarini, OpenFPM: A scalable open framework for particle and particle-mesh codes on parallel computers, Computer Physics Communications, 241 (2019), pp. 155–177. kim2001AnON S. Kim, An O(N) level set method for Eikonal equations, SIAM Journal on Scientific Computing, 22 (2001), pp. 2178–2193. li2005level C. Li, C. Xu, C. Gui, and M. D. Fox, Level set evolution without re-initialization: a new variational formulation, in 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), vol. 1, IEEE, 2005, pp. 430–436. li2010distance C. Li, C. Xu, C. Gui, and M. D. Fox, Distance regularized level set evolution and its application to image segmentation, IEEE transactions on image processing, 19 (2010), pp. 3243–3254. li2008level X. Li, H. Huang, and P. Meakin, Level set simulation of coupled advection-diffusion and pore structure evolution due to mineral precipitation in porous media, Water Resources Research, 44 (2008). liu1994weighted X.-D. Liu, S. Osher, and T. Chan, Weighted essentially non-oscillatory schemes, Journal of Computational Physics, 115 (1994), pp. 200–212. marrone2010fast S. Marrone, A. Colagrossi, D. Le Touzé, and G. Graziani, Fast free-surface detection and level-set function definition in SPH solvers, Journal of Computational Physics, 229 (2010), pp. 3652–3663. marz2012calculus T. Marz and C. B. Macdonald, Calculus on surfaces with general closest point functions, SIAM Journal on Numerical Analysis, 50 (2012), pp. 3303–3328. mauch2000fast S. Mauch, A fast algorithm for computing the closest point and distance transform, (2000). monaghan2005smoothed J. J. Monaghan, Smoothed particle hydrodynamics, Reports on Progress in Physics, 68 (2005), p. 1703. morris2000simulating J. P. Morris, Simulating surface tension with smoothed particle hydrodynamics, International Journal for Numerical Methods in Fluids, 33 (2000), pp. 333–353. ngo2022high L. C. Ngo, Q.-N. Dinh, and H. G. Choi, High-order level set reinitialization for multiphase flow simulations based on unstructured grids, Computers & Mathematics with Applications, 120 (2022), pp. 60–77. osher1988 S. Osher and J. A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, Journal of Computational Physics, 79 (1988), pp. 12–49. quentrec1973new B. Quentrec and C. Brot, New method for searching for neighbors in molecular dynamics computations, Journal of Computational Physics, 13 (1973), pp. 430–432. raviart1985analysis P.-A. 
Raviart, An analysis of particle methods, in Numerical Methods in Fluid Dynamics: Lectures given at the 3rd 1983 Session of the Centro Internationale Matematico Estivo (CIME) held at Como, Italy, July 7–15, 1983, Springer, 1985, pp. 243–324. reboux2012self S. Reboux, B. Schrader, and I. F. Sbalzarini, A self-organizing Lagrangian particle method for adaptive-resolution advection–diffusion simulations, Journal of Computational Physics, 231 (2012), pp. 3623–3646. ruuth2008simple S. J. Ruuth and B. Merriman, A simple embedding method for solving partial differential equations on surfaces, Journal of Computational Physics, 227 (2008), pp. 1943–1961. saye2014 R. Saye, High-order methods for computing distances to implicitly defined surfaces, Communications in Applied Mathematics and Computational Science, 9 (2014), pp. 107 – 141. sbalzarini2006simulations I. F. Sbalzarini, A. Hayer, A. Helenius, and P. Koumoutsakos, Simulations of (an-) isotropic diffusion on curved biological surfaces, Biophysical journal, 90 (2006), pp. 878–885. sethian1996fast J. A. Sethian, A fast marching level set method for monotonically advancing fronts, Proceedings of the National Academy of Sciences, 93 (1996), pp. 1591–1595. sethian2003level J. A. Sethian and P. Smereka, Level set methods for fluid interfaces, Annual review of fluid mechanics, 35 (2003), pp. 341–372. sussman1994level M. Sussman, P. Smereka, and S. Osher, A level set approach for computing solutions to incompressible two-phase flow, Journal of Computational Physics, 114 (1994), pp. 146–159. tsitsiklis1995efficient J. N. Tsitsiklis, Efficient algorithms for globally optimal trajectories, IEEE Transactions on Automatic Control, 40 (1995), pp. 1528–1538. minter S. K. T. Veettil, minter, <https://git.mpi-cbg.de/mosaic/software/math/minter>. veettil2022multivariate S. K. T. Veettil, Y. Zheng, U. H. Acosta, D. Wicaksono, and M. Hecht, Multivariate polynomial regression of Euclidean degree extends the stability for fast approximations of Trefethen functions, 2022, <https://arxiv.org/abs/2212.11706>. verlet1967computer L. Verlet, Computer "experiments" on classical fluids. I. Thermodynamical properties of Lennard-Jones molecules, Physical Review, 159 (1967), p. 98. wendland1995piecewise H. Wendland, Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree, Advances in Computational Mathematics, 4 (1995), pp. 389–396. williams1998accuracy M. Williams, D. Kothe, and E. Puckett, Accuracy and convergence of continuum surface tension models, Fluid Dynamics at Interfaces, (1998), pp. 294–305. zago2018semi V. Zago, G. Bilotta, A. Hérault, R. A. Dalrymple, L. Fortuna, A. Cappello, G. Ganci, and C. Del Negro, Semi-implicit 3D SPH on GPU for lava flows, Journal of Computational Physics, 375 (2018), pp. 854–870. zhao2005fast H. Zhao, A fast sweeping method for Eikonal equations, Mathematics of computation, 74 (2005), pp. 603–627. zheng2005nonlinear X. Zheng, S. Wise, and V. Cristini, Nonlinear simulation of tumor necrosis, neo-vascularization and tissue invasion via an adaptive finite-element/level-set method, Bulletin of Mathematical Biology, 67 (2005), pp. 211–259.
http://arxiv.org/abs/2306.08114v1
20230613200723
An Interval Arithmetic Approach to Input-Output Reachability
[ "Ivan Perez Avellaneda", "Luis A. Duffaut Espinosa" ]
eess.SY
[ "eess.SY", "cs.SY" ]
CipherSniffer: Classifying Cipher Types Brendan Artley, Greg Mehdiyev Simon Fraser University Burnaby, BC, Canada July 31, 2023 ======================================================================================================= § INTRODUCTION
http://arxiv.org/abs/2306.01491v1
20230602123414
Learning Local to Global Feature Aggregation for Speech Emotion Recognition
[ "Cheng Lu", "Hailun Lian", "Wenming Zheng", "Yuan Zong", "Yan Zhao", "Sunan Li" ]
cs.SD
[ "cs.SD" ]
Transformer models have recently emerged in speech emotion recognition (SER). However, their equal patch division not only damages frequency information but also ignores local emotion correlations across frames, which are key cues for representing emotion. To handle this issue, we propose Local to Global Feature Aggregation learning (LGFA) for SER, which aggregates long-term emotion correlations at different scales, both inside frames and inside segments, with the entire frequency information retained, to enhance the emotion discrimination of utterance-level speech features. For this purpose, we nest a Frame Transformer inside a Segment Transformer. Firstly, the Frame Transformer is designed to excavate local emotion correlations between frames for frame embeddings. Then, the frame embeddings and their corresponding segment features are aggregated as complements at different levels and fed into the Segment Transformer for learning utterance-level global emotion features. Experimental results show that the performance of LGFA is superior to state-of-the-art methods. Index Terms: speech emotion recognition, Transformer, time-frequency feature, frame-level, segment-level § INTRODUCTION Speech emotion recognition (SER) is a significant task in affective computing and has attracted wide attention in recent years <cit.>, <cit.>. The key to addressing SER is how to disentangle the emotion information hidden in speech from the confounding effects of diverse acoustic factors <cit.>, <cit.>, <cit.>, e. g., background noise, language, and speaker identity.
The emotional information in speech is discretely distributed across frames or segments <cit.>, <cit.>, because some frames or segments carry no emotional context, i. e., they are empty frames/segments. In other words, the emotional information is concentrated in a few key frames or segments. Therefore, a practical approach is to capture long-range emotion dependencies from these key frames/segments <cit.>, <cit.>, <cit.>, <cit.>. To this end, Recurrent Neural Networks (RNNs) <cit.>, <cit.> are widely adopted for learning utterance-level emotion features from frame-level or segment-level features. Although previous works based on RNNs, e. g., LSTM and Bi-LSTM, have achieved great success in SER, they still encounter some issues <cit.>, e. g., the high time and space complexity of computing the recurrent cells and the restriction to modeling sequential long-term dependencies (from forward to backward, or the reverse). With the emergence of the Transformer <cit.>, these issues have been handled effectively. In the Transformer, Multihead Self-Attention can describe the complete relationships among all speech frames/segments. Also, the time-space complexity can be effectively reduced by parallel matrix computation. Exploiting these advantages, Speech Transformer models <cit.>, <cit.> have been developed from the Vision Transformer (ViT) <cit.>. However, the Speech Transformer roughly divides the speech spectrogram into equal "chunks" <cit.> (i. e., patches in ViT), which loses the local inter-frame relationships reflecting the fine-grained emotion distribution and corrupts the frequency-domain information. Since frame-level and segment-level features contain emotional information at different scales <cit.>, <cit.>, e.g., frames reflect phoneme-level associations and segments correspond to word-level or phrase-level correlations, they should be aggregated complementarily to learn more emotion-discriminative speech features. Likewise, ViT also ignores the local structure information in image patches for computer vision. To handle this similar issue, Han et al. <cit.> proposed a Transformer in Transformer (TNT) to simultaneously learn inter-patch and intra-patch relationships. Inspired by TNT <cit.>, we propose a novel Local to Global Feature Aggregation learning (LGFA) method for SER. The LGFA nests a Frame Transformer inside a Segment Transformer to aggregate different-scale emotion dependencies for the speech emotion representation. The whole learning process of LGFA proceeds from the frame level to the segment level to the utterance level. Compared with other Speech Transformer-based methods, our LGFA is a novel Transformer-based model for SER, and its advantages are threefold: * it aims to capture long-range emotion-related dependencies at different scales both inside frames and inside segments, instead of over the simple image patches adopted in the Transformer. * it takes a frame and a segment as the input of the Frame Transformer and the Segment Transformer, respectively, instead of equally divided image patches. In this case, the frames and segments used in LGFA contain the entire frequency-domain information, such that the frequency features are not damaged by the speech chunk division. * it can also be extended from the time domain to the frequency domain and the time-frequency domain by different patch partition strategies. This extension can make full use of the time-frequency characteristics of speech signals to represent emotion information.
§ PROPOSED METHOD Considering the inter-frame time property of speech, LGFA feeds a Frame Transformer with frame features, then integrates frame embeddings and segment features as the segment-level aggregation features. This point is the main difference from TNT. Further, these aggregation features are regarded as the input of a Segment Transformer to learn higher-level emotion correlations across segments. Consequently, we can obtain the global utterance-level features of speech emotions through joint training of the Frame and Segment Transformers. The overview of LGFA is shown in Figure. <ref>, in which the Frame Transformer takes the frame-level feature of speech as the input. To this end, we firstly process the frame-level feature of speech. Given the log-Mel-spectrogram feature x∈ℝ^F × T × C of each emotional speech, the i^th frame x_i∈ℝ^F × C of the spectrogram x={x_i}_i=1^T is firstly encoded by a linear projection layer FC(·) as the i^th frame embedding x'_i ∈ℝ^1 × d_f, denoted as x'_i = FC(x_i), where F, T, and C represent the numbers of Mel-scaled frequency, time frame and channel, respectively. d_f is the dimension of frame embeddings. Then, to enhance inductive bias of Frame Transformer <cit.>, we add a learnable position encoding e^f_i ∈ℝ^1 × d_f into x'_i as the input of Frame Transformer, which can be represented as x'_i ←x'_i + e^f_i, where e^f={e^f_i}_i=1^T ∈ℝ^T × d_f. In Frame Transformer, the sequence of speech frame embeddings x'={x'_i}_i=1^T is utilized to characterize local inter-frame correlations of emotions. Then, the frame-level encoding x̂ can be obtained by the frame embedding sequence x' through the following operations: x”^,ℓ = MSA(LN(x'^,ℓ-1)) + x'^,ℓ-1, x̂^ℓ = MLP(LN(x”^,ℓ)) + x”^,ℓ, where ℓ∈ [1,...,L] is the index of the stacked block, L is the number of blocks in Frame Transformer, and x̂^ℓ∈ℝ^T × d_f is encoded by the ℓ^th block. Besides, in Equation (<ref>) and (<ref>), MSA(·), MLP(·) and LN(·) are the operations of Multihead Self-Attention (MSA), MultiLayer Perceptron (MLP), and Layer Normalization (LN), respectively, according to <cit.>, <cit.>. Notably, x'^,0=[x'_1, x'_2,...,x'_T] ∈ℝ^T × d_f in Equation (<ref>) is the initial input of the frame embedding sequence x'. To aggregate the emotion-related dependencies at different scales, we further design a Segment Transformer to learn frame-level and segment-level correlations of speech emotion. Therefore, the input of Segment Transformer is the combination of the frame-level encoding x̂ and segment-level embedding s. Specifically, the log-Mel-spectrogram feature x can be divided into a segment set, where each segment s_j∈ℝ^k × F × C consists of k frames, represented as x={s_j}_j=1^T/k. Similar to the Frame Transformer, each segment s_j is firstly transformed to the segment embedding s'_j ∈ℝ^1 × d_s by a linear projection layer FC(·) in Segment Transformer. Besides, the k^th frame-level encoding x̂^s_j ∈ℝ^k × d_f corresponding to the j^th segment are also used to aggregate into the segment embeddings after another linear projection FC(·), where FC(·) is to ensure dimension match for the addition of frame encoding and segment embedding. Then, the combination embedding s”_j ∈ℝ^1 × d_s of frame-level encoding and segment embeddings is generated by s^'_j = FC(Vec(s_j)), x̂^s_j = [x̂_i+1,x̂_i+2,..,x̂_i+k], s”_j = s'_j + FC(Vec(x̂^s_j)), where Vec(·) is a vectorization operation to flatten the dimension of x̂_j^s or s_j to ℝ^1 × (k × d_f). 
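To make the frame-to-segment aggregation concrete, the following PyTorch sketch implements the pipeline described so far: frame projection, frame position encoding, stacked Frame Transformer blocks, and the aggregation s''_j = s'_j + FC(Vec(x̂^s_j)). It is a minimal illustration rather than the authors' implementation; the attention-head count and MLP expansion ratio inside the blocks are assumptions, and the class token, segment position encodings, and the Segment Transformer itself (described next) are omitted.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Standard pre-LN Transformer block (MSA + MLP), as used in both Transformers."""
    def __init__(self, dim, heads=4, mlp_ratio=4):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

class FrameToSegmentAggregation(nn.Module):
    """Frame embeddings -> Frame Transformer -> aggregation into segment embeddings."""
    def __init__(self, F=64, T=128, k=8, d_f=16, d_s=256, depth=7):
        super().__init__()
        self.k = k
        self.frame_proj = nn.Linear(F, d_f)                    # x'_i = FC(x_i)
        self.frame_pos = nn.Parameter(torch.zeros(1, T, d_f))  # learnable e^f
        self.frame_blocks = nn.Sequential(*[Block(d_f) for _ in range(depth)])
        self.seg_proj = nn.Linear(F * k, d_s)                  # s'_j = FC(Vec(s_j))
        self.agg_proj = nn.Linear(d_f * k, d_s)                # FC(Vec(x_hat^s_j))

    def forward(self, x):                                      # x: (B, T, F) log-Mel frames
        B, T, F = x.shape
        frames = self.frame_blocks(self.frame_proj(x) + self.frame_pos)       # (B, T, d_f)
        seg = self.seg_proj(x.reshape(B, T // self.k, self.k * F))            # (B, T/k, d_s)
        agg = self.agg_proj(frames.reshape(B, T // self.k, self.k * frames.shape[-1]))
        return seg + agg                         # s''_j, the input to the Segment Transformer
```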
Then, we also add a learnable class token s_cls into input sequence for the final emotion classification. Eventually, the segment-level embedding s”∈ℝ^(T/k+1) × d_s can be written to s”=[s_cls, s”_1, s”_2, ..., s”_T/k]. Similar to Frame Transformer, each segment-level embedding with frame-level aggregation is added the corresponding positions between segments to preserve time-sequence property of inductive bias on speech by a learnable position encoding e^s_j∈ℝ^1 × d_s, which can be denoted as s”_j ←s”_j + e^s_j, where e^s={e^s_j}_j=1^T/k∈ℝ^(T/k+1) × d_s. The Segment Transformer also adopts L stacked standard transformer blocks to encode the aggregation embedding for the utterance-level representation of speech emotion, where the ℓ^th block transformations are formalized to s̅^ℓ = MSA(LN(s”^,ℓ-1)) + s”^,ℓ-1, ŝ^ℓ = MLP(LN(s̅^ℓ)) + s̅^ℓ, where s”^,0 is the initial segment embedding sequence in Equation (<ref>). With all the above operations, our proposed LGFA firstly models local emotion correlations within frames by Frame Transformer I(·), then aggregates the frame-level encoding ŝ and segment embeddings s” to capture global longer-dependencies for the utterance-level emotion representation ŝ through Segment Transformer O(·), which can be denoted as ŝ = O(s”;I(x')), Furthermore, the class token ŝ_cls can be generated from ŝ to input the classifier for speech emotion prediction, represented as y_pred=C(ŝ_cls), where y_pred, C, and ŝ_cls are the predicted labels of emotions, classifier, and s_cls generated by LGFA, respectively. Note that the segment class token s_cls, frame position encoding e^f and segment position encoding e^s are all initialized as zeros in the letter. § EXPERIMENTS In the section, we will introduce the details of our implemented experiments, then discuss the comparison results of the proposed LGFA with state-of-the-art methods. Database: To evaluate the performance of our proposed LGFA, two public emotional speech databases are selected to implement the experiments, i. e., the Interactive Emotional Dyadic Motion Capture database (IEMOCAP) <cit.> and the China Emotional Database (CASIA) <cit.>. In detail, IEMOCAP is an English multimodal database containing video, speech, and text scripts, which is recorded in 5 sessions (1 male and 1 female in each session) by inducing diverse emotions (angry, happy, sad, neutral, frustrated, excited, fearful, surprised, disgusted, and others) of 10 actors under improvised or scripted scenarios. CASIA is a Chinese Emotional Speech Database with 9 600 recording files under 6 emotions (angry, fear, happy, neutral, sad, and surprise). It is collected by inducing 4 actors (2 males and 2 females) to express 6 emotions under several fixed text contents. Note that we adopt 2 280 improvised samples and 4490 scripted+improvised samples with 4 emotions (angry, happy, sad, and neutral) in IEMOCAP, and 1 200 public released samples with 6 emotions in CASIA for experiments. Experimental Settings: In our experiments, all speech sentences are re-sampled to 16 kHz for Short-Time Fourier Transform (STFT) using 20 ms Hamming window size with 50% frame overlapping. Then, they are divided into segments with 128 frames as experimental samples and pad 0 for the segment less than 128 frames. Finally, we obtain the log-Mel-spectrogram with the dimension of ℝ^64 × 128 × 1 for the input of our LGFA, where the number of Mel-filter is set as 64. 
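A minimal sketch of this feature extraction is given below, assuming librosa is used. The paper specifies the 16 kHz sampling rate, the 20 ms Hamming window with 50% overlap, 64 Mel filters, and 128-frame segments; the FFT size of 512 and the decibel scaling are assumptions of this sketch.

```python
import numpy as np
import librosa

def log_mel_segments(path, sr=16000, win_ms=20, n_mels=64, seg_frames=128):
    """Log-Mel-spectrogram segments of shape (n_mels, seg_frames, 1)."""
    y, _ = librosa.load(path, sr=sr)                      # resample to 16 kHz
    win = int(sr * win_ms / 1000)                         # 20 ms -> 320 samples
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, win_length=win,
                                         hop_length=win // 2, window="hamming",
                                         n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                    # (n_mels, T_total)
    segments = []
    for start in range(0, log_mel.shape[1], seg_frames):
        seg = log_mel[:, start:start + seg_frames]
        if seg.shape[1] < seg_frames:                     # zero-pad the last segment
            seg = np.pad(seg, ((0, 0), (0, seg_frames - seg.shape[1])))
        segments.append(seg[..., None])                   # add channel dim -> (64, 128, 1)
    return np.stack(segments)
```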
For the network of LGFA, the input sizes of Frame Transformer and Segment Transformer are assigned as (64,128,1) and (64,8,1). The number of stacked blocks L is 7. Furthermore, the projection dimensions and the head number of the Frame Transformer are set as 16 and 4, and they are assigned 256 and 4 in the Segment Transformer. The LGFA is implemented by PyTorch with NVIDIA A10 GPUs. And it is optimized by the AdamW Optimizer with a learning rate of 0.0001 and trained from scratch with a batch size of 64. In addition, the Leave-One-Subject-Out (LOSO), i. e., k-fold cross-validation protocol (CV), is adopted for a fair comparison according to <cit.>, <cit.>, where k is the speaker number of dataset. Therefore, the speaker rate of training and testing data in IEMOCAP and CASIA are 9:1 and 3:1, respectively. Furthermore, since the IEMOCAP are class-imbalanced, the weighted average recall (WAR) and the unweighted average recall (UAR) <cit.>, <cit.> are used to effectively evaluate the performance of the proposed method, where WAR is standard recognition accuracy while UAR is the class-wise accuracy. Results and Analysis: We compare the performance of our proposed LGFA with several state-of-the-art methods on IEMOCAP, i. e., CNN+LSTM Model <cit.>, DNN-HMM based model (DNN-HMM_SGMM-Ali.) <cit.>, CNN model with spectrogram (model-2A(spectrogram)) <cit.>, fusion model with different acoustic features (Model-3 (fusion) and Model-1 (dow.+ens.)) <cit.>. The above methods are all implemented on the improvised data (2280 samples). To further demonstrate the performance of LGFA, we also compare the LGFA with other methods (i. e., Bi-LSTM and Greedy+Dro.+Att.+MLP) <cit.> on the scripted+improvised data (4490 samples). Moreover, we also choose other comparison methods on CASIA, i. e., LLDs with dimension reduction (LLD+DR) <cit.>, DNNs with the extreme learning machine (DNN+ELM) <cit.>, weighted spectral feature learning model (HuWSF) <cit.>, and DCNN with discriminant temporal pyramid matching (DTPM) <cit.>. As homologous methods to LGFA, ViT <cit.> and TNT <cit.> were also used as comparasion methods. Note that the results of DTPM, ViT, and TNT are obtained through our own implementations with the released codes[https://github.com/tzaiyang/SpeechEmoRec]^,[https://github.com/lucidrains/vit-pytorch]^,[https://github.com/huawei-noah/CV-Backbones/tree/master/tnt_pytorch]. In addition, to evaluate the experimental performance more comprehensively, these selected comparison methods are based on two commonly used experimental protocols on IEMOCAP, i. e., 10-fold LOSO based on speakers and 5-fold LOSO based on sessions. For example, Bi-LSMT, Greedy+Dro.+Att.+MLP, CNN+LSTM, DNN-HMM_SGMM-Ali., ViT, TNT and our proposed LGFA are all based 10-fold CV, other methods are based on 5-fold CV. The experimental results with WAR and UAR on IEMOCAP are shown in Table <ref>, where ViT and TNT are implemented by the spectrogram size of 128×128 and the chunk size of 16×16 according to <cit.>, <cit.>. From these results, it is obvious that the proposed LGFA achieves the competitive performance on both WAR and UAR. Specifically, based on the scripted+improvised data, our LGFA improves the accuracies (6.25% on WAR and 7.82% on UAR) than comparison methods. Based on the improvised data, LGFA is superior to RNN-based methods (i. e., CNN+LSTM), demonstrating the advantage of the Transformer-based methods in SER. 
Further, its results also outperform those of ViT and TNT, which reveals that LGFA effectively captures the long-range emotion dependencies inside frames and segments for a better speech representation and is more suitable for the task of SER than ViT and TNT. Although our LGFA achieves the best performance, the UAR results are lower than the WAR results for the comparison methods because of the class imbalance in IEMOCAP. The results on CASIA, illustrated in Table <ref>, also reveal the superiority of our LGFA (improving by 3.17% on WAR and UAR). It is better than traditional methods (i. e., LLD+DR and HuWSF) and DNN-based approaches (i. e., DNN_ELM and DTPM). Similar to the results on IEMOCAP, our proposed LGFA again proves its superiority over ViT and TNT for SER. Since CASIA is class-balanced, the results of WAR are equal to those of UAR. Furthermore, to explore the effective components of LGFA, we conduct additional experiments analyzing different architectures of our LGFA. Figure <ref> shows the results of the ablation study, where ViT, the Frame Transformer, and the Segment Transformer are implemented with square chunks of size 16×16, frame chunks of size 64×1, and segment chunks of size 64×8, respectively. The ablation results in Figure <ref> indicate that LGFA is superior to the other architectures in speech emotion representation. Namely, our designed frame and segment aggregation learning is more suitable for SER than current Speech Transformers. Furthermore, the Segment Transformer outperforms the Frame Transformer, indicating that larger chunks promote the extraction of speech emotion features in the Transformer. Discussion on the extension of LGFA: In LGFA, to preserve the completeness of the frequency domain in the spectrogram, we divide the spectrogram feature into chunks only along the time domain. To further explore the effect of different chunk division strategies, we extend the chunk division of the proposed LGFA (i. e., LGFA_T in Table <ref>) from the time domain to the frequency and time-frequency domains (i. e., LGFA_F and LGFA_TF in Table <ref>). Compared with LGFA_T, LGFA_F takes each frequency band as a frame and each frequency band group as a segment to learn the sentence-level emotion feature from the frequency domain. Thus, we obtain the frequency-wise class token ŝ_cls^fre of LGFA_F for emotion prediction, represented as y_pred=C(ŝ_cls^fre). Further, we also combine the chunk division methods of the frequency and time domains complementarily to generate the fusion class token of LGFA_TF, ŝ_cls^fu=cat(ŝ_cls,ŝ_cls^fre), which is fed to the emotion classifier as y_pred=C(ŝ_cls^fu), where cat(·) is the concatenation operation on the feature dimension. The experimental results of the different chunk division strategies are shown in Table <ref>. From them, we observe that LGFA_T and LGFA_TF outperform LGFA_F, which may be due to the fact that speech emotion is closely related to the context within frames or segments, while in the frequency domain not all emotions show obvious energy activations across frequency bands. Furthermore, LGFA_TF outperforms LGFA_T on CASIA, while performing worse on IEMOCAP. The reason may be that chunk division in the frequency domain not only complements the time-domain chunk division but may also introduce noise caused by the uncertain frequency-domain correlations under emotions. The recording environment of CASIA contains less noise, while IEMOCAP is recorded in an open dialogue environment.
Thus, the noise will affect the frequency-domain correlations and impair the performance of the time-frequency fusion model. In other words, the frequency information should be screened before it can serve as a useful complement. § CONCLUSIONS We propose a novel Local to Global Feature Aggregation (LGFA) method for SER. LGFA integrates a Frame Transformer into a Segment Transformer to aggregate local emotion correlations at different scales, both within frames and within segments, for the global utterance-level representation of emotional speech. Through the joint learning of the two Transformers, we obtain discriminative emotion features that represent speech emotion from the frame level to the segment level to the sentence level. Extensive experimental results on IEMOCAP and CASIA demonstrate the superiority of our proposed LGFA. In future work, we will explore the different chunk division strategies of LGFA more deeply for better SER performance. § ACKNOWLEDGEMENTS This work was supported in part by NSFC under Grant U2003207, in part by National Key R&D Project under Grant 2022YFC2405600, in part by Jiangsu Frontier Technology Basic Research Project under Grant BK20192004, and in part by Zhishan Young Scholarship of Southeast University.
http://arxiv.org/abs/2306.03076v1
20230605175244
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware
[ "Lakshmi Nair", "Darius Bunandar" ]
cs.LG
[ "cs.LG", "cs.AR" ]
Existing methods to recover model accuracy on analog-digital hardware in the presence of quantization and analog noise include noise-injection training. However, it can be slow in practice, incurring high computational costs, even when starting from pretrained models. We introduce the Sensitivity-Aware Finetuning (SAFT) approach that identifies noise-sensitive layers in a model and uses this information to freeze specific layers for noise-injection training. Our results show that SAFT achieves comparable accuracy to noise-injection training and is 2× to 8× faster. § INTRODUCTION Recent advances in analog-digital hardware are motivated by improving energy and speed efficiency for deep learning applications. However, such devices are often susceptible to the effects of analog noise and reduced precision (quantization), which impact the final model accuracy. One of the commonly used approaches for tackling this issue is noise-injection training. Here, the model is subjected to the perturbations caused by quantization and/or analog noise, by injecting some representative noise into the model's layers during training, to recover accuracy <cit.>. Prior work has shown that loss of precision due to quantization can also be treated as “noise” and that models can be made resilient to this loss by retraining with noise injection, where the injected noise is proportional to the precision loss <cit.>. However, noise-injection training can incur significant training time, even when starting from pretrained models. The speed of training can potentially be improved by training only the subset of layers that are highly sensitive to noise, while freezing (i.e., disabling weight updates for) the rest. As shown in Figure <ref> for ResNet50, some weights change substantially during noise-injection training, while others hardly change and could potentially be frozen (conceptually similar to transfer learning <cit.>). While prior works have looked at similar methods for speeding up training, they either focus on BERT-like models <cit.> or rely on domain knowledge to identify noise-sensitive layers <cit.>. Motivated by these observations, we seek to answer the question: “Starting from a pretrained model, can we identify which layers are the most sensitive to noise, and retrain just those?” Prior work in quantization has used KL-divergence to identify layers that are sensitive to quantization <cit.>. We present an alternative metric for measuring layer sensitivity to noise, computing the standard deviation of the output differences between the noisy/quantized model and the noise-free/unquantized model at each layer. We then introduce the Sensitivity-Aware Finetuning (SAFT) approach based on noise-injection training, which selects specific layers for training based on the layer sensitivity analysis. § SENSITIVITY-AWARE FINETUNING The motivation behind SAFT is the observation that after noise-injection training, the parameters of a model change significantly only for specific layers; e.g., Figure <ref> shows this for ResNet50. Keeping such layers noise-free in the model results in larger accuracy improvements. This possibly indicates that the layers whose parameters change the most have higher noise sensitivity. Hence, we seek to retrain only the most noise-sensitive layers to validate this hypothesis.
Our approach takes a pretrained model M, its noisy version N, and a sample batch of inputs X from the training data. Note that N refers to the model used during standard noise-injection training <cit.>, where we inject noise into the weights during the forward pass to perturb the outputs. The input data is first passed through M, and the inputs and outputs at every layer are stored. Then, the inputs at every layer of the original model M are passed through the corresponding layers of N (See Appendix Algorithm 1). The layer outputs for N are also saved, and the standard deviation[Noise mean is typically zero based on hardware models <cit.>] of the differences between the outputs of M and N are computed per-layer. The process flow is shown in <ref>. Once the standard deviations are computed, SAFT involves: a) Identifying the top k layers with the highest standard deviations; b) Selectively training only the top k layers of N while freezing the parameters of the remaining layers. We note that k is an additional hyperparameter. The value of k can be determined based on visualizing the standard deviation values in a plot, or it can be treated as the other hyperparameters and set using tools such as Tune <cit.>. Another consideration here is the batch size used for computing the statistics. The batch size should be sufficiently large to obtain a reasonable estimate of the noise sensitivity[We find that using the batch sizes typically used for training, works well in most cases]. When large batches cannot be processed, data samples can be processed individually and stacked. The statistics can then be accumulated over the stack. For training, we use the procedure in <cit.>, and apply the backward gradient updates to noise-free weights. Computational complexity of standard deviation based layer sensitivity analysis: Existing layer sensitivity approaches start with the un-quantized model and proceed by quantizing a single layer at a time, evaluating the model accuracy in each case <cit.>. The layers are then sorted in decreasing order of sensitivity and the most sensitive layers are skipped during quantization <cit.>. Existing software packages such as OpenVino have introduced the Accuracy Aware Algorithm, a slightly modified version of the approach starting with the quantized version of the model <cit.>. However, these approaches are brute-force methods that can take O(Nt) time for a model with N layers and evaluation time t for the sample of data. In contrast, the standard deviation based sensitivity analysis only requires a sample of data to be passed once through the model for a reduced complexity of O(t). For models with large N this can lead to significant speed improvements in layer sensitivity analysis. § EXPERIMENTS We evaluate SAFT on eight different models. In all cases, similar to prior work in <cit.> we only apply noise to the matrix-multiplication layers (such as Convolutions, Linear etc.), and leave other layers such as batchnorm or activation layers as noise-free. We also evaluate the use of KL-divergence as an alternate metric to standard deviation in our experiments. Similar to prior work, we evaluate SAFT with simulated hardware noise using both multiplicative and additive noise, wherein noise is injected into the weights <cit.>. We sample the noise N from both a Gaussian distribution with zero mean as in prior work <cit.>, N ∼𝒩(0, σ), and from a Uniform distribution N ∼ U[-r_1, r_1]. Our baseline noise-injection is implemented similar to the approach in <cit.>. 
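For reference, the following is a minimal PyTorch-style sketch of this kind of weight-level noise injection with clean-weight updates. It is an illustration under our own assumptions (module types, default noise parameters, and helper names are ours), not the authors' implementation or the hardware noise model of the cited work.

```python
import torch
import torch.nn as nn

NOISY_TYPES = (nn.Conv2d, nn.Linear)  # matrix-multiplication layers only

def perturb_weights_(model, sigma=0.02, mode="gaussian", r1=0.02):
    """In-place weight perturbation; returns backups for later restoration."""
    backups = {}
    for name, module in model.named_modules():
        if isinstance(module, NOISY_TYPES):
            w = module.weight.data
            backups[name] = w.clone()
            if mode == "gaussian":             # multiplicative Gaussian noise
                w.mul_(1.0 + sigma * torch.randn_like(w))
            else:                              # additive uniform noise in [-r1, r1]
                w.add_(torch.empty_like(w).uniform_(-r1, r1))
    return backups

def restore_weights_(model, backups):
    for name, module in model.named_modules():
        if name in backups:
            module.weight.data.copy_(backups[name])

def noise_injection_step(model, batch, loss_fn, optimizer, sigma=0.02):
    """One training step: noisy forward/backward, update applied to clean weights."""
    x, y = batch
    backups = perturb_weights_(model, sigma=sigma)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    restore_weights_(model, backups)   # gradients are applied to noise-free weights
    optimizer.step()
    return loss.item()
```

Restoring the clean weights before `optimizer.step()` reflects the convention mentioned above of applying backward gradient updates to noise-free weights.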
The parameters of the noise distributions for the different models are shown in Appendix Table <ref>. The specific noise parameters were chosen so as to result in a drop in the performance of all the models, which can then be recovered through training. Note that we set a fixed seed for all our training runs to ensure fair comparison. For SAFT, we compute the standard deviation values on a single batch of training data. We freeze total - k layers in a model during training, retraining only k. We determined k empirically by visualizing the standard deviation plots and checking the number of layers that have a relatively high noise standard deviation. We seek to standardize this procedure in our future work. Table <ref> in the Appendix shows the batch size, and k values (#Frozen = #Total - k) used in our experiments. Some models require more layers to be trained than others owing to higher noise in more layers. Our experiments evaluate: Given the exact same training parameters, does SAFT perform similar to baseline noise-injection training? Note that our research question compares our approach to noise-injection training, rather than to obtain a predefined target performance, which noise-injection training has already been shown to achieve with sufficient epochs <cit.>. In our training experiments, we only train for a few epochs (1-5 epochs) to see if the performances of the two approaches match, whereas achieving close to the baseline noise-free FP32 performance takes many more epochs <cit.>. § RESULTS Noise standard deviation plots for four models are shown in Figure <ref>. Stars ⋆ indicate the layers that are trained, while the remaining are frozen. Similar to prior findings, the first and last set of layers in vision models exhibit high sensitivity <cit.>. We also see a “sawtooth” pattern in the vision models like ResNet, corresponding to the repeating blocks in the network, consistent with observations in prior work <cit.>. For MobileNet v3, quite a few of the convolution layers have a higher noise standard deviation compared to ResNet50. For Faster RCNN, we see that several layers in the “head” of the model, responsible for predicting the bounding box locations, are particularly sensitive. For Bert base, the sensitivity is quite spread out across the model with several layers exhibiting high noise sensitivity. Specifically, 10 of the trained 20 layers are self-attention layers, with the remaining 10 being intermediate and output dense layers. For models like Bert, identifying the most sensitive layers can be tricky and a larger proportion of layers have to be trained compared to other models. The corresponding training speed improvements are shown in Tables <ref> and <ref>, where up to 8× speed improvements in training can be observed. In the case of a few models such as ResNet18 and RCNN, speed up of about 2× and 3× is observed. The actual amount of speedup depends on the processing time of each layer, which in turn depends on the size of the layer (i.e., # of parameters). Hence, a direct correlation between size of the model (i.e., # of layers) and the speedup is difficult to establish. The final performance of SAFT in terms of the model metrics is shown in Tables <ref> and <ref> for Gaussian and Uniform noise respectively. We see that SAFT (with k frozen layers) closely matches[Our results were confirmed with the Wilcoxon Signed Rank test (α = 0.05)] the performance of the full noise-injection training approach for all noise models, leading to improvements in terms of the metrics. 
An interesting finding here is that specific layers that do not form a continuous sequence, can be independently trained. Typically in transfer learning a continuous sequence of the last few layers, such as the last few convolutional and fully-connected layers, are often retrained <cit.>. Lastly, we train a few models with Gaussian noise injection, by using KL-divergence for selecting the layers to freeze as opposed to using standard deviation (See Table <ref>). The results clearly show that freezing layers based on their standard deviations outperforms KL-divergence based layer selection. Furthermore, computation of KL-divergence requires converting the activations into probability distributions, which can be avoided in the case of the standard deviation based method. Interestingly, using KL-divergence did not improve performance on vision models, although it did perform well on Bert base. It is possible that since most layers in Bert base have high noise sensitivity (See Figure <ref>), KL-divergence chose and trained some of the noisiest layers, whereas the noisiest layers are much more specific and localized in the case of the vision models. Especially in such cases, KL-divergence could not perform as well as standard deviation in identifying these more specific layers. § CONCLUSIONS AND FUTURE WORK We introduced Sensitivity-Aware Finetuning (SAFT) for fast finetuning of pretrained models to deal with noise. SAFT computes layer sensitivity using standard deviations to freeze some layers. SAFT performs comparably to noise-injection training in terms of accuracy, while being faster at training. In the future, we will investigate additional metrics for SAFT, including combinations of metrics like standard deviation and KL-divergence. We will investigate techniques for easily identifying the k hyperparameter used in SAFT. We believe the layer sensitivity analysis can also be used for performing Partial Quantization and Quantization-Aware Training <cit.> in future work. IEEEtran § APPENDIX
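To make the layer-selection procedure of Section 2 (Appendix Algorithm 1) concrete, here is a minimal PyTorch-style sketch of the standard-deviation-based sensitivity analysis and the subsequent freezing step. The hook mechanics and helper names are illustrative assumptions rather than the authors' released code; `noisy_model` is assumed to be a structurally identical, weight-perturbed copy of `clean_model`.

```python
import torch
import torch.nn as nn

TARGET_TYPES = (nn.Conv2d, nn.Linear)

@torch.no_grad()
def layer_noise_std(clean_model, noisy_model, x):
    """Per-layer sensitivity: std of (noisy - clean) outputs, where each noisy
    layer is fed the *clean* model's recorded layer inputs."""
    records = {}   # layer name -> (input, clean output)
    hooks = []

    def make_hook(name):
        def hook(_module, inputs, output):
            records[name] = (inputs[0].detach(), output.detach())
        return hook

    for name, module in clean_model.named_modules():
        if isinstance(module, TARGET_TYPES):
            hooks.append(module.register_forward_hook(make_hook(name)))
    clean_model(x)                      # one forward pass records all layer I/O
    for h in hooks:
        h.remove()

    noisy_layers = dict(noisy_model.named_modules())
    stds = {}
    for name, (inp, clean_out) in records.items():
        noisy_out = noisy_layers[name](inp)
        stds[name] = (noisy_out - clean_out).std().item()
    return stds

def freeze_all_but_top_k(model, stds, k):
    """SAFT layer selection: train only the k most noise-sensitive layers
    (other layer types, e.g. batchnorm, are left untouched in this sketch)."""
    top_k = set(sorted(stds, key=stds.get, reverse=True)[:k])
    for name, module in model.named_modules():
        if isinstance(module, TARGET_TYPES):
            requires_grad = name in top_k
            for p in module.parameters():
                p.requires_grad = requires_grad
```

The returned `stds` dictionary can be plotted to pick k by inspection, as done for the models in Figure <ref>.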
http://arxiv.org/abs/2306.09263v1
20230615163855
Two sided ergodic singular control and mean field game for diffusions
[ "Sören Christensen", "Ernesto Mordecki", "Facundo Oliú Eguren" ]
math.OC
[ "math.OC", "math.PR", "91A16" ]
Two sided ergodic singular control and mean field game for diffusions Sören Christensen[Department of Mathematics, Christian-Albrechts-Universität Kiel, Germany. email: [email protected]] Ernesto Mordecki[Centro de Matemática. Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay. Corresponding author. email: [email protected]] Facundo Oliú Eguren[Ingeniería Forestal. Centro Universitario de Tacuarembó, Universidad de la República, Uruguay. email: [email protected]] July 31, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================= Consider two independent controlled linear diffusions with the same dynamics and the same ergodic controls, the first corresponding to an individual player, the second to the market. Let us also consider a cost function that depends on the first diffusion and the expectation of the second one. In this framework, we study the mean-field game consisting in finding the equilibrium points where the controls chosen by the player to minimize an ergodic integrated cost coincide with the market controls. We first show that in the control problem, without market dependence, the best policy is to reflect the process within two boundaries. We use these results to get criteria for the optimal and market controls to coincide (i.e., equilibrium existence), and give a pair of nonlinear equations to find these equilibrium points. We also get criteria for the existence and uniqueness of equilibrium points for the mean-field games under study. These results are illustrated through several examples where the existence and uniqueness of the equilibrium points depend on the values of the parameters defining the underlying diffusion. Keywords: Ergodic singular control, Diffusions, Mean field games, Nash equilibrium § INTRODUCTION In recent years, mean-field game theory has emerged as a powerful framework for modeling the behavior of large populations of interacting agents in a stochastic environment. This interdisciplinary field lies at the intersection of mathematics, economics, and engineering, offering deep insights into complex systems characterized by strategic interactions. Mean field game models have found applications in various domains, including for instance, finance, energy systems <cit.>, or traffic management and social dynamics <cit.>. The study of mean field games typically involves analyzing the behavior of a large number of identical agents who aim to optimize their individual objectives while accounting for the average behavior of the population. In the present paper we incorporate a mean-field game model in the two-sided ergodic singular control problem for Itô diffusions, giving new insights in the general framework described above. The control problem has been extensively studied and analyzed in the literature, see for example <cit.>, <cit.>, and <cit.>. Regarding applied work, we mention studies focusing on cash flow management that explore optimal dividend distribution, recapitalization, or a combination of both while considering risk neutrality. 
These investigations also fall under the domain of singular control problems, see for instance <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Recently, there have been advances in the more particular case where the nature of the problem makes controls of bounded variation to appear naturally, also called two-sided average singular control ergodic problem, see for example the works by <cit.> and <cit.>. In this second paper, the author restricted the problem to reflecting barriers, that is, the policies allowed are the ones that reflect in thresholds. Our main objective is to investigate the interplay between two-sided ergodic singular control and mean field game theory under the framework proposed by <cit.>, considering a class of more general controls. Our results indicate how individual agents can make strategic decisions in the presence of a large population while accounting for the impact of their own control on the average behavior. To achieve our goals, we postulate a verification theorem in the form of a Hamilton-Jacobi-Bellman equation and use the ergodic properties of the controlled processes to turn the mean-field game into an analytic problem. As a consequence, we obtain necessary and sufficient conditions for the existence of mean-field game equilibrium points, and, for more restricted families of cost functions, uniqueness for thresholds strategies. Finally, we define an N-player problem and prove that it approximates our ergodic mean field game as the number of players goes to infinity. This means that the mean-field equilibrium is an approximate Nash equilibrium for the N-player game. The rest of the paper is organized as follows. In Section <ref> we study the control problem. After introducing the necessary tools, we state and prove the main result of the section, i.e. the optimality of reflecting controls obtained within the class of càdlàg controls. In Section <ref> we consider the mean-field game problem. It adds the complexity of a two-variable cost function where the second variable represents the market. The main result consists of a set of conditions for the existence and uniqueness of equilibrium strategies, containing also a particular analysis when the cost function is multiplicative. Section <ref> presents three examples that illustrate these results. Section <ref> contains approximation results when the market consists of N players. A final appendix includes some auxiliary computations corresponding to the last two sections. § CONTROL PROBLEM In this section, we consider the one-player control problem. We first recall results obtained by <cit.> that play a fundamental rôle along the paper. These results consist in the determination of optimal control levels in an ergodic framework for a diffusion within the class of reflecting controls. After this, we prove that the optimum levels found in <cit.> give in fact the optimum controls within the wider class of finite variation càdlàg controls. Let us consider a filtered probability space (Ω,P,{ℱ_t },ℱ) that satisfy the usual assumptions. In order to define the underlying diffusion consider the measurable functions μ→ and σ→ assumed to be locally Lipschitz. Under these conditions the stochastic differential equation dX_t=μ (X_t)dt+ σ(X_t)dW_t , X_0=x_0 has a unique strong solution up to an explosion time, that we denote X={X_t t≥ 0} (see <cit.>). Observe that our framework includes quadratic coefficients. Alternatively, our results can be formulated in the framework of weak solutions as in <cit.>. 
As usual, we define the infinitesimal generator of the process X as ℒ_X = 1/2σ^2 (x) d^2/ d^2 x + μ(x) d/ dx. We denote the density of the scale function S(x) w.r.t the Lebesgue measure as S'(x)=exp(-∫^x 2μ(u)/σ^2(u) du ), and the density of the speed measure m(x) w.r.t the Lebesgue measure as m'(x)=2/σ^2(x)S'(x). As mentioned above, the underlying process is controlled by a pair of processes, the admissible controls, that drive it to a convenient region, defined below. An admissible control is a pair of non-negative {ℱ_t }-adapted processes (D,U)=({D_t}_t≥ 0,{U_t}_t≥ 0) such that: (i) Each process D,UΩ×ℝ_+ →ℝ_+ is right continuous and non decreasing almost surely. (ii) For each t≥ 0 the random variables U_t and D_t have finite expectation. (iii) For every x ∈ the stochastic differential equation dX^(D,U)_t:= μ (X^(D,U)_t)dt + σ(X^(D,U)_t)dW_t-dD_t +dU_t, X_0=x has a unique strong solution with no explosion in finite time. We denote by 𝒜 the set of admissible controls. Note that condition (iii) is satisfied, for instance, when the coefficients are globally Lipschitz. (See the remark after Theorem V.38 in <cit.>.) Observe also that condition (ii) is not a real restriction, as, for instance, the integral in the cost function G(x) in (<ref>) that we aim to minimize, in case of having infinite expectations, is infinite. A relevant sub-class of admissible controls is the set of reflecting controls. For a<b denote by X^a,b={ X^a,b_t t ≥ 0 } the strong solution of the stochastic differential equation with reflecting boundaries at a and b: dX^a,b_t= μ (X^a,b_t)dt + σ(X^a,b_t)dW_t-dD^b_t +dU^a_t, X_0=x. Here {D^b_t}, {U^a_t} are non-decreasing processes that increase, respectively, only when the solution visits a or b. As the above equation has a strong solution that does not explode in finite time (see <cit.>), the pair ({U^a_t}, {D^b_t}) belongs to 𝒜, and are called reflecting controls. If x ∉ (a,b) we define the policy by sending the process to a point inside the interval at time t=0. We introduce below the cost function to be considered with some natural assumptions on its behavior. Consider a continuous cost function c ℝ→ℝ_+ and two positive values q_u, q_d. Assume that c(x) ≥ c(0)≥ 0 ∀ x ∈ℝ, c(x) +K ≥α| x |. where K and α are positive constants. Define the maps π_1(x)=c(x)+q_dμ(x), π_2(x)=c(x)-q_u μ(x) and assume that: (i) There is a unique real number x_i^0 = {π_i(x) } so that π_i( · ) is decreasing on (-∞ ,x_i^0) and increasing on (x_i^0, ∞) for i=1,2. (ii) It hold that lim_x→∞π_1(x) = lim_x → -∞π_2(x) = ∞. We define the ergodic cost function as G(x)= inf_(D,U) ∈𝒜lim sup_T→∞1/T_x ( ∫_0^T c(X^D,U_s)ds +q_u U_T +q_d D_T ), where (D,U) is a pair of admissible controls in 𝒜. The existence of a unique pair of optimal controls within the class of reflecting controls was obtained by <cit.>, from where we borrow the notation and assumptions. More precisely the author proved the next two results: Assume <ref>. Then (a) If a<b then lim sup_T →∞1/T_x ( ∫_0^T c(X_s^a,b) ds + q_d dD^a_s+q_u dU^b_s ) =1/m(a,b)[ ∫_a^b c(u)m(du) +q_u/S'(a)+q_d/S'(b)]. (b) There is an unique pair of points a^∗<b^∗ that satisfy the equations: (i) π_1(b^∗) = π_2(a^∗), (ii) ∫_a^∗^b^∗( π_1(t)-π_1(b^∗) ) m(dt) + q_d+q_u/ S'(a^∗) =0. Furthermore, the pair (a^∗, b^∗) minimizes the expected long-run average cost within the class of reflecting policies. Condition (i) is obtained from the fact X^a,b is stationary. Regarding equation (ii), it arises after differentiation in order to determine the minimum. 
Uniqueness of the solution is proved based on the properties of the cost function. See the details in <cit.>. §.§ Optimality within 𝒜 Optimality within the class 𝒜 of càdlàg controls requires further analysis. As expected, and mentioned in <cit.>, the optimal controls within class 𝒜 are the same controls found in the class of reflecting controls. An analogous result to the one presented below was obtained for non-negative diffusions when considering a maximization problem in which q_u and q_d have opposite signs in <cit.> (see also <cit.>). More precisely, it is clear that inf_a<blim sup_T →∞1/T_x ( ∫_0^T c(X_s^a,b) ds +q_d dD^a_T +q_u dU^b_T ) ≥ G(x). Then, to establish the optimality within 𝒜 it is necessary to obtain the other inequality. This task is carried out with the help of the solution of the free boundary problem (<ref>) below, as done in <cit.>. The mentioned differences with this situation require different hypotheses and slightly different arguments. Consider a diffusion defined by (<ref>), a pair of controls (D,U)∈𝒜 and a cost function satisfying Assumptions <ref>. Suppose that there exist a constant λ≥ 0 and a function u ∈ C^2(ℝ) such that (ℒ_X u) (x)+c(x) ≥λ, -q_u ≤ u'(x) ≤ q_d , and lim inf_T →∞1/T| _x (u(X^D,U (T))) | =0, for all x ∈ℝ. Then, the ergodic cost function defined in (<ref>) satisfies G(x)≥λ. Fix T>0. For each n≥ 1 define the stopping times T_n= inf{ t ≥ 0 | X^D,U_t |≥ n }∧ T↗ T a.s. Using Itô formula for processes with jumps (observe that the diffusion X is continuous but the controls can have jumps, and in consequence the controlled processes X^D,U can have jumps), u(X^D,U(T_n)) = u(x)+ ∫_0^T_n u'(X^D,U_s- ) dX^D,U_s +1/2∫_0^T_nu”(X^D,U_s-)d ⟨ (X^D,U)^c,(X^D,U)^c ⟩ _s +∑_s ≤ T_n( u(X^D,U_s)-u(X^D,U_s-) -u'(X^D,U_s- ) △ X^D,U_s ). The r.h.s in (<ref>) can be rewritten as u(x) + ∫_0^T_n (ℒ_X u)(X^D,U_s-)ds -∫_0^T_nμ(X^D,U_s-)u'(X^D,U_s-) ds + ∫_0^T_nu'(X^D,U_s-) d X^D,U_s +∑_s ≤ T_n( u(X^D,U_s)-u(X^D,U_s-) -u'(X^D,U_s- ) △ X^D,U_s ). Using the fact that u'(X^D,U_s-)=u'(X^D,U_s) in a set of total Lebesgue measure in [0,T] almost surely, and that △ X^D,U_s= △ U_s-△ D_s, we rewrite (<ref>) as u(x) + ∫_0^T_n (ℒ_X u)(X^D,U_s-)ds + ∫_0^T_nu'(X^D,U_s-)σ(X^D,U_s-) d W_s + ∫_0^T_n u'(X^D,U_s-) d (U_s-D_s) +∑_s ≤ T_n( u(X^D,U_s)-u(X^D,U_s-) - u'(X^D,U_s-)(△ U_s -△ D_s) ). Therefore, denoting by U_s^c and D_s^c the continuous parts of the processes U_s and D_s respectively, and using the inequalities (<ref>) in the hypothesis, we obtain u(X^D,U (T_n)) ≥ u(x)+ λ T_n - ∫_0^T_nc(X^D,U_s-) ds +∫_0^T_n u(X^D,U_s-) σ (X^D,U_s-) dW_s -∫_0^T_nq_u dU^c_s -∫_0^T_n q_d dD^c_s - ∑_0 ≤ s ≤ T_n (△ D_s q_d + △ U_s q_u) =u(x)+ λ T_n - ∫_0^T_nc(X^D,U_s-) ds +∫_0^T_n u(X^D,U_s-) σ (X^D,U_s-) dW_s -q_u U_T_n-q_d D_T_n. Rearranging the terms above and taking the expectation we obtain _x (u(X^D,U (T_n))) -u(x)+ _x (∫_0^T_n c(X^D,U_s-)ds + q_u U_T_n+ q_d D_T_n) ≥λ_x (T_n). Taking first limit as n tends to infinity, dividing then by T, and finally taking lim inf as T goes to infinity we obtain (<ref>) concluding the proof of the verification theorem. Consideration of free boundary problems such as (<ref>) in the framework of singular control problems can be found for example in <cit.>, <cit.>, and <cit.>. In <cit.>, the author studied the same problem of this section and used a free boundary problem to find some useful properties of optimal controls within the class of reflecting barriers. 
More precisely, under the same assumptions as above, to study the ergodic optimal control problem in the class of reflecting controls, the author considered the free boundary problem consisting of finding a<b, λ and a function u in C^2(ℝ) such that { (ℒ_X u) (x) + c(x) =λ, for all x ∈ (a,b), u(x) =q_d (x-b) +u(b), for all x ≥ b, u(x) =q_u (a-x) +u(a), for all x ≤ a. . For this problem it is proved (see Remark 2.4 in <cit.>) that there exists a unique solution such that -q_d ≤ u'(x) ≤ q_u, (ℒ_X u) (x) + c(x)≥λ. Here, similar to <cit.>, we use the results obtained in <cit.> to get a suitable candidate to apply Theorem <ref>. Consider a diffusion defined by (<ref>) and a cost function satisfying Assumptions <ref>. Then, the reflecting controls with barriers given in (b) in Theorem <ref> minimize the ergodic cost in (<ref>) within the set 𝒜 of admissible controls. Take u as the solution of the free boundary problem (<ref>) defined above. In view of condition (<ref>), we need to prove that the infimum of the ergodic costs is reached in the set ℬ:= { (D,U) ∈𝒜, lim inf_T →∞1/T|_x(u(X^D,U_T))|=0 } . If (D,U) ∉ℬ then there exists constants L>0 and S>0 such that 1/s_x u(X^D,U_s)>L , for all s ≥ S. Using the fact that u is a linear function in (a,b)^c and that -q_u ≤ u'(x) ≤ q_d it follows that 1/s_x | X^D,U_s |>L/q_d+q_u -| u(0) |/s , for all s ≥ S. We conclude that lim sup_T→∞1/T_x ( ∫_0^T c(X^D,U_s)ds ) ≥lim sup_T→∞1/T∫_S^T _x (α| X^D,U_s | +K)ds ≥ lim sup_T→∞∫_S^T α (q_d+q_u)^-1L ds +(K-| u(0) |)(T-S)/T=∞. Thus in the set ℬ^c the infimum defined in (<ref>) cannot be reached because if we take ℬ^c instead of 𝒜 in that equation then G(x)= ∞ for all x ∈ℝ, concluding the proof. § MEAN FIELD GAME PROBLEM As usual in the mean-field formulation, it is assumed that the cost function C depends on two variables, i.e. C=C(x,μ). The first variable represents the state of the player x, and the second one is a probability measure μ that represents the aggregate of the rest of the players, referred as the market. For this kind of problem, there is a fixed measure valued function μ={μ_t} and the player tries to choose the best control α={α_t} belonging to a set 𝒞 of stochastic processes (the set of admissible controls), in order to minimize a target function J^μ_α, that represents the average cost in the long run. It is common to solve these types of problems with the help of fixed point arguments (see for example <cit.> and <cit.>) applied to a map of the form μ→ℒ(X^μ_α) with ℒ(X^μ_α) the law of the state process controlled by α such that α∈_β∈𝒞 J^μ_β . §.§ Market evolution In the case considered in this paper, we assume that the aggregate of players is modeled by a controlled diffusion Y^(D,U)={Y^(D,U)_t t≥ 0}. The definitions of Y^(D,U) and Y^c,d are analogous to the definitions of X^(D,U) and X^a,b respectively, in particular the coefficients of the respective stochastic differential equations and the class of admissible controls are the same. We further assume that the driving processes { Y_t t≥ 0 } and { X_t t≥ 0 } are independent. Comparing this approach with the general formulation, the measure μ_t is the distribution of the random variable Y^D,U_t that enters the cost function through an expectation, i.e. C(x,μ)=c(x,∫_ f(y)dμ), where f is a measurable function. We can then formulate this situation through the cost c(x,y) that depends on an pair (x,y) ∈ℝ^2. This allows us to use a more direct approach to find equilibrium points. 
More precisely, if the expectation of the market diffusion _y(f(Y^D,U_t)) has an ergodic limit ℓ=_y(f(Y^D,U_∞)), applying the previous results, we know that the optimal controls for X should be found in the class of reflecting controls (considering a one variable cost function of the form c(·,ℓ)). This is why we assume that the aggregate market diffusion is also controlled by reflection at some levels c<d, and expect to obtain an equilibrium point when the optimal levels a<b that control the player's diffusion coincide with c<d (see Def. <ref>). Note that the question of the existence of equilibrium strategies beyond the class of reflecting controls is not addressed here. As mentioned above, we consider a cost function of two variables, respectively the state of the player and the aggregate of players. The proposal to study the existence and uniqueness of equilibrium points begins with the application of Theorem <ref> when the aggregate state is constant. The cost function becomes one-dimensional and the results in <cit.> can be applied. The requirements to apply these results in the mean field game formulation follow. Assume that c ℝ^2 →ℝ_+ is a continuous function, and the positive constants q_u, q_d are the cost of the controls. Assume that, for each fixed y ∈ℝ there exist a value x_y and positive constants K_y and α_y such that c(x,y) ≥ c(x_y,y)≥ 0, for all x ∈, c(x,y)+K_y ≥α_y | x |, for all x ∈. Define the maps π_1(x,y):=c(x,y)+q_dμ(x), π_2(x,y)=c(x,y)-q_u μ(x), and assume that for each fixed y ∈ℝ (i) There exists a unique real number x^y_i = argmin{π_i(x,y) x∈} so that π_i( · ,y) is decreasing on (-∞ ,x^y_i) and increasing on (x^y_i, ∞), where i=1,2. (ii) The following limits hold: lim_x→∞π_1(x,y) = lim_x → -∞π_2(x,y) = ∞. From a financial viewpoint, the cost function c could be interpreted as a map measuring the cost incurred from operating with an individual stock and _x(f(X_s^a,b)) is the effect of the market at time s. §.§ Conditions for optimality and equilibrium In this setting, we can generalize the results of the section before using some simple ergodic results for diffusions. We say that (a^*,b^*), a^* ≤ b^* is an equilibrium point if (D^b^∗,U^a^∗) belongs to the set _(D,U) ∈𝒜{lim sup_T →∞1/T_x ( ∫_0^T c(X_s^D,U,_x(f(Y_s^a^*,b^*))) ds+ q_d dD_s+q_u dU_s )}. Consider the points a<b, c<d, and x ∈ℝ. Then lim sup_T →∞1/T_x ( ∫_0^T c(X_s^a,b,_x(f(Y_s^c,d))) ds+q_d dD^b_s+q_u dU_s^a ) =1/m(a,b)[ ∫_a^b c(u,R(c,d))m(du) + q_u/S'(a)+q_d/S'(b)], where R(c,d)= ∫_c^df(u)/m(c,d)m(du). Applying Theorem <ref> with the cost function c(·,R(c,d)) we obtain that lim_T →∞1/T_x (∫_0^T c(X_s^a,b,R(c,d))ds+q_d dD^b_s+q_u dU_s^a) =1/m(a,b)[∫_a^b c(u,R(c,d))m(du) + q_u/S'(a)+q_d/S'(b)], i.e. the r.h.s. in (<ref>). It remains then to verify that lim sup_T →∞1/T_x ( ∫_0^T | c(X_s^a,b,_x(f(X_s^c,d)))-c(X_s^a,b,R(c,d))| ds) =0. In order to do this, define the continuous function H[c,b] →ℝ^+ by H(y)= max_u ∈ [a,b]| c(u,y)-c(u,R(c,d)) |, and observe that the limit in (<ref>) can be bounded by lim sup_T →∞1/T_x ( ∫_0^T H(_x(f(Y_s^c,d))) ds)= lim sup_T →∞1/T∫_0^T H(_x(f(Y_s^c,d))) ds =lim sup_T →∞1/T∫_0^T H ( ∫_(c,d) f(y) d_s (x,dy)) ds , with _s(x, dy):= _x(Y^c,d_s ∈ dy). This limit is zero because H ( _(c,d) f(y) d_s (x,dy))→ H(R(c,d))=0, because H is uniformly continuous, bounded and ‖_s(x, ·)- m(·) 1/m(c,d)‖→ 0, as s →∞, with the norm of total variation (see Theorem 54.5 in <cit.>). We conclude that (<ref>) holds, concluding the proof. 
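To make the quantities in the lemma concrete, the following small numerical sketch (not part of the original argument; all parameter values are purely illustrative) evaluates S', m', R(a,b) and the right-hand side of the lemma by quadrature for an Ornstein–Uhlenbeck-type diffusion dX_t = -θ X_t dt + s dW_t with f(x)=x and cost c(x,y)=|x-y|, as in the examples considered later.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (not from the paper): OU diffusion dX = -theta*X dt + s dW
theta, s = 1.0, 1.0
q_u, q_d = 0.5, 0.5
cost = lambda x, y: abs(x - y)          # "follow the market" cost c(x, y) = |x - y|

S_prime = lambda x: np.exp(theta * x**2 / s**2)   # scale density S'(x)
m_prime = lambda x: 2.0 / (s**2 * S_prime(x))     # speed density m'(x)

def m_measure(a, b):
    return quad(m_prime, a, b)[0]

def R(a, b):
    """Ergodic mean of f(Y^{a,b}) = Y under the stationary law m(du)/m(a,b)."""
    return quad(lambda u: u * m_prime(u), a, b)[0] / m_measure(a, b)

def stationary_cost(a, b, c_, d):
    """Right-hand side of the lemma: player reflected in (a,b), market in (c_,d)."""
    y = R(c_, d)
    integral = quad(lambda u: cost(u, y) * m_prime(u), a, b)[0]
    return (integral + q_u / S_prime(a) + q_d / S_prime(b)) / m_measure(a, b)

# Example: symmetric reflecting barriers; by symmetry R(-1, 1) = 0 here.
print(R(-1.0, 1.0), stationary_cost(-1.0, 1.0, -1.0, 1.0))
```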
The existence and uniqueness of minimizers given in (b) in Theorem <ref> can also be generalized, by noticing that in Theorem <ref> the second variable in the cost function is fixed. The optimality of refecting controls within the class of càdlàg controls corresponding to Def. <ref> follows from Theorem <ref>. For a fixed (a,b), the infimum of the ergodic problem is reached only at a pair (a^∗,b^∗) such that (i) π_1(b^∗,R(a,b))= π_2(a^∗,R(a,b)), (ii) ∫_a^∗^b^∗( π_1(t,R(a,b))-π_1(b^∗,R(a,b)) ) m(dt) + q_d+q_u/ S'(a^∗)=0. Based on this result we obtain a condition for equilibrium of the mean-field game (see Definition <ref>). A pair (a,b), a<b, is an equilibrium point if and only if (i) π_1(b,R(a,b))= π_2(a,R(a,b)), (ii) ∫_a^b [ π_1(t,R(a,b))-π_1(b,R(a,b)) ] m(dt) + q_d+q_u/ S'(a)=0. §.§ The multiplicative case In this subsection we assume there are a pair of positive function h continuous and g convex, both with minimum at zero, such that c(x,y)= g(x)h(y). Note that such a multiplicative decomposition is particularly natural when g(x) is interpreted as a standardized representation of the units of a good corresponding to state x and h(y) as the factor modeling the unit costs based in the market. We give a first result that follows from Theorem <ref> if the cost function is multiplicative. From Theorem <ref>, using condition (i), one of the variables can be obtained as a function of the other. For this purpose, consider the set C_a = { b > aπ_1 (b,R(a,b))=π_2(a,R(a,b)) } . For the problem not to degenerate, we add the condition C_a ≠∅⇔ a ≤ 0. This means that we search for the equilibrium points in a connected set. Furthermore * for a fixed a≤ 0 we denote the infimum of the set (<ref>) as ρ(a). * For simplicity we denote L(a)=R(a,ρ(a)). Assume that the hypothesis (<ref>) holds, c(x,y)=g(x)h(y). Suppose there is a point a_0 ≤ 0 such that the function ρ defined via (<ref>) is continuous in (-∞, a_0], then: (C_1) If ∫_a_0^ρ(a_0) (π_1 (t,L(a_0))-π_1(ρ(a_0),L(a_0)))m(dt) +q_d+q_u/S'(a_0)≥ 0, then there is at least one equilibrium point. (C_2)Furthermore if in (-∞, a_0], π_2 (t,L(a_2))- π_2(a_2,L(a_2)) < π_2 (t,L(a_1))- π_2(a_1,L(a_1)) ∀ (a_2,a_1,t) s.t, a_2<a_1<t ≤ a_0, π_1(t,L(a_2) )- π_1(ρ(a_2),L(a_2)) < π_1(t,L(a_1) )- π_1(ρ(a_1),L(a_1)) ∀ (a_2,a_1,t) s.t. ρ(a_2) >ρ(a_1) > t ≥ a_0, and ∫_r^l (π_1 (t,R(r,l))-π_1(l,R(r,l)))m(dt)+ q_d+q_u/S'(r) >0, ∀ r ∈ (a_0,ρ(a_0)), l >r , π_1 (l,R(r,l))=π_2(r,R(r,l)), then the equilibrium is unique. For the existence of equilibrium points we need to prove ∫_A^ρ(A) (π_1 (t,L(A))-π_1(ρ(A),L(A)))m(dt) +q_d+q_u/S'(A) < 0, for some A<a_0. First, observe that, the inequality can be rewritten as: ∫_A^0 (π_2 (t,L(A))-π_2(A,L(A)))m(dt) +∫_0^ρ(A) (π_1 (t,L(A))-π_1(ρ(A),L(A)))m(dt) +q_d+q_u/S'(0)<0. Furthermore, due to the nature of the multiplicative cost, the points x_i^y,i=1,2 defined in (<ref>) can be taken all equal to x_i^0 for each i respectively. Thus, for A negative enough, both integrands are always negative and tend to -∞ when A → -∞. Finally, for the uniqueness, condition (C_2) implies that the map defined in (-∞,a_0]: a →∫_a^ρ(a) (π_1 (t,L(a))-π_1(ρ(a),L(a)))m(dt) +q_d+q_u/S'(a) is monotone, thus concluding that the root of this map is unique. In the particular case of a diffusion without drift, the conditions of the previous proposition are satisfied under the following simple conditions. 
If the cost function c(x,y)=h(x)g(y), both h and g positive, h is unbounded, convex and with minimum at zero and the process has no drift then: (a) We have ρ(a)=h_|_(-∞,0)^-1(a), and there exists and equilibrium point. (b) If the function g(R(a,ρ(a))) is strictly decreasing in the half-negative line, the equilibrium is unique. The form of ρ follows from the fact that the drift vanishes. Taking a_0= 0 part (a) follows from the fact ρ(0)=0 and condition (C_1) above is fulfilled. Condition (C_2) is verified, the first two statements follow from the monotonicity of h and a → g(a,R(a)), the third integral condition is automatic as (a_0,ρ(a_0))=∅. § EXAMPLES We present below several examples where the equations of Theorem <ref> can be expressed more explicitly and solved numerically. To help the presentation, for each example, we plot in an (a,b) plane the implicit curves defined by these equations. To this end, we write equation (i) in Theorem <ref> as F(a,b)=π_1(a,R(a,b))-π_2(b,R(a,b))=0, and draw in red the set of its solutions. We also draw in purple the set determined by condition (ii). Note that there are cases where there is an intersection of the red and purple curve outside the set { a < b }, these points are of no interest for our problem. In all examples the function affecting the expectation of the market is f(x)=x. Also for ease of exposition we present the conclusions and the plots and defer the computations to the Appendix (see Subsection <ref>). §.§ Examples with multiplicative cost The cost function now has the form c(x,y)= max (-λ x,x) (1+ | y |^β), λ >0, β≥ 1. and q_d λ = q_u. In this scenario the value max (-λ x, x) could represent the maintenance cost of certain property done by a third party. This third party will change the price of its services depending on the demand of the market. We consider a mean reverting process that follows the stochastic differential equation dX_t= -θ X_t dt+ σ(X_t) dW_t, such that σ is a function that satisfies the conditions of Section <ref>. In the particular case when σ is constant, we can compute R(a,b)= √(σ^2 /θπ)( e^-a^2 θ/σ^2 -e^-b^2 θ/σ^2/erf(√(θ/σ^2) b) - erf( √(θ/σ^2) a ) ). As an application of Proposition <ref>, under the condition q_d θ < 1 ∧ (λ^-1), existence of equilibrium points holds. Furthermore, if σ is even then uniqueness also holds. Again, the calculations are in Appendix <ref>. In the graphical examples below σ is a constant s. §.§ “Follow the market" examples The idea is to introduce a cost function in such a way that the player has incentives to follow the market evolution. The cost function is then c(x,y)= | x - y |. §.§.§ Brownian motion with negative drift In this case, the driving process is X_t=μ t+W_t, where {W_t}_t≥ 0 is a standard Brownian motion and μ<0. The problem can be reduced to a one variable problem. The conclusions are: * If there is a positive constant C such that C(1+e^2 μ C)(1-e^2 μ C)^-1+(q_d+q_u)μ + μ^-1 =0, ( C/ e^2 μ C-1 ) 2 e^2 μ C/μ + -2e^2 μ C+2C μ +1/ 2 μ^2 +q_d +q_u =0, then every point of the set { (a,a+C), a ∈ℝ} is an equilibrium point. * Otherwise there are no equilibrium points. The details can be found in the Appendix <ref> §.§.§ Ornstein Uhlenbeck process In this case, the process X_t follows the stochastic differential equation dX_t= -θ X_t dt+ σ dW_t, We analyze the symmetric case when q:=q_d=q_u. The existence of equilibrium points will hold, but uniqueness not necessarily. 
Essentially, the equation π_1(a,R(a,b))=π_2(a,R(a,b)) is satisfied when a=-b by symmetry, so similar arguments as the ones in the multiplicative case hold. However the line a+b=0 is not the only set where π_1(a,R(a,b))=π_2(a,R(a,b)) holds. § APPROXIMATION OF THE SYMMETRIC N-PLAYER MEAN FIELD GAME §.§ Introduction and notations In this section, we present an approximation result for ergodic mean-field games with N-players and the mean-filed game considered above, when the number of players N tends to infinity. More precisely, we establish that an equilibrium point of the mean-field game defined in (<ref>) is an ϵ-Nash equilibrium of the corresponding N-player game (see Def. <ref>). These approximation results are now a classical issue in mean field games, and have been studied for instance in <cit.> and <cit.> and the references therein. In order to formulate our result we need the following notations. (i) A filtered probability space (Ω, ℱ, {ℱ_t },P) that satisfies the usual conditions, where all the processes are defined. (ii) Adapted independent Brownian motions W,{W^i}_i=1,2,…, with the corresponding processes X,{X^i}_i=1,2,… each of one satisfy equation (<ref>) driven by the respective W or W^i. (iii) The set of admissible controls 𝒜 of Definition <ref>, that in particular assumes, given an admissible control (D^i,U^i), the existence of the controlled process as a solution of dX^(D^i,U^i)_t= μ(X^(D^i,U^i)_t)dt + σ (X^(D^i,U^i)_t)dW_t -dD^i_t+dU^i_t, X_0=x, for each i=1,2,… For simplicity and coherence we denote by X^i,(a,b) the solution to (<ref>) when the i-th player chooses reflecting strategies within a<b. As usual, we define the vector of admissible controls Λ=((D^1,U^1), … , (D^N,U^N)) such that (D^i,U^i) is an admissible control selected by the player i in a game of N players. Furthermore, we define Λ^-i:=((D^1,U^1), …, (D^i-1,U^i-1), (D^i+1,U^i+1), … , (D^N,U^N)) and V_N^i((D^i,U^i), Λ^-i)(x):= lim sup_T →∞1/T_x {∫_0^T c [X_s^(D^i,U^i),1/N-1∑_j ≠ i^N f(X_s^(D^j,U^j)) ] ds +q_u dU^i_s+q_d dD^i_s } for a cost function c(x,y) satisfying Assumption <ref> and a real function f(x), that we assume continuous in this section. For a fixed ϵ>0 and a fixed N ∈ℕ, a vector of admissible controls Λ=((D^1,U^1),… , (D^N,U^N)) is called an ϵ-Nash equilibrium if for every admissible control η=(D^i,U^i): V_N^i((D^i,U^i), Λ^-i)(x) ≤ϵ+ V_N( η, Λ^-i ) (x), ∀ i=1, … , N, ∀ x ∈ℝ. §.§ Approximation to the ergodic mean-field problem We are ready to prove that the equilibrium points of the mean-field game are ϵ-equilibrium points for the N-player game. It is interesting to notice that the proofs are quite simple due to the fact that the controlled process is bounded. Assume that there is a convex increasing function Fℝ→ℝ_+ and a continuous function gℝ→ℝ_+ such that the cost function c(x,y) satisfies | c(x,y_1)-c(x,y_2) |≤ F( | x |) | g( y_1) -g(y_2) |. Suppose that the point (D^b,U^a) is an equilibrium for the problem defined in (<ref>) and there is a set 𝒞⊂𝒜 such that the control (D^b,U^a) ∈𝒞 and the following two conditions hold: sup_η∈𝒞lim sup_T →∞1/T_x (∫_0^T F(X_s^η) ds ) < ∞ ∀ x ∈ℝ, inf_η∈ℬ V_N^i ( η, Λ^-i) (x)=inf_η∈𝒞 V_N^i ( η, Λ^-i) (x), ∀ N ∈ℕ, x ∈ℝ, where Λ:=((D^b,U^a),(D^b,U^a) … , (D^b,U^a)). Then, the vector of controls Λ is an ϵ_N-Nash equilibrium for the N-player game, with ϵ_N → 0 when N →∞. Note that for the values of the parameters λ>0, β>0 and α≥ 1, the cost functions (x,y) →{ -λ x, x }^α(1+ | y |^β), (x,y) →|β x - y | satisfy condition (<ref>) (see Appendix <ref>). 
First of all define the function V𝒞×{ (a,b); a<b }×→ℝ via V((D,U),(b,a))(x) = lim sup_T →∞1/T_x [∫_0^T c(X_s^D,U,_x(f(X^a,b_s))) ds +q_u dU_T+q_d dD_T ]. From now on, as x is fixed along the proof, we omit it. We only need to work with controls in the set 𝒞. For a fixed N define Λ:=((D^b,U^a),(D^b,U^a), … , (D^b,U^a)), consider η∈𝒞 and observe that V_N^i( Λ) = V_N^i(η,Λ^-i)+(V_N^i(Λ)-V(η,(b,a)))+(V(η,(b,a))-V_N^i(η,Λ^-i) ) ≤ V_N^i(η,Λ^-i)+(V_N^i(Λ)-V((D^b,U^a),(b,a)))+(V(η,(b,a))-V_N^i(η,Λ^-i) ). Therefore it is enough to prove that sup_η∈𝒞| V_N^i(η, Λ^-i)-V(η,(b,a))|→ 0, when N →∞, as the other term | V_N^i (Λ)- V((D^b,U^a),(b,a)) | is a particular case of (<ref>) when η=(D^b,U^a). We proceed to prove (<ref>): lim sup_T →∞1/T_x {∫_0^T [ c ( X_s^η,( 1/N-1∑_j ≠ i^N f( X_s^j,(a,b)) ) ) -c(X_s^η,_x (f(X_s^a,b)) ) ] ds } ≤lim sup_ T →∞_x {∫_0^T [ 1/T F( X_s^i,η )| g ( _x( f(X_s^a,b)) ) -g ( 1/N-1∑_j ≠ i^N f(X_s^j,(a,b)) ) |] ds } =lim sup_ T →∞∫_0^T {1/T_x [F( X_s^i,η )| g ( _x (f(X_s^a,b))) -g ( 1/N-1∑_j ≠ i^N f(X_s^j,(a,b)) ) |] ds } . Define K:= max_x ∈ [a,b]g(x) and M:= sup_s ∈ℝ^+, x ∈ℝ{√(var_x(f(X_s^(a,b))))} (var_x is the variance of the process started at x), let ϵ>0 be arbitrary, δ>0 such that | g(y_1)-g(y_2) | <ϵ if | y_1-y_2|< δ M for all y_1,y_2 ∈ [a,b] and N ∈ℕ. Invoking Chebyshev's inequality: P (|_x(f(X_s^a,b))- 1/N-1∑_j ≠ i^N f(X_s^j,(a,b)) |≥ M δ) ≤1/δ^2 (N-1) . Therefore the term (<ref>) is smaller or equal than lim sup_ T →∞( K/δ^2 (N-1) + ϵ) ( ∫_0^T 1/T_x ( F( X_s^i,η ) )ds ). The integral is uniformly bounded by assumption (<ref>), concluding that the limit is zero because ϵ is arbitrary. §.§ Statements and Declarations The second and third authors are supported by CSIC - Proyecto grupos nr. 22620220100043UD, Universidad de la República, Uruguay. § APPENDIX §.§ Approximation for the cost in examples First of all notice that | | x -y_1 |- | x -y_2 | |≤| y_1-y_2 | Therefore for the cost (x,y) →| x -y | we take F≡ 1 and conclude that the function is trivially on the hypothesis of Theorem <ref>. For the multiplicative case suppose there is an equilibrium at (a,b) for the mean-field game for a cost function c(x,y)=f(x)g(y) with both f,g non negative, continuous, convex with minimum at zero with g(0)=1. We take F≡ f. Consider Λ as in (<ref>) (it depends on N) and assume there is a sequence { N_K }⊂ℕ and a sequence {ϵ_K } , ϵ_K → 0, when K →∞ such that if a control η_K ∈ℬ satisfies inf_η∈ℬ V_N_K^i (η, Λ^-i)+ ϵ_K > V_N_K^i (η_K ,Λ^-i) then lim sup_T →∞1/T_x ( ∫_0^T f(X_s^η,i ) ds ) >K. With that control we get V_N_K^i (η_r ,Λ^-i) = lim sup_T →∞1/T_x ( ∫_0^T f(X_s^η_K)g (1/N-1∑_j ≠ i^N X_s^i,(a,b)) +η_1,K(s) q_d +η_2,K(s) q_u ds ) >K_r . Therefore the set {inf_η∈ℬV_N^i (η ,Λ^-i) , N ∈ℕ} is not bounded above. This is absurd because g restricted in [a,b] is bounded and is the only term that depends on N. §.§ Calculations of the examples §.§.§ Absolute value, Brownian motion with negative drift In this case (see <cit.>), S'(x)=exp (-2μ x), m'(x)=2e^2μ x. Therefore R(a,b) =∫_a^b 2ue^2μ u du/∫_a^b 2e^2μ u du = b e^2 μ b-ae^2 μ a/ e^2 μ b-e^2 μ a -1/2μ . The cost function is c(x,y)= | x - y | . We proceed to analyze the function ρ. The notations are the same as Proposition <ref>. The equation π_2(a,R(a,b))=π_1(b,R(a,b)) is equivalent to F(a,b)=(b+ a) +1/μ -2 ( be^2μ b-ae^2μ a/e^2μ b-e^2μ a) +μ (q_u+q_d)=0. 
On one hand, when a<0 the equation F(a,b)=0 has a solution b>0 because a+ μ^-1+μ(q_d+q_u)<0 On the other hand when a ≥ 0, the equation F(a,b)=0 also has a root because b-2R(a,b) → -∞ when b →∞. We compute the partial derivative ∂ F/∂ b (a,b)= -1 (e^2μ (b-a)) ( 2μ (a-b)-1+e^2 μ (b-a))/ (1-e^2μ (b-a))^2 > 0 . We deduce that the function ρ is well defined in all ℝ and the roots of F(a,b) are unique for each a. Furthermore if C is the positive constant that satisfies the equality C(1+e^2 μ C)(1-e^2 μ C)^-1=-(q_d+q_u) μ - μ^-1, then F(a,a+C)=0. So ρ(a)= a+C. From Theorem <ref> we know the equilibrium points (a,b) must satisfy the equality: ∫_a^b (| t- R(a,b) | -b + R(a,b) )2e^2μ tdt + (q_d+q_u)e^2 μ a=0 . More explicity: ∫_a^b(| t+1/2μ - be^2μ b-ae^2 μ a/ e^2 μ b -e^2μ a| -b-1/2 μ + be^2μ b-ae^2 μ a/ e^2 μ b -e^2μ a)2e^2μ tdt + (q_d+q_u)e^2 μ a=0 With the change of variable u=t-b the equality (<ref>) is equivalent to: ∫_a-b^0(| u- (b-a)/ e^2μ(b-a)-1 +1/2μ| + (b-a) / e^2 μ (b-a) -1 -1/2μ) 2e^2μ (u+b-a)du e^2 μ a + (q_d+q_u)e^2 μ a=0 . Therefore if there is a point (A,B) that satisfies (<ref>) then every point (a,b) such that b-a=B-A also satisfies (<ref>). To solve the integral define C b-a and K C (exp(2 μ C)-1)^-1-(2μ)^-1 so the integral in (<ref>) becomes ∫_-C^0 ( | u-K| + K) 2e^2 μ (C+u)du= 2e^2 μ K∫_-C-K^-K| r | e^2 μ (C+r)dr + Ke^2 μ C1-e^-2 μ C /μ= 2/4 μ^2e^2 μ (K+C)( e^2 μ r(2μ r -1) |^0_-C-K-e^2 μ r(2 μ r -1)|^-K_0 ) + Ke^2 μ C1-e^-2 μ C /μ= 1/2 μ^2e^2 μ C(-e^-2 μ C (2μ (-C-K)-1) -(-2 μ K -1)) + Ke^2 μ C1-e^-2 μ C /μ= e^2 μ C( 4 μ K +1 / 2 μ^2 )+2C μ +1/2 μ^2 = ( C/ e^2 μ C-1 ) 2 e^2 μ C/μ + -2e^2 μ C+2C μ +1/ 2 μ^2. Solving the integral in (<ref>) we conclude that a point (a,b) is an equilibrium point iff C b-a satisfies C(1+e^2 μ C)(1-e^2 μ C)^-1+(q_d+q_u)μ + μ^-1=0 ( C/ e^2 μ C-1 ) 2 e^2 μ C/μ + -2e^2 μ C+2C μ +1/ 2 μ^2 +q_d +q_u=0 Using <ref> it can be shown that the value is be the same for all equilibrium points and it is -C exp(-2 μ C)(1-exp(-2 μ C))^-1-(2μ)^-1+q_d μ §.§.§ Multiplicative cost Proposition <ref> is used with A=0. We assume λ≥ 1 (in other cases symmetrical arguments can be used), c(x,y)= max(-λ x ,x ) (1+ | y |^β), q:= q_d=q_u λ^-1, β > 0 and q_d θ < 1 ∧λ^-1 so the function c is a cost function. The equality π_1(b,R(a,b))=π_2(a,R(a,b)) reads as -λ a =b . Furthermore if σ is an even function we deduce R(a,-λ a) is decreasing in a so: π_2 (t,L(a))- π_2(a,L(a))= (a-t)(1+R(a,-λ a)^β-q_d λθ) which decreases to -∞ in a. π_1 (t,L(a))- π_1(-λ a,L(a))= (t- λ a)(1+R(a,-λ a)^β-q_u θ) which decrease to -∞ implying that uniqueness and existence holds. In the case 1 ∧λ^-1≥θ q_d the function c(x,y) is not a cost function so existence and uniqueness is not guaranteed. [Alvarez(2018)]Alvarez Alvarez, L.H.R.: A Class of Solvable Stationary Singular Stochastic Control Problems. <https://doi.org/10.48550/arXiv.1803.03464> (2018) [Alvarez and Shepp(1998)]AS Alvarez, L. H. R., Shepp, L. A.: Optimal harvesting of stochastically fluctuating populations. Journal of Mathematical Biology 37, 155–177 (1998) [Asmussen and Taksar(1997)]AT Asmussen, S., Taksar, M.: Controlled diffusion models for optimal dividend pay-out. Insurance: Mathematics and Economics 20, 1–15 (1998) [Borodin and Salminen(2002)]BS Borodin, A., Salminen, P.: Handbook on Brownian motion - facts and formulae, 2nd ed. (2nd printing). Birkhäuser, Basel (2002) [Haoyang et al.(2021)]HDF Cao, H., Dianetti, J., Ferrari, G.: Stationary Discounted and Ergodic Mean Field Games of Singular Control. 
<https://doi.org/10.48550/arXiv.2105.07213> (2021) [Cao and Guo(2022)]CG Cao, H., Guo, X.: MFGs for partially reversible investment. Stochastic Processes and their Applications 150, 995–1014 (2002) [Carmona(2020)]Carmona Carmona, R.: Applications of Mean Field Games in Financial Engineering and Economic Theory. <https://doi.org/10.48550/arXiv.2012.05237> (2020) [Festa and Göttlich(2018)]ADRIANOFIESTA! Festa, A., Göttlich, S.: A Mean Field Game approach for multi-lane traffic management. IFAC-PapersOnLine 51(32), 793–798 (2018) [Hening, Nguyen, Ungureanu and Wong(2017)]HNUW Hening, A., Nguyen, D. H., Ungureanu, S. C., and Wong, T. K.: Asymptotic harvesting of populations in random environments. <https://doi.org/10.48550/arXiv.1710.01221> (2017) [Højgaard and Taksar(2001)]HT Højgaard, B., Taksar, M.: Optimal risk control for a large corporation in the presence of returns on investments. Finance and Stochastics 5, 527–547 (2001) [Huang(2013)]M.H(2013) Huang, M.: A Mean Field Capital Accumulation Game with HARA Utility. Dynamic Games and Applications 3, 446–472 (2013) [Huang et al.(2004)]M.C.R(2004) Huang, M., Caines, P. E., Malhame, R. P.: Large-population cost-coupled LQG problems: generalizations to non-uniform individuals. 2004 43rd IEEE Conference on Decision and Control, 3453–3458 (2004) [Jeanblanc-Picqué and Shiryaev(1995)]JS Jeanblanc-Picqué, M. and Shiryaev, A. N.: Optimization of the flow of dividends. Russian Math. Surveys 50, 257–277 (1995) [Kunwai et al.(2022)]KKXYZ Kunwai, K., Xi, F., Yin, G., Zhu, C.: On an Ergodic Two-Sided Singular Control Problem. Applied Mathematics and Optimization 86, 26 (2022) [Lande and Engen and Sæther(1994)]LESE Lande, R., Engen S., Sæther B. E.: Optimal harvesting, economic discounting and extinction risk in fluctuating populations. Nature, 372, 88–90 (1994) [Paulsen(2008)]P Paulsen, J.: Optimal dividend payments and reinvestments of diffusion processes with fixed and proportional costs. SIAM Journal on Control and Optimization 47, 2201–2226 (2008) [Peura and Keppo(2006)]BK Peura, S., Keppo, J. S.: Optimal Bank Capital with Costly Recapitalization. Journal of Business 79, 2163–2201 (2006) [Protter(2005)]ProtterPE Protter, P. E.: Stochastic integration and differential equations, 2nd. edition. Springer, Berlin Heildeberg (2005) [Rogers and Williams(2000)]RW Rogers, L.C.G., Williams, D.: Diffusions, Markov processes and Martingales, Volume 2: Itô Calculus, 2nd edition. Cambridge University Press, Cambridge (2000) [Saisho(1987)]saisho Saisho, Y.: Stochastic differential equations for multidimensional domain with reflecting boundary. Probab. Theory Related Fields 74 (1987), no. 3. [Shreve et al.(1984)]SLG Shreve, S., Lehoczky, J., Gaver, D.: Optimal consumption for general diffusion with absorbing and reflecting barriers. SIAM Journal on Control and Optimization 22, 55–75 (1984)
http://arxiv.org/abs/2306.06280v1
20230609221110
Matrices for finite group representations that respect Galois automorphisms
[ "David J. Benson" ]
math.RT
[ "math.RT", "20C15" ]
We are given a finite group H, an automorphism τ of H of order r, a Galois extension L/K of fields of characteristic zero with cyclic Galois group ⟨σ⟩ of order r, and an absolutely irreducible representation ρ: H → GL(n,L) such that the action of τ on the character of ρ is the same as the action of σ. Then the following are equivalent.
∙ ρ is equivalent to a representation ρ': H → GL(n,L) such that the action of σ on the entries of the matrices corresponds to the action of τ on H, and
∙ the induced representation Ind_{H,H⋊⟨τ⟩}(ρ) has Schur index one; that is, it is similar to a representation over K.
As examples, we discuss a three dimensional irreducible representation of A_5 over ℚ[√(5)] and a four dimensional irreducible representation of the double cover of A_7 over ℚ[√(-7)].

Matrices for finite group representations that respect Galois automorphisms

David J. Benson
=============================================================

§ INTRODUCTION

This paper begins with the following question, suggested to the author by Richard Parker. The alternating group A_5 has a three dimensional representation over the field ℚ[√(5)] which induces up to the symmetric group S_5 to give a six dimensional irreducible that can be written over ℚ. Given an involution in S_5 that is not in A_5, is it possible to write down a 3× 3 matrix representation of A_5 such that the Galois automorphism of ℚ[√(5)] acts on matrices in the same way as the involution acts on A_5 by conjugation?

More generally, we are given a finite group H, an automorphism τ of order r, a Galois extension L/K of fields of characteristic zero with cyclic Galois group Gal(L/K)=⟨σ⟩ of order [L:K]=r, and an absolutely irreducible representation ρ: H → GL(n,L). We assume that the action of τ on the character of the representation ρ is the same as the action of σ. Then the question is whether it is possible to conjugate to a representation ρ': H → GL(n,L) with the property that the Galois automorphism σ acts on matrices in the same way as τ acts on H. In other words, we are asking whether the following diagram can be made to commute.

\begin{CD}
H @>{\rho'}>> GL(n,L)\\
@V{\tau}VV @VV{\sigma}V\\
H @>{\rho'}>> GL(n,L)
\end{CD}

We answer this using the invariant λ(ρ) in the relative Brauer group Br(L/K)=H^2(⟨σ⟩,L^×)≅ K^×/N_L/K(L^×) that defines the division algebra associated to the representation obtained by inducing to the semidirect product.

Let ρ: H → GL(n,L) be as above. Then there is an invariant λ(ρ)∈ K^×/N_L/K(L^×) such that the following are equivalent.
(1) λ(ρ)=1,
(2) there is a conjugate ρ' of ρ making the diagram above commute,
(3) if G is the semidirect product H⋊⟨τ⟩ then the induced representation Ind_{H,G}(ρ) has Schur index equal to one; in other words, it can be written over K.
More generally, the order of λ(ρ) in K^×/N_L/K(L^×) is equal to the Schur index of the induced representation, and the associated division algebra is the one determined by λ(ρ).

The equivalence of (1) and (2) is proved in Section <ref>. The equivalence of (1) and (3) is more standard, see for example Turull <cit.>, and is proved in Section <ref>. Combining these gives the more interesting statement of the equivalence of (2) and (3). We end with some examples. In the case of the three dimensional representations of A_5, we have λ(ρ)=1, and we write down explicit matrices for ρ', though they're not very pleasant. In the case of the four dimensional irreducible representations of 2A_7, we have λ(ρ)=-2, which is not a norm from ℚ[√(-7)], and the division ring associated to the induced representation is the quaternion algebra with symbol (-2,-7)_ℚ.

Acknowledgement.
I would like to thank Richard Parker for suggesting this problem and the examples, and Alexandre Turull for some helpful comments. § THE MATRIX X Consider the composite σ∘ρ∘τ^-1: H H (n,L) (n,L). This representation is equivalent to ρ, and so there exists a matrix X, well defined up to scalars in L^×, such that conjugation by X takes ρ to σ∘ρ∘τ^-1. Write c_X for conjugation by X, so that c_X(A)=XAX^-1. Then we have σ∘ρ∘τ^-1 = c_X ∘ρ. By abuse of notation, we shall also write σ for the automorphism of (n,L) given by applying σ to each of its entries. Then c_σ(X)(σ(A))=σ(X)σ(A)σ(X)^-1=σ(XAX^-1), so we have c_σ(X)∘σ = σ∘ c_X. So equation (<ref>) gives σ^2∘ρ∘τ^-2 =σ∘ c_X∘ρ∘τ^-1 =c_σ(X)∘σ∘ρ∘τ^-1 =c_σ(X)∘ c_X ∘ρ =c_σ(X).X∘ρ. Continuing this way, for any i>0 we have σ^i∘ρ∘τ^-i = c_σ^i-1(X)⋯σ(X).X∘ρ. Taking i=r, we have σ^r=1 and τ^r=1, so ρ=c_σ^r-1(X)⋯σ(X).X∘ρ. If A is an n× n matrix over L, we define the norm of A to be N_L/K(A)=σ^r-1(A)⋯σ(A).A as an n× n matrix over K. Equation (<ref>) now reads ρ=c_N_L/K(X)∘ρ. By Schur's lemma, it follows that the matrix N_L/K(X) is a scalar multiple of the identity, N_L/K(X)=λ I. Applying σ and rotating the terms on the left, we see that λ=σ(λ), so that λ∈ K^×. If we replace X by a scalar multiple μ X, then the scalar λ gets multiplied by σ^r-1(μ)⋯σ(μ)μ, which is the norm N_L/K(μ). Thus the scalar λ is well defined only up to norms of elements in L^×. We define it to be the λ-invariant of ρ: λ(ρ)∈ K^×/N_L/K(L^×). Thus λ(ρ)=1 if and only if X can be replaced by a multiple of X to make N_L/K(X)=I. § THE MATRIX Y The goal is to find a matrix Y conjugating ρ to a representation ρ' such that σ∘ρ'∘τ^-1=ρ'. Thus we wish Y to satisfy σ∘ c_Y∘ρ∘τ^-1=c_Y∘ρ. We rewrite this in stages: c_σ(Y)∘σ∘ρ∘τ^-1 =c_Y∘ρ σ∘ρ∘τ^-1 =c_σ(Y)^-1∘ c_Y∘ρ c_X∘ρ =c_σ(Y)^-1Y∘ρ. Again applying Schur's lemma, σ(Y)^-1Y is then forced to be a multiple of X. Since N_L/K(σ(Y)^-1Y)=I, it follows that if there is such a Y then λ(ρ) is the identity element of K^×/N_L/K(L^×). This proves one direction of Theorem <ref>. The other direction is now an immediate consequence of the version of Hilbert's Theorem 90 given in Chapter X, Proposition 3 of Serre <cit.>: Let L/K be a finite Galois extension with Galois group (L/K). Then H^1((L/K),(n,L))=0. Let L/K be a Galois extension with cyclic Galois group (L/K)=⟨σ⟩ of order r. If a matrix X∈(n,L) satisfies N_L/K(X)=I then there is a matrix Y such that σ(Y)^-1Y=X. This is the case of a cyclic Galois group of Theorem <ref>. This completes the proof of the equivalence of (1) and (2) in Theorem <ref>. § THE INDUCED REPRESENTATION Let G=H⋊⟨τ⟩, so that for h∈ H we have τ(h)=τ hτ^-1 in G. Then the induced representation _H^G(ρ) is an LG-module with character values in K, but cannot necessarily be written as an extension to L of a KG-module. So we restrict the coefficients to K and examine the endomorphism ring. _KG(_H,G(ρ|_K)) has dimension r^2 over K. The representation ρ|_K is an irreducible KH-module, whose extension to L decomposes as the sum of the Galois conjugates of ρ, so _KH(ρ|_K) is r dimensional over K. For the induced representation _H,G(ρ|_K)=_H,G(ρ)|_K, as vector spaces we then have _KG(_H,G(ρ|_K))≅_KH(ρ|_K,_G,H_H,G(ρ|_K))≅ r._KH(ρ|_K). The algebra _KG(_H,G(ρ|_K)) is a crossed product algebra, central simple over K, with generators m_λ for λ∈ L and an element ξ, satisfying m_λ + m_λ'=m_λ+λ', m_λ m_λ'=m_λλ', m_λ∘ξ=ξ∘ m_σ(λ), ξ^r=m_λ(ρ). We can write the representation _H,G(ρ|_K) in terms of matrices as follows. 
g ↦[ ρ(g)|_K; σρτ^-1(g)|_K; ⋱; σ^-1ρτ(g)|_K ], τ↦[ I; I; ⋱; I ]∘σ It is easy to check that the following are endomorphisms of this representation. m_λ=[ λ I; σ(λ) I; ⋱; σ^-1(λ) I ], ξ =[ σ^-1(X); X; σ(X); ⋱; σ^-2(X) ] with λ∈ L and X as in Section <ref>. Since these generate an algebra of dimension r^2 over K, by Lemma <ref> they generate the algebra _KG(_H,G(ρ|_K)). The given relations are easy to check, and present an algebra which is easy to see has dimension at most r^2, and therefore no further relations are necessary. The Schur index of the induced representation _H,G(ρ) is equal to the order of λ(ρ) as an element of K^×/N_L/K(L^×). In particular, the Schur index is one if and only if λ(ρ)=1 as an element of K^×/N_L/K(L^×). This follows from the structure of the central simple algebra _KG(ρ|_K) given in Proposition <ref>, using the theory of cyclic crossed product algebras, as developed for example in Section 15.1 of Pierce <cit.>, particularly Proposition b of that section. This completes the proof of the equivalence of (1) and (3) in Theorem <ref>. In particular, it shows that λ(ρ) can only involve primes dividing the order of G. § EXAMPLES Our first example is a three dimensional representation of A_5. There are two algebraically conjugate three dimensional irreducible representations of A_5 over [√(5)] swapped by an outer automorphism of A_5, and giving a six dimensional representation of the symmetric group S_5 over . Setting α=1+√(5)/2, α̅=1-√(5)/2, we can write the action of the generators on one of these three dimensional representations as follows. (12)(34)↦[ -1 0 0; 0 0 1; 0 1 0 ] (153)↦[ -1 1 α; α 0 -α; -α 0 1 ] Taking this for ρ, we find a matrix X conjugating this to σ∘ρ∘τ^-1 where σ is the field automorphism and τ is conjugation by (12). Using the fact that if a=(12)(34) and b=(153) then ab^2abab^2=(253), we find that X=[ 1 -α̅ α̅; -α̅ 1 -α̅; α̅ -α̅ 1 ] We compute that σ(X).X is minus the identity. Now -1 is in the image of N_[√(5)],, namely we have (2-√(5))(2+√(5))=-1. So we replace X by (2-√(5))X to achieve σ(X).X=I. Having done this, by Hilbert 90 there exists Y with σ(Y)^-1.Y=X. Such a Y conjugates ρ to the desired form. For example we can take Y = [ 1-2√(5) 3-2√(5) -3+2√(5); 3-2√(5) 1-2√(5) 3-2√(5); -3+2√(5) 3-2√(5) 1-2√(5) ]. Thus we end up with the representation (12)(34) ↦[ -1 0 0; 0 0 1; 0 1 0 ] (153)↦1/40[ 10-4√(5) -5+19√(5) 25-9√(5); -10-4√(5) 25+9√(5) -5-19√(5); -50 35-5√(5) -35-5√(5) ] (253)↦1/40[ 10+4√(5) -5-19√(5) 25+9√(5); -10+4√(5) 25-9√(5) -5+19√(5); -50 35+5√(5) -35+5√(5) ]. Denoting these matrices by a, b and c, it is routine to check that a^2=b^3=(ab)^5=1, a^2=c^3=(ac)^5=1, and c=σ(b)=ab^2abab^2. More generally, if H is an alternating group A_n and G is the corresponding symmetric group S_n then all irreducible representations of G are rational and so the invariant λ(ρ) is equal to one for any irreducible character of H that is not rational. So an appropriate matrix Y may always be found in this case. Our second example is one with λ(ρ) 1. Let H be the group 2A_7, namely a non-trivial central extension of A_7 by a cyclic group of order two. Let τ be an automorphism of H of order two, lifting the action of a transposition in S_7 on H, and let G be the semidirect product H⋊⟨τ⟩. Then H has two Galois conjugate irreducible representations of dimension four over [√(-7)]. Let ρ be one of them. The induced representation is eight dimensional over [√(-7)]. 
Restricting coefficients to ℚ produces a 16 dimensional rational representation whose endomorphism algebra E is a quaternion algebra. Thus the induced representation can be written as a four dimensional representation over E^op≅ E. This endomorphism algebra was computed by Turull <cit.> in general for the double covers of symmetric groups. In this case, by Corollary 5.7 of that paper, the algebra E is generated over ℚ by elements u and v satisfying u^2=-2, v^2=-7 and uv=-vu. Thus the invariant λ(ρ) is equal to -2 as an element of ℚ^×/N_ℚ[√(-7)]/ℚ(ℚ[√(-7)]^×).
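The identities asserted for these explicit matrices are easy to verify by machine. The following SymPy sketch is an illustrative check, not part of the original argument; it transcribes the matrices a and b from the A_5 example above, obtains c as the Galois conjugate of b (replacing √5 by -√5), and reports whether each claimed relation holds.

import sympy as sp

s5 = sp.sqrt(5)
a = sp.Matrix([[-1, 0, 0], [0, 0, 1], [0, 1, 0]])
b = sp.Rational(1, 40) * sp.Matrix([
    [10 - 4*s5,  -5 + 19*s5,  25 - 9*s5],
    [-10 - 4*s5, 25 + 9*s5,   -5 - 19*s5],
    [-50,        35 - 5*s5,  -35 - 5*s5],
])
c = b.subs(s5, -s5)   # the Galois conjugate sigma(b)
I = sp.eye(3)

# Each line should print True if the relations claimed in the example hold.
print("a^2 = I:            ", (a**2 - I).expand() == sp.zeros(3, 3))
print("b^3 = I:            ", (b**3 - I).expand() == sp.zeros(3, 3))
print("(ab)^5 = I:         ", ((a*b)**5 - I).expand() == sp.zeros(3, 3))
print("c^3 = I:            ", (c**3 - I).expand() == sp.zeros(3, 3))
print("(ac)^5 = I:         ", ((a*c)**5 - I).expand() == sp.zeros(3, 3))
print("c = a b^2 a b a b^2:", (a*b**2*a*b*a*b**2 - c).expand() == sp.zeros(3, 3))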
http://arxiv.org/abs/2306.08261v1
20230614055512
Strong regulatory graphs
[ "Patric Gustafsson", "Ion Petre" ]
cs.DM
[ "cs.DM", "q-bio.MN", "q-bio.QM" ]
Strong regulatory graphs Patric GustafssonThis work was initiated as part of Patric Gustafsson's master thesis project at Åbo Akademi University (ÅA) in 2019. Patric is no longer affiliated with ÅA. Department of Information Technologies, Åbo Akademi University, Finland [email protected] Ion Petre Department of Mathematics and Statistics, University of Turku, Finland National Institute for Research and Development for Biological Sciences, Romania [email protected] July 31, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================== P. Gustafsson, I. PetreStrong regulatory graphs Logical modeling is a powerful tool in biology, offering a system-level understanding of the complex interactions that govern biological processes. A gap that hinders the scalability of logical models is the need to specify the update function of every vertex in the network depending on the status of its predecessors. To address this, we introduce in this paper the concept of strong regulation, where a vertex is only updated to active/inactive if all its predecessors agree in their influences; otherwise, it is set to ambiguous. We explore the interplay between active, inactive, and ambiguous influences in a network. We discuss the existence of phenotype attractors in such networks, where the status of some of the variables is fixed to active/inactive, while the others can have an arbitrary status, including ambiguous. Biomodeling, interaction networks, regulatory graphs, Boolean networks, phenotype attractors. § INTRODUCTION Logical modeling is a well established mathematical approach to biology, offering a simple, intuitive way to understand complex systems. Biological networks can be represented as graphs and experimental data integrated in them to yield a system-level view of key regulators, dynamical patterns, and response to various changes. There are many successful applications of logical modeling such as the identification of drug targets and treatments <cit.>, modeling of regulatory networks <cit.>, of signaling pathways <cit.>, of cell growth and apoptosis <cit.>, applications in immunology <cit.>, and many others. A logical model is discrete (often Boolean), with its dynamics defined as the result of the interplay of the influences among its variables. Such influences are modeled as directed edges between the variables, sometimes with the annotation of the activation/inhibition nature of the interaction, see <cit.> for a review on Boolean modeling in systems biology. Constructing large-scale models has been made possible in recent years by the availability of data respositories such as KEGG (<cit.>), OmniPath (<cit.>), InnateDB (<cit.>), SIGNOR (<cit.>) and DrugBank (<cit.>). Using such resources has made it possible to build models consisting of thousands of variables and interactions <cit.>. Integrating such large scale data into logical or Boolean models is problematic because of the need to identify the update functions for each variable of the model. These functions specify the precise conditions under which a variable changes its status, e.g. 
between active and inactive, depending on the status of its predecessors/regulators in the network. The difficulty is that the data is almost never enough to identify the precise nature of these functions. Instead, the modeler often postulates the type of update functions in the network (propositional formulas (<cit.>), threshold conditions (<cit.>), multi-valued functions (<cit.>), etc.) The specifics of how to choose each update function are based on previous literature and to some extent, to the choice of the modeler, subject to experimentation with various options. The approach we take in this paper is motivated by the concept of controlling a network: choose a user intervention in the network (e.g., through fixing some variables to some constants), so that the model converges to a certain desired attractor. Network controllability has found interesting applications in biology, e.g. in drug repurposing <cit.>. Tracking the effect of the user interventions throughout the network is however difficult when the variables of the model are under conflicting influences, with some regulators pushing for their activation, while others for their inhibition. We propose in this paper a notion of strong regulatory graphs where the update of a node's status is determined to be active/inactive only if its predecessors concur in their influences. Otherwise it is set to be ambiguous, meaning that it could be both active and inactive, depending on the precise (numerical) setup of its predecessors and of their influences, which is in practice very difficult to determine. This leads to an intricate interplay between ambiguous, negative and positive influences in our framework. * If the node is activated, then ambiguity among its inhibitors will make it ambiguous. * If at least one activator (inhibitor) is active, then there will be an activation (inhibition) push regardless of the status of the other activators (inhibitors). * A node will maintain its current status if its predecessors/regulators are inactive. * If the node is activated, then ambiguity among its activators will not affect it . * If the node is inactive, then ambiguity among its inhibitors will not affect it. Interesting questions about the spread of ambiguity in the network can be asked, e.g. in terms of the existence of phenotype attractors. We introduce in this paper a simple mathematical formalization of the concept of strong regulation. We discuss the phenotype attractor problem for strong regulatory graphs: whether attractors exist in which some of the variables are constant (they follow a given active/inactive phenotype), while the others can be arbitrary, even ambiguous. § STRONG REGULATORY GRAPHS (SRG) The model proposed here builds on the regulatory graphs structure <cit.> and adds to them the concept of strong regulation. Choosing how to update the status of a node is obvious if its active regulators concur in their influences: if they all exert an activation regulation on the node, then it gets activated (denoted as 1), and if all exert an inhibition regulation on it, then it gets inhibited (denoted as -1). A node under conflicting regulation, with some active regulators trying to activate it and others to inhibit it, is less clear how to update. The typical approach is to postulate an update function, Boolean or multi-valued, that sets its status to active or inactive, based on a specific schema of the status of its predecessors. 
The main difficulty of these approaches is that the experimental data and the level of detail in the model are rarely sufficient to specify such detailed update functions. Our proposal instead is that under such conflicting the status of the node should simply be recognized as being ambiguous (denoted as 0). This means that the node may potentially be active or inactive, but the level of detail in the model is not enough to specify it with confidence either way. The ambiguity of a node may well cascade down through the system, in that the regulation it exerts on its successor in the regulatory graph may be active or not. Still, an ambiguous regulation, may well be resolved by another active regulation on the same node. This opens the possibility to reason about the dynamic interplay between activation, inhibition, and ambiguity, where the semantics of activation and inhibition is in its strongest possible form to mean that the node has been unanimously activated/inhibited. A strong regulatory graph (SRG) is an edge-labeled graph G=(V, E) where V={v_1,v_2,…,v_n} is the finite set of vertices and E⊆ V× V is the set of directed edges. The labeling of the edges is done through the partition {E_+,E_-} of E. We say that edge (u,v)∈ E_+ is an activating edge and that u is an activator of v. We say that edge (u,v)∈ E_- is an inhibiting edge and that u is an inhibitor of v. A state of the SRG is a function α:V→{-1,0,1} whose intended meaning is that α(v)=-1 if v is inactive in state α, α(v)=1 if v is active, and α(v)=0 if v is ambiguous. If α(u)=1 (α(u)=0), we say that u is an activator (potential activator, resp.) of v in α if (u,v)∈ E_+ and that it is an inhibitor (potential inhibitor, resp) of v in α if (u,v)∈ E_-. The dynamics of a strong regulatory graph G=(V,E) is defined through a state-transition system. The state is given by the -1/0/1 activation status of all nodes in the graph. The set of states is thus {-1,0,1}^|V|, also represented as a function from V to {-1,0,1}. Given a state, a transition will indicate the change in the activation status of the nodes in the graph. Depending on whether we consider a synchronous update of the activation status (all nodes are simultaneously updated) or an asynchronous update (only one node is updated at a time), we obtain either a deterministic or a non-deterministic finite state transition system. For simplicity, we only discuss in this paper the synchronous activation update and the dynamics of strong regulatory graph in terms of deterministic state-transition systems. The asynchronous update strategy leads to a similar discussion. In defining the update rule for the status of a node, we follow proposal for the concept of strong regulation, where a node is activated/inhibited if all its regulators concur in their influences on the node. In case of conflicting influences, the status of the node is set to `ambiguous'. The influence of a regulator whose status is ambiguous is quite subtle, depending on the status of the other regulators and of the node itself. We discuss the intuition of the update rule below and then give it a formal definition. If there are no potentially active inhibitors and no potentially active activators, then the vertex preserves its activation status. 
A node v is set to be active after in two situations: * v was active and none of its inhibitors were potentially active, i.e., active or ambiguous (in this case, the node remains active, absent any potential or active inhibition regulation), or * at least one activator of v was active and none of its inhibitors were potentially active (in this case, the regulation on the node is un-ambiguous towards activation). Similarly, a node v becomes inactive after an application of the update rule in two situations: * v was inactive and none of its activators were potentially active, i.e., active or ambiguous, or * at least one inhibitor of v was active and none of its activators were potentially active. If the node is under conflicting influences, its status becomes ambiguous. Here are four cases when this happens: * at least one activator and at least one inhibitor of v were potentially active; this reflects a node under potentially conflicting regulation; * v was active and one of its inhibitors was ambiguous (regardless of the status of its activators); this reflects the ambiguity of whether there is an inhibitory influence on v or not, which may either leave the node active, or switch to inactive; * v was inactive and one of its activators was ambiguous (regardless of the status of its inhibitors); this reflects the ambiguity of whether there is an activating influence on v or not, which may either leave the node inactive, or switch it to active; * v was ambiguous and none of its regulators is active (i.e., they are either inactive or ambiguous); this reflects the situation where the status of v cannot be clarified because the status of its regulators is either inactive or ambiguous. For a vertex v∈ V we define its set _+(α, v) of activators and its set _-(α, v) of inhibitors in state α:V→{-1,0,1} as follows: _+(α, v) ={α(u)| (u,v)∈ E_+, α(u)∈{0,1}}, _-(α, v) ={α(u)| (u,v)∈ E_-, α(u)∈{0,1}}. Set _+(α, v) (_-(α, v)) contains 1 if v has at least one active activator (inhibitor, resp.) in state α and it contains 0 if v has at least one ambiguous activator (inhibitor, resp.) in state α. We also define the reflexive extension of to include the status of the vertex itself. The reflexive extension captures a sort of “inertia” of the dynamics (defined below) where a node may preserve its status in the absence of active regulators. _+(α, v) =_+(α, v)∪{α(v)|α(v)∈{0,1}}, _-(α, v) =_-(α, v)∪{-α(v)|α(v)∈{-1, 0}}, Note that an inactive node v, i.e. α(v)=-1, contributes 1 to _-(α, v), consistent with our suggestion of “inertia”: absent any active regulation on node v, it will remain remain inactive. Let G=(V,E) be a strong regulatory graph. Its dynamics is given through the state-transition system (S_G, T_G), where S_G={γ:V→{-1,0,1}}, and T_G⊆ S_G× S_G defined as follows: (α, β)∈ T_G (denoted also α→_Gβ) if β(v)= 1, if 1∈_+(α,v) and _-(α,v)=∅, -1, if 1∈_-(α,v) and _+(α,v)=∅, 0, otherwise. We discuss two regulatory graphs introduced in <cit.> and shown in Figure <ref>. The set of vertices is V={A,B,C}, the activation edges are shown with green pointed arrows and the inhibition edges with red blunt arrows. For the graph in Figure <ref>(a), consider the state α_1=(-1,1,1). Then * _+(α_1,A)={1} (since (C,A) is an activation edge and α_1(C)=1) and _-(α_1,A)={1} (since (B,A) is an inhibition edge and α_1(B)=1). Also, _+(α_1,A)={1} and _-(α_1,A)={1} (since α_1(A)=-1). 
* _+(α_1,B)=∅ (since the only incoming activation edge into B is (A,B) but α_1(A)=-1) and _-(α_1,B)=∅ (since there are no incoming inhibition edges into B). Also, _+(α_1,B)={1} and _-(α_1,B)=∅ (since α_1(B)=1). * _+(α_1,C)=_-(α_1,C)=∅ (since there are no incoming edges into C). Also, _+(α_1, C)={1} and _-(α_1,C)=∅ (since α_1(C)=1). This means that α_1→ (0,1,1). This corresponds well to our intuition for the meaning of the strong regulatory graphs: A is under conflicting regulation and it becomes set to ambiguous, B and C are under no active regulation and they preserve their status. Denote α_2=(0,1,1). Then * _+(α_2,A)={1} (since (C,A) is an activation edge and α_2(C)=1) and _-(α_1,A)={1} (since (B,A) is an inhibition edge and α_2(B)=1). Also, _+(α_2,A)={0,1} and _-(α_1,A)={0,1} (since α_2(A)=0). * _+(α_2,B)={0} (since (A,B) is an activation edge and α_2(A)=0) and _-(α_2,B)=∅ (since there are no incoming inhibition edges into B). Also, _+(α_2,B)={0,1} and _-(α_2,B)=∅ (since α_2(B)=1). * _+(α_2,C)=_-(α_2,C)=∅ (since there are no incoming edges into C). Also, _+(α_2, C)={1} and _-(α_2,C)=∅ (since α_2(C)=1). Consequently, α_2→α_2. The intuition for the update on A and C remains the same. For B, there is a potential activation regulation coming from the ambiguous status of its activator A, but since B is active in α_2, it will remain so in the next state. For the graph in Figure <ref>(b), the following are valid state transitions: * (-1,1,-1)→(1,1,-1)→(1,1,-1), * (1,-1,-1)→(1,1,-1), * (-1,-1,1)→(-1,-1,1) and * (1,-1,1)→(-1,1,1)→(0,1,1)→(0,1,1). The state transition graphs for these two examples are in Figure <ref>. The following observation shows that a vertex preserves its activation status, in the absence of potentially active predecessors. Let G=(V,E) be a strong regulatory graph, v∈ V in state α, with _+(α,v)=_-(α,v)=∅. If α→_G β, then β(v)=α(v). If α(v)=-1, then _-(α,v)={1} and so, β(v)=-1. If α(v)=1, then _+(α,v)={1} and so, β(v)=1. If α(v)=0, then _-(α,v)=_+(α,v)={0} and so, β(v)=0. We clarify now the conditions under which a vertex v is updated to an `ambiguous' state. Following Lemma <ref>, we only focus on the case when the vertex is under some (potential) regulators, i.e., when either _+(α,v)∅, or _-(α,v)∅, as otherwise v maintains its status unchanged. The theorem shows that our definition covers exactly the intuition offered just before Definition <ref>. * it simultaneously has both potential activators and potential inhibitors, or * it has no potential inhibitor, at least one of its activators is ambiguous and none is active, and the vertex is not currently active, or * it has no potential activator, at least one of its inhibitors is ambiguous and none is active, and the vertex is not currently inactive. Let G=(V,E) be an SRG and v∈ V in state α, with _+(α,v)∪_-(α,v)∅. If α→_G β, we have β(v)=0 if and only if * _+(α,v)∅ and _-(α,v)∅, or * _+(α,v)={0} and _-(α,v)=∅, or * _-(α,v)={0} and _+(α,v)=∅. β(v)=0 ⇔β(v) 1 β(v) -1 ⇔ (1∉_+(α,v) _-(α,v)∅) (1∉_-(α,v)_+(α,v)∅) ⇔ (1∉_+(α,v) 1∉_-(α,v)) (1∉_+(α,v) _+(α,v)∅) (_-(α,v)∅ 1∉_-(α,v)) (_-(α,v)∅_+(α,v)∅) Because of (<ref>) and the hypothesis that _+(α,v)∪_-(α,v)∅, we can consider in (<ref>), (<ref>) and in (<ref>) that one of the sets _+(α,v) and _-(α,v) is non-empty, while the other is empty. In (<ref>), we observe that 1∉_+(α,v) is equivalent to α(v) 1 1∉_+(α,v), which together with _+(α,v)∅ is equivalent with α(v) 1 _+(α,v)={0}. As we noted above, we can add to the conjunction also the term _-(α,v)=∅ since _+(α,v)∅. 
This gives us the clause α(v) 1 _+(α,v)={0}_-(α,v)=∅. Under the hypothesis of the theorem that _+(α,v)∪_-(α,v)∅, this clause is equivalent with 2'_+(α,v)={0}_-(α,v)=∅. Using a symmetric argument, we can conclude that (<ref>) can be replaced in the disjunction with 3'_-(α,v)={0}_+(α,v)=∅. Clause (<ref>) is equivalent with α(v)=0 1∉_+(α,v) 1∉_-(α,v). Since _+(α,v), _-(α,v)⊆{0,1} and, as noted above, we can assume that one is empty, while the other is not, it follows that clause (<ref>) can be replaced with (α(v)=0 _+(α,v)={0}_-(α,v)=∅) (α(v)=0 _-(α,v)={0}_+(α,v)=∅). This is equivalent with 1'(α(v)=0_+(α,v)={0}_-(α,v)=∅) (α(v)=0_-(α,v)={0}_+(α,v)=∅). These two conditions are absorbed within the disjunction under the clauses (<ref>) and (<ref>). This proves the claim of the lemma. The following result is a simple consequence of Definition <ref> and Theorem <ref> and it offers more insight into the dynamics of strong regulatory graphs. Let G=(V,E) be a strong regulatory graph, v∈ V and α, β two states of G. If α→_G β, then * β(v)=α(v), if α(v)=1 and _-(α,v)=∅, or α(v)=-1 and _+(α,v)=∅, or α=0 and 1∉_+(α,v)∪_-(α,v); * β(v)=1, if 1∈+-(α,v) and _-(α,v)=∅; * β(v)=-1, if 1∈_-(α,v) and _+(α,v)=∅; * β(v)=0, if _+(α,v)∅ and _-(α,v)∅. For any strong regulatory graph, there is a Boolean network with twice the number of variables, who state transition graphs are isomorphic. § PHENOTYPE ATTRACTORS We are interested in attractors that are defined through a fixed configuration (also called phenotype in <cit.>) on some (possibly not all) of their variables. In other words, we are interested in minimal cycles of the state transition systems, where some of the variables of the models are constant. This is similar to the concept of a target set in the partial controlability of complex networks <cit.>. The notion of phenotype was introduced for Boolean networks in <cit.> and discussed in connection with the control of Boolean networks in <cit.>. Let G=(V,E) be a strong regulatory graph. We define a trap set of its state transition graph in the usual way as a set S of vertices such that for any α→β, if α∈ S, then β∈ S. An attractor is defined as (non-empty) minimal trap set under set inclusion. A target T is a set of vertices T⊆ V and a T-phenotype is a function α_T:T→{-1,1} that gives an active/inactive status assignment of the vertices in T. An α_T-phenotype attractor A_α_T is an attractor of the state transition graph that has the phenotype α_T on T, i.e., for any β∈ A_α_T, β(t)=α_T(t), for all t∈ T. The phenotype problem is to decide whether for a given phenotype α, there exists an α-phenotype attractor. We prove that the problem can be solved in polynomial time. Even more, we give a simple characterization of phenotype attractors. We say that u is an inhibition-predecessor (activation-predecessor) of v if (u,v)∈ E_- ((u,v)∈ E_+, resp.) We say that u is an inhibition-ancestor (activation-ancestor) of v if there are vertices x_1, x_2,…, x_n, n≥ 1 with (x_i,x_i+1)∈ E_- ((x_i,x_i+1)∈ E_+, resp.) for all 1≤ i≤ n-1, x_1=u and x_n=v. Let T⊆ V be a target set and a phenotype α_T:T→{-1,1} on T. There exists an α_T-attractor if and only if the following two conditions hold: * for all v∈ T with α_T(v)=1 and for all u∈ T inhibition-predecessors of v, we have α_T(u)=-1; * for all v∈ T with α_T(v)=-1 and for all u∈ T activation-ancestors of v, we have α_T(u)=-1. Assume first that there is an α_T-attractor A_α_T. This means that for all α∈ A_α_T and for all t∈ T, α(t)=α_T(t). Consider v∈ T. 
If α_T(v)=1, then for all α∈ A_α_T, _-(α,v)=∅, i.e., its inhibition predecessors v∈ T have α_T(v)=-1. If α_T(v)=-1, then for all α∈ A_α_T, _+(α,v)=∅, i.e., its activation predecessors v have α_T(v)=-1. This argument can be iterated throughout all activation predecessors, yielding the second claim. Consider now a phenotype α_T satisfying properties (<ref>) and (<ref>). We can construct an α_T-attractor A_α_T in the following way. We visit the graph going against the edges, starting from the target set T, and marking them with -1 or 1 as we visit them. Let S be the set of nodes still to be explored. We start the exploration with S=T, whose marking is already set through α_T. Take all vertices in S marked with 1, consider all their inhibition predecessors, mark them with -1 and add them to S. Because of property (<ref>), none of them was marked with 1 and so, this marking will not contradict any previously set marking. Take all vertices in S marked with -1, consider all their activation predecessors and mark them with -1. Because of property (<ref>), none of them was marked with 1 and so, this marking will not contradict any previously set marking. Iterate through this step until set S stops growing. To construct an α_T-attractor, consider a state α_1:V→{-1,0,1} defined on S through the markings set above, and taking an arbitrary choice from {-1,0,1} for the vertices in V∖ S. We then consider the transitions α_1→α_2→α_3→⋯. Obviously, there will eventually be a repetition of states α_i=α_j, i<j in this sequence. The α_T-attractor is {α_i, α_i+1, …, α_j-1}. The following is a reformulation of Theorem <ref>. A vertex t is an active (inactive) target if t∈ T and α_T(t)=1 (α_T(t)=-1, resp.) An activation path is a path in a strong regulatory graph made of activation edges only. Let T⊆ V be a target set. For phenotype α_T:T→{-1,1} there is an α_T-attractor if and only if * there is no activation path from an active target to an inactive target and * there is no activation path plus a final inhibition edge between two active targets. § APPLICATION TO A REGULATORY CANCER NETWORK We discuss in this section a strong regulatory graph model of some of the key elements of the RTK (receptor tyrosine kinase) signaling through the MAPK- ((mitogen - activated protein kinase) and the PI3K/AKT- (lipid kinase phoshoinositide-3-kinase) pathways. Our goal is to demonstrate that the strong regulatory graphs are expressive enough to capture some interesting properties of a biological model. The MAPK signaling pathway communicates signals from outside the cell to the nucleus of the cell, it is involved in cell growth and proliferation, it is often mutated in cancer and a popular target of cancer treatments <cit.>. It interplays with the PI3K/AKT signaling pathway, a master regulator of the cell, whose activation contributes to the development of tumors and resistance to anticancer therapies <cit.>. We consider here a portion of the Boolean network model of <cit.>. The model is illustrated in Figure <ref>, with the key regulators shown as vertices, with green pointed arrows for the activation edges and red blunt arrows for the inhibition. The model includes FOXO_3, a protein known for its role in inducing cell death, often inhibited in tumors, and AKT, with a role in cell survival and often activated in tumors. Following <cit.>, the activation of RTK depends on the presence of growth factors, not included in the model, and so its status is set to be constant -1. 
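Before turning to the specific phenotype indicators, we note that the synchronous update rule defined above takes only a few lines of code. The sketch below is a minimal Python illustration, not an official implementation; it encodes the update rule and reproduces the transitions discussed for the graph of Figure <ref>(a), whose edges are the activations C→A and A→B and the inhibition of A by B.

E_plus = {("C", "A"), ("A", "B")}    # activating edges (u, v): C activates A, A activates B
E_minus = {("B", "A")}               # inhibiting edges (u, v): B inhibits A
V = ["A", "B", "C"]

def step(alpha):
    """One synchronous update alpha -> beta of the strong regulatory graph."""
    beta = {}
    for v in V:
        # potential regulators of v in state alpha (active = 1, ambiguous = 0)
        reg_p = {alpha[u] for (u, w) in E_plus if w == v and alpha[u] in (0, 1)}
        reg_m = {alpha[u] for (u, w) in E_minus if w == v and alpha[u] in (0, 1)}
        # reflexive extensions, encoding the "inertia" of the current status of v
        hat_p = reg_p | ({alpha[v]} if alpha[v] in (0, 1) else set())
        hat_m = reg_m | ({-alpha[v]} if alpha[v] in (-1, 0) else set())
        if 1 in hat_p and not reg_m:
            beta[v] = 1
        elif 1 in hat_m and not reg_p:
            beta[v] = -1
        else:
            beta[v] = 0              # conflicting or ambiguous regulation
    return beta

alpha1 = {"A": -1, "B": 1, "C": 1}
alpha2 = step(alpha1)
print(tuple(alpha2[v] for v in V))         # (0, 1, 1), as in the worked example
print(tuple(step(alpha2)[v] for v in V))   # (0, 1, 1): a fixed point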
We consider the (-1,1) status of (FOXO_3, AKT) as an indicator of uncontrolled proliferation, that of (1,-1) an indicator of non-proliferation, and that of (-1,-1) an indicator of moderate proliferation. (Because of the inactivation edge from AKT to FOXO_3, an (1,1)-attractor is not possible.) These are of course over-simplifications of a much more complex interplay of interactions. We show that the strong regulatory graph model has the same variety of outcomes as the Boolean network of <cit.>, while adopting the strong update rule proposed in this paper and the ambiguity status in case of conflicting regulation. In our discussion of the dynamics of the model, we write the variables in the order (RTK, RAS, PI3K, MAPK, PIP3, FOXO_3, AKT). The model has several phenotype attractors associated with all three possibilities of the (FOXO_3,AKT) phenotype, there are (-1,-1)-, (1,-1)-, and (-1,1)-attractors: * for s_1=(-1,-1,-1, 1,-1, -1, -1), s_1→ s_1; * for s_2=(-1,-1,-1,-1,-1, 1, -1), s_2→ s_2; * for s_3=(-1,1,1,1,1, -1, 1), s_3→ s_3. A mutation on PI3K, making it constant active, gives proliferation as the only stable set, consistent with the biological observations in <cit.> and with the behavior of the Boolean network model of <cit.>: for s_4=(-1,-1, 1,1,1, -1, 1), s_4→ s_4. Also consistent with the Boolean network model of <cit.> is the observation that setting PI3K to inactive (corresponding to the idea of targeting it with drug inhibitors), allows all proliferation outcomes to be possible: * (-1,-1, -1,-1,1, 1, -1)→ (-1,-1, -1,1,1, 1, 1)→ (-1,-1, -1,1,1, -1, 1)→(-1, -1, -1,1,1, -1, 1, with the final vertex begin a singleton attractor; * (-1,-1, -1,-1,-1, 1, -1) is a singleton attractor; * (-1,-1, -1,1,-1, 1, -1 )→ (-1,-1, -1,1,-1, -1, -1), with the final vertex begin a singleton attractor. § DISCUSSION Our main motivation in this article was to offer a solution to the gap in the current modeling of large network models. Current network models span between structural modeling approaches (well-supported by interaction data, scalable, but not dynamical) and detailed logical models (dynamical, but problematic to scale up because the data is typically not detailed enough to specify the update functions of each variable). We introduced the concept of strong regulatory graph, a form of logical network model where the active/inactive status of a vertex is set only if its regulators agree in their influences; otherwise, the status of the vertex is set to ambiguous. Ambiguity here means that the node may potentially be active or inactive, depending on the exact numerical setup of its predecessors, a level of detail that is left out of logical models. The ambiguity may have a cascading influence over the successors of the node, but it may also cancel out through the clarifications brought by other active vertices in the network. We discussed an update rule that defines the interplay between active, inactive, and ambiguous regulators. We also discussed the phenotype attractor problem and showed that it is easy to decide whether an attractor of a given phenotype exists in a strong regulatory graph. It is straightforward to see that for any strong regulatory graph there is a Boolean network that can simulate it. The state transition rule is indeed based on simple logical tests, and so easily implementable through Boolean functions. Also, the range of values {-1,0,1} can easily be implemented using two Boolean variables for each vertex of the strong regulatory graph. 
This observation connects strong regulatory graphs to the rich literature on the control of Boolean networks. We discussed only the synchronous version of strong regulatory graphs, where all vertices are updated simultaneously. Conceptually, the asynchronous version is similar, but it leads to a non-deterministic state-transition system whose properties should be interesting to characterize in a further study. Also left for future work is the controllability of strong regulatory graphs; its targeted version can easily be defined in terms of reaching a phenotype attractor from any state through some minimal interventions in the state transition system.
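As a companion to the sketch above, the phenotype existence test of the corollary in the previous section reduces to two reachability checks along activation edges. The code below is a minimal illustration under our reading of those conditions, not the authors' implementation.

def activation_reachable(src, E_plus):
    """Vertices reachable from src along one or more activation edges."""
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        for (a, b) in E_plus:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def phenotype_attractor_exists(phenotype, E_plus, E_minus):
    """phenotype: dict mapping target vertices to +1 (active) or -1 (inactive)."""
    active = {v for v, s in phenotype.items() if s == 1}
    inactive = {v for v, s in phenotype.items() if s == -1}
    for v in active:
        reach = activation_reachable(v, E_plus) | {v}
        if reach & inactive:          # activation path from an active to an inactive target
            return False
        if any(a in reach and b in active for (a, b) in E_minus):
            return False              # activation path plus inhibition edge onto an active target
    return True

# Example: on the Figure 1(a) graph, the phenotype "A active and B active" admits
# no attractor, because A activates B which in turn inhibits A.
print(phenotype_attractor_exists({"A": 1, "B": 1},
                                 E_plus={("C", "A"), ("A", "B")},
                                 E_minus={("B", "A")}))   # False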
http://arxiv.org/abs/2306.02922v1
20230605143541
Warm Jupiters Beyond the Tidal Synchronization Limit May Exhibit a Wide Range of Secondary Eclipse Depths
[ "Emily Rauscher", "Nicolas B. Cowan", "Rodrigo Luger" ]
astro-ph.EP
[ "astro-ph.EP" ]
Emily Rauscher [email protected] 0000-0003-3963-9672]Emily Rauscher Department of Astronomy, University of Michigan, 1085 S. University Ave., Ann Arbor, MI 48109, USA 0000-0001-6129-5699]Nicolas B. Cowan Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8, CAN McGill Space Institute, 3550 rue University, Montréal, QC, H3A 2A7, CAN 0000-0002-0296-3826]Rodrigo Luger Center for Computational Astrophysics, Flatiron Institute, New York, NY, USA With JWST we can now characterize the atmospheres of planets on longer orbital planets, but this moves us into a regime where we cannot assume that tidal forces from the star have eroded planets' obliquities and synchronized their rotation rates. These rotation vectors may be tracers of formation and evolution histories and also enable a range of atmospheric circulation states. Here we delineate the orbital space over which tidal synchronization and alignment assumptions may no longer apply and present three-dimensional atmospheric models of a hypothetical warm Jupiter over a range of rotation rates and obliquities. We simulate the secondary eclipses of this planet for different possible viewing orientations and times during its orbital, seasonal cycle. We find that the eclipse depth can be strongly influenced by rotation rate and obliquity through the timing of the eclipse relative to the planet's seasonal cycle, and advise caution in attempting to derive properties such as albedo or day-night transport from this measurement. We predict that if warm Jupiters beyond the tidal limit have intrinsic diversity in their rotation vectors, then it will manifest itself as dispersion in their secondary eclipse depths. We explore eclipse mapping as a way to uniquely constrain the rotation vector of warm Jupiters but find that the associated signals are likely at the edge of JWST performance. Nevertheless, as JWST begins to measure the secondary eclipses of longer orbital period planets, we should expect to observe the consequences of a wider range of rotation states and circulation patterns. § INTRODUCTION One of the exciting developments in exoplanet characterization, especially as we enter the new JWST era, is the expansion of feasible observational targets to include ones beyond the hot Jupiter population. By observing planets that are smaller and that orbit farther from their host stars, we can expand our understanding of planetary physics into new regimes, bridging the expanse toward Solar System conditions. In this paper we focus on the characterization of gaseous planets on longer orbital periods, namely the “warm" Jupiter population. We expect to encounter new chemical regimes with these slightly cooler planets, as different molecular species are expected to be dominant in chemical equilibrium; for example, methane should take over as the primary carbon-bearing molecule, although current observations suggest many warm gaseous planets may be depleted in this species <cit.>. In addition, these planets have been identified as important targets for understanding planets' global compositions, as their radii are not subject to anomalous inflation like many hot Jupiters <cit.>. The formation pathway for warm Jupiter planets is directly related to the question of hot Jupiter formation and currently a murky picture exists where different pieces of empirical evidence point toward disparate formation mechanisms, perhaps indicating that multiple pathways are at work <cit.>. 
The rotation state of warm Jupiters will be set by their formation and then subsequently modified by their orbital evolution. The rotation speed of gas giants is inherited from the angular momentum of the gas from which they form and thought to be regulated by magnetic interaction with the circumplanetary disk <cit.>. The initial rotational obliquity of a planet could be misaligned with its orbital axis through formation conditions <cit.> and excited (or damped) through various processes after formation <cit.>, including processes related to any migration the planet may undergo <cit.>. Empirical determination of the obliquities of planetary mass companions on wide orbits are beginning to constrain their formation and evolution <cit.>. As more bright warm Jupiter planets are identified and their properties characterized <cit.>, we can hope to gain a clearer understanding of planet formation overall. In particular, warm Jupiters may maintain non-zero rotational obliquities (i.e., axial tilts) and faster rotation rates than hot Jupiters, and this could inform their histories of formation and subsequent evolution. Meanwhile, non-zero obliquities can induce seasonal variations in the atmospheres of these planets, introducing a wider range of possible states, potentially confusing the interpretation of characterization measurements. Here we consider the linked questions of: 1) how the a priori unknown rotation state of a warm Jupiter could impact its atmospheric structure, with implications for its secondary eclipse and 2) whether its rotation rate and obliquity could be empirically constrained from atmospheric characterization measurements. This work uses a similar set-up as <cit.>, but expands to include both different obliquities and a range of rotation rates. We discuss estimates of tidal alignment and circularization, with a focus on warm Jupiters, in Section <ref>. In Section <ref> we present our three-dimensional atmospheric model of a hypothetical warm Jupiter, for various rotation rates and obliquities. We present simulated secondary eclipses from those models in Section <ref> and consider whether we could retrieve spatial information from these by using JWST eclipse mapping in Section <ref>. We discuss some caveats to this work in Section <ref> and summarize our findings in Section <ref>. § TIDAL SYNCHRONIZATION AND ALIGNMENT OF WARM JUPITER ROTATION VECTORS Hot Jupiter planets are so close to their host stars that we expect tidal forces from the star to have put them into a rotation state that is synchronous (the rotation period equals the orbital period) and aligned (the rotation axis is parallel to the orbital axis). While the rotation and alignment of hot Jupiters has yet to be unambiguously empirically constrained <cit.>, our expectation for the longer orbital period warm Jupiters should be that they have a range of rotation rates and obliquities. We can estimate the timescales for synchronization and alignment <cit.> using the following expression from <cit.>, re-written using Kepler's 3rd Law to to show the dependence on orbital period (P) instead of semi-major axis: τ_tidal ∼ (G/16π^4) Q ω_p (M_p/R_p^3) P^4 τ_tidal (Gyr) ∼ 0.067 (Q/10^5) (ω_p M_p R_J^3/ω_J M_J R_p^3) (P/10 days)^4 where M_p, R_p, and ω_p are the mass, radius, and rotation rate of the planet, the subscript J references Jupiter's values for these quantities, and Q is the planet's tidal dissipation factor, which simplifies the complexities of tidal interactions into a single, uncertain parameter. 
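To make the scaling concrete, the following minimal Python sketch evaluates Equation (2) for Jupiter-like fiducial values; it is an illustrative example, not the exact script used to produce Figure <ref>.

def tidal_timescale_gyr(P_days, Q=1e5, omega_ratio=1.0, mass_ratio=1.0, radius_ratio=1.0):
    """Tidal synchronization/alignment timescale in Gyr, following Eq. (2).

    omega_ratio, mass_ratio, and radius_ratio are the planet's initial rotation
    rate, mass, and radius in units of Jupiter's values."""
    return 0.067 * (Q / 1e5) * omega_ratio * mass_ratio / radius_ratio**3 * (P_days / 10.0)**4

print(tidal_timescale_gyr(10))   # ~0.07 Gyr: a 10-day warm Jupiter is likely synchronized
print(tidal_timescale_gyr(30))   # ~5.4 Gyr: beyond ~30 days the timescale exceeds typical system ages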
In Figure <ref> we plot estimates for tidal synchronization and alignment timescales, compared to host star ages and over a range of orbital periods, for known exoplanets[From the NASA Exoplanet <cit.> , accessed November 29, 2022] with measured masses and radii (and R_p>0.4 R_J, to isolate gas giants). In all cases we assume an initial rotation rate and tidal Q appropriate for Jupiter <cit.>. Age estimates of the host stars can be highly uncertain and in some cases are missing; for the stars without a listed age, we use the mean value for this population (4.14 Gyr). Based on these estimates, planets with orbital periods longer than ∼30 days have tidal timescales longer than the system ages and so should not have had time for the planets' rotation vectors to synchronize and align. We highlight planets near the tidal boundary that are particularly bright, with host star K magnitudes less than or equal to WASP-18 <cit.>. These are promising prospects for measuring secondary eclipses of planets past the tidal boundary with JWST, as we will further elaborate on in Section <ref>. § THREE-DIMENSIONAL MODELS OF A HYPOTHETICAL WARM JUPITER Having the rotation vector of a planet a priori unknown (or un-assumed) expands the possible circulation states for its atmosphere. Previous models of warm Jupiters have explored the role of rotation rate <cit.>, obliquity <cit.>, and the interaction between these two parameters and additionally any orbital eccentricity <cit.>. In order to explore a wide parameter space, <cit.> used a simplified “shallow water" model with Newtonian cooling to simulate the atmospheric dynamics for gaseous planets with obliquities from 0-180, eccentricities of 0, 0.3, and 0.5, and three radiative timescales, which defined different forcing regimes (similar to hot Jupiters, warm Jupiters, and cold Jupiters). The authors identified distinct dynamical regimes, determined by obliquity and radiative timescale, while relatively insensitive to eccentricity. For planets with radiative timescales less than the rotation period (such as synchronously rotating hot or warm Jupiters), the atmosphere is dominated by a day-night contrast and a shifted hot spot. For planets where the radiative timescale is less than the orbital period but longer than the rotation period (such as quickly rotating warm Jupiters), the diurnal mean forcing is dominant and the equator is hotter than the poles for obliquities less than ∼18, while higher obliquities result in seasonal cycles, with the hottest region of the atmosphere moving back and forth between the poles. These results were then used in <cit.> to calculate simulated phase curves from the models; the authors found that the planet's eccentricity strongly influences the phase at which peak flux may be measured, as does the viewing orientation for planets with significant obliquity <cit.>. They also created an analytic theory to describe the phase curve behavior. Here we focus on just the secondary eclipses of warm Jupiters, which are less observationally expensive than full-orbit phase curves (especially for orbital periods ≥10 days) and can still constrain the atmospheric state and circulation pattern of these planets. For this study we construct a hypothetical warm Jupiter with the mass and radius of Jupiter, an orbital period of 10 days, and a Solar twin as its host. With these properties, the planet would only have a tidal timescale comparable to its system age in a fairly young system (≲1 Gyr). 
Planets on slightly longer orbital periods have much longer tidal timescales and so are less likely to be synchronized and aligned in more mature systems. The parameters for our hypothetical warm Jupiter still place it within the range of known systems and we choose these somewhat optimistically in order to search for observable features: by being closer to the star, the planet will be overall hotter, brighter, and have larger temperature differences. We also choose a circular orbit for our warm Jupiter, as our goal is to investigate the observable implications of the rotation rate and obliquity. We note that a warm Jupiter eccentricity could be constrained from high quality radial velocity data, providing a possible empirical constraint unavailable to the rotation vector. §.§ Numerical Set-Up To create a set of simulated atmospheric states for this hypothetical warm Jupiter, we use the three-dimensional atmospheric circulation model RM-GCM <cit.>, which solves a standard, simplified form of the full set of fluid equations, the “primitive equations of meteorology". Here we use the version that is coupled with a two-stream double-gray radiative transfer routine, updated as described in <cit.> to use the numerical scheme from <cit.>. Table <ref> lists the planetary parameters for our hypothetical warm Jupiter. The irradiation level assumes a Sun-like host and corresponds to an equilibrium temperature of 880 K. We use the same set of obliquities sampled in <cit.>, which only presented models for a warm Jupiter rotating as quickly as Jupiter (24.184 ω_orb). Here we consider a range of possible rotation rates, using a synchronous value and Jupiter's rotation rate as reasonable bookends on possible values. Based on some preliminary zero-obliquity models at more finely sampled rotation rates, we decided to run the full suite using the values shown in Table <ref> to capture the full range of zonal jet patterns (equatorial vs. high-latitude), photospheric temperature patterns (day-night vs. azimuthally symmetric), and equator-to-pole temperature gradients. Since the radiative transfer routine of RM-GCM was updated since it was used in <cit.>, we have recomputed the most quickly rotating models for this work and find no significant differences in the atmospheric structures. lcc Warm Jupiter Model Parameters Parameter Value Units Planetary radius, R_p 6.986×10^7 m Surface gravity, g 26 m s^-2 Orbital period, P_orb 10 days Orbital revolution rate, ω_orb 7.2722×10^-6 s^-1 Semi-major axis, a 0.090867 AU Eccentricity 0 – Stellar flux at substellar point 1.36×10^5 W m^-2 Albedo 0 – Equilibrium temperature 880 K Internal heat flux 5.7 W m^-2 Optical abs. coefficient, κ_opt 2.6×10^-3 cm^2 g^-1 Infrared abs. coefficient, κ_IR 5.2×10^-2 cm^2 g^-1 Rotation rate 1/[1, 2, 4, 12, 24.184] ω_Jup Obliquity [0, 10, 30, 60, 90] Orbital inclination 88.5 Impact parameter 0.511 – The planet's radius and gravity values are equal to those of Jupiter. The rotation rates are given relative to that of Jupiter, ω_Jup=1.7587 × 10^-4 s^-1, with ω_Jup/24.184=ω_orb being the synchronously rotating model. The stellar parameters are set equal to those of the Sun. For the most quickly rotating models, we determined in <cit.> that it was appropriate to use a diurnal averaging for the stellar irradiation pattern (i.e., instead of tracking the illuminated hemisphere, the incident flux at the top of the atmosphere varies only with latitude, averaged over one planetary rotation). 
The physical basis for this is roughly that the timescale over which the gas heats and cools is longer than the rotation period <cit.>. Here we again use this diurnal averaging for the Jupiter-rotation models, but for all of the other models we explicitly track the movement of the stellar irradiation pattern throughout the simulation. We used a timestep between radiative transfer calculations (for all models) such that the intervening movement of the substellar point was always less than a resolution element. The horizontal resolution for all models is T42 (a truncation of spectral modes beyond 42), corresponding to ∼2.8 at the equator. As in <cit.>, this was deemed to sufficiently resolve dynamical scales for the most quickly rotating model (where the scales are the smallest), using an estimate of the Rossby deformation radius to be ∼8. Our modeled atmosphere extends from 100 bar to 1 mbar, with 30 vertical levels evenly spaced in log pressure. We use hyperdissipation to remove noise at the smallest scales of the simulation, acting as an eighth-order operator on the wind and temperature fields, with a timescale (for all models) of about 720 seconds. All simulations used a timestep of ∼100 seconds, with radiative fluxes updated every 5 timesteps. We started each simulation at rest (zero winds), with a vertical temperature-pressure profile (uniform around the globe) equal to the analytic global average profile for the chosen flux conditions and our double-gray absorption coefficients <cit.>; these coefficients match those used in <cit.>, chosen to roughly reproduce a profile for a hypothetical planet with Jupiter properties, but 0.1 AU from its host star <cit.>. We ran each simulation for 750 orbits. This length was chosen by extending the zero-obliquity, Jupiter-rotation-rate model out to 3000 orbits and calculating when in the run the emitted fluxes from across the planet were within 1% of the value at 3000 orbits, indicating that the observable atmosphere had settled into a steady state. §.§ Model Results Our suite of models, sampling the full combination of five rotation rates (from synchronous to as fast as Jupiter) and five obliquities (from 0-90) exhibits significant diversity in atmospheric circulation patterns and temperature structures, influencing how these physical properties would show up in observations. Figure <ref> provides a representative set of the types of circulation patterns seen in our suite of models, including the presence or lack of seasonal variation for models with or without significant obliquity, respectively. The full set of animations showing maps of the bolometric thermal emission from each model is available on Zenodo at <cit.>. We find a difference in dynamical regimes between the planet models that rotate more quickly (ω_Jup, ω_Jup/2, and ω_Jup/4) and more slowly (ω_Jup/12 and synchronous ≈ω_Jup/24), in agreement with previous studies of atmospheric regime transitions for warm Jupiters <cit.>. The slowly rotating models have larger day–night temperature differences while the quickly rotating models have temperature structures that are more homogenized in longitude but with steeper latitudinal gradients. This can be understood from a combination of two factors: 1) the movement of the irradiation pattern in comparison to the radiative timescale of the atmosphere, and 2) whether the Coriolis force is dominant in balancing global pressure gradients or not. 
If we characterize the overall atmospheric response by a single radiative timescale, then our suite of models ranges from planets rotating slowly enough that the radiatively driven thermal structure can track the spatial irradiation pattern (i.e., a day-night temperature difference) to planets rotating quickly enough that the gas cannot heat and cool quickly enough to keep up with the changing irradiation pattern, but instead it is the rotationally averaged (i.e., diurnal) irradiation pattern that shapes the radiatively driven thermal structure (for zero obliquity this results in a hotter equator and cooler poles). The dynamical force balances across this regime are explained clearly and extensively in <cit.>; in short, in the more quickly rotating models the Coriolis force is stronger and is dominant in balancing the large-scale pressure gradients, while in the slowly rotating models advection is the balancing term in the momentum equation. The characteristic horizontal thermal gradients in an atmosphere are maintained by pressure gradients and <cit.> showed that for warm Jupiters the difference between these regimes results in the latitudinal temperature gradients in the quickly rotating models being several times larger than the slowly rotating models, especially away from the equator where the Coriolis force weakens. For non-zero obliquities the atmospheric structures become complex as the irradiation pattern moves in latitude, but the more quickly rotating models will still have temperature structures with stronger latitudinal gradients and minimized longitudinal differences than the slowly rotating models. The left column of Figure <ref> presents results with behavior typical of our quickly rotating models; shown are models with a rotation rate two times slower than Jupiter. For the models with rotation rates 1, 2, and 4 times slower than Jupiter, we find that for obliquities ≤30 the flux pattern is characterized by a brighter equator and cooler poles while for obliquities ≥60 there is a strong summer-winter hemisphere difference, with some lag time as the atmosphere responds to the changing latitudes of instellation. Uniquely, for the model rotating 4 times more slowly than Jupiter and with an obliquity of 90, globally the equator remains dimmer than the poles throughout the planet's entire orbit. As we use slower rotation rates for the planet, a bright region to the east of the substellar point becomes more prominent, but the latitudinal flux gradient still dominates the planet's overall global pattern. The right column of Figure <ref> shows behavior typical of our slowly rotating models, with flux patterns significantly different from the quickly rotating cases. The models shown here are in synchronous rotation states, but the models rotating roughly twice as quickly (12 times slower than Jupiter) show similar large-scale flux gradients. For the low obliquity cases (ψ≤ 30) the dominant flux pattern is a dayside brighter than the nightside, but with a shift of the brightest region to be eastward of the substellar point. In the case of zero obliquity this pattern remains static, albeit with some slight perturbations from small scale atmospheric waves,[In the most slowly rotating models, especially those with low obliquity, we see atmospheric waves influencing the flux patterns on the planet, but waves are also visually apparent in all but the most quickly rotating models. 
In the low obliquity models there are waves that appear to be quasi-stationary relative the substellar point, while in all models there are also waves that can be seen to travel around the globe.] while for the non-synchronous slowly rotating model (ω_Jup/12) this pattern moves with the substellar point, meaning that it moves in longitude along with the dayside hemisphere. For the slowly rotating models with a low but non-zero obliquity (ψ=10 or 30) this hemispheric, eastward-shifted pattern oscillates slightly up and down in latitude, following the seasonal cycle. The slowly rotating, high obliquity models show complex flux patterns, as demonstrated by the bottom right panel of Figure <ref>. Due to the poles receiving more flux than the equator over an orbital cycle, they are in general hotter and brighter than the equator. However, as the planet moves toward equinox and the equator receives more direct starlight, the dim ring that had extended around the equator now moves with the changing location of the terminator (the ring is 90 away from the substellar point). Then at equinox this pattern is broken and the day-night flux difference becomes a significant component of the global flux pattern, before lessening in significance as the substellar point continues to move to higher latitudes. A more extensive dynamical analysis of these circulation states is beyond the scope of this paper, but we point the reader to the thorough shallow-water analysis of warm Jupiters with various rotation rates, obliquities, and eccentricities in <cit.> and the aforementioned work of <cit.>. We find general agreement between our results and the models in <cit.>, although our range of rotation rates covers intermediate behavior between their regimes where day-night versus diurnal mean forcing is dominant. We also agree that obliquities less than 18 produce largely time-invariant atmospheric structure (in the diurnal mean forcing regime); however, our 30 obliquity model shows some seasonal variation, but without the hemispheric swings in temperature seen by <cit.>. We now move on to use our simulated set of possible warm Jupiter atmospheric states to quantitatively analyze how secondary eclipse measurements can be affected by the diverse possibilities, and remind the reader that a complementary analysis of thermal phase curves was presented in <cit.>. § SIMULATED SECONDARY ECLIPSES From our set of 3D atmospheric models we simulate secondary eclipse light curves using the software package <cit.>, which takes as input the two-dimensional planet flux maps and calculates simulated light curves from the defined system geometry, including the appropriate tilts and rotation of each model. We use the system parameters in Table <ref>, choosing to assume an orbital inclination of 88.5, which results in an impact parameter of ∼0.5. For this, or any other transiting planet with a non-zero obliquity, we may happen to observe secondary eclipse at any point during its seasonal cycle, and we will always observe the planet at that point every eclipse. For example, the secondary eclipse of the planet may always occur near equinox (in which case the rotation vector of the planet must be tilted near-parallel to our line of sight) or always near a solstice (in which case the rotation vector will be tilted toward or away from us). 
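For reference, a small sketch of the assumed viewing geometry (our own illustration, using the semi-major axis from Table <ref> and a host of exactly one solar radius) recovers the quoted impact parameter:

import math

AU_IN_SOLAR_RADII = 215.032      # 1 au expressed in solar radii
a_au = 0.090867                  # semi-major axis from Table 1
inclination_deg = 88.5           # assumed orbital inclination

b = (a_au * AU_IN_SOLAR_RADII) * math.cos(math.radians(inclination_deg))
print(round(b, 3))               # ~0.511, matching the value listed in Table 1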
Using 90 snapshots of the flux from the planet, evenly spaced throughout its last simulated orbit and so evenly sampling its seasonal cycle, we orient each map to the correct geometry for secondary eclipse <cit.> and calculate a simulated light curve. We include an equal time before and after eclipse as during it, to mimic a typical eclipse observation, resulting in a simulated observation lasting 8.64 hours. The first-order quantity measured at secondary eclipse is the integrated flux from the dayside of the planet, which sets the depth of the eclipse. Figure <ref> shows these eclipse depths, both the median values and full ranges for each model of the planet, as a function of the assumed rotation rate and obliquity. Immediately apparent is that a wide range of eclipse depths are possible and planets with larger obliquities and faster rotation rates exhibit a greater variation in possibilities, depending on when secondary eclipse is observed relative to the seasonal cycle. This can be understood in terms of the dynamical states found above: only the quickly rotating models are able to maintain strong temperature gradients in latitude. For low obliquity cases, this results in the equator being warmer than the poles, with little variation throughout the planet's annual cycle; however, at high obliquity strong seasonal changes result in different flux patterns being observed for eclipses at different times throughout the orbit. If the secondary eclipse happens around the time of a solstice, we will see a very hot, bright dayside, whereas if secondary eclipse occurs around an equinox, we will instead see some of each hemisphere, both of which will be at only moderate temperatures. Meanwhile, the slowly rotating models have large day-night temperature differences and little-to-no seasonal variation (at low obliquity) or more globally homogenized temperatures (at high obliquity), which effectively also diminishes any seasonal variation. Since we always observe the dayside hemisphere in eclipse, this results in larger eclipse depths at low obliquity and somewhat smaller eclipse depths at high obliquity, but a small range of possible values for each state. At low obliquities (ψ≤ 30) and/or slow rotation (synchronous or ω_J/12) the eclipse depth variation is smaller (≤ 5%), while higher obliquity, quickly rotating models have a much larger range of values, up to ∼35%. Note that in almost all cases the eclipse depths are greater than the value for a planet with uniform temperature (equal to the zero-albedo equilibrium temperature), which can generally be interpreted as some inefficiency in moving heat from the day to night side; however, there are some instances (at high obliquity and fast rotation) that the eclipse depth would be below the equilibrium temperature. One example of why this happens can be understood for the slightly odd case of our model with a rotation rate of ω_Jup/4 and an obliquity of 90. Throughout this planet's orbit the equator is always colder than the poles. This means that for viewing orientations where the eclipse happens near equinox, we are preferentially seeing the cooler and dimmer regions of the planet, bringing the hemispheric flux below that of an equivalent uniform temperature planet. Traditionally this would be interpreted as a non-zero albedo on the planet, resulting in an overall cooler atmospheric state, but here that would be incorrect. 
We note this as a point of caution for the interpretation of warm Jupiter eclipse depths: seasonal effects can complicate the relation between albedo, day-night heat redistribution, and dayside flux <cit.>. We estimate the overall eclipse depth signal for our hypothetical planet by calculating the blackbody planet-to-star flux ratio: F_p/F_s=(R_p/R_s)^2(B_λ(T_p)/B_λ(T_s)), where B_λ(T) is the Planck function. Using the parameters for our system and assuming the planet's temperature equals its equilibrium temperature, we find a secondary eclipse depth of 690 ppm (parts-per-million) at 10 micron.[Commensurate with the other simplifications in this analysis, we choose to estimate flux ratios (here and later) at a single wavelength instead of integrated across the MIRI wavelength range. The planet-to-star flux ratio for this set up is a factor of ∼2 smaller at 5 micron than at 10 micron, so our single-wavelength values may be slight overestimates of what a more detailed calculation would find.] This is easily within the capabilities of the MIRI instrument on JWST. As an early example, <cit.> demonstrated that MIRI is able to measure the transit of a super-Earth to a precision of 13-15 ppm in the band averaged transit depth. Thus JWST would have no difficulty in measuring the secondary eclipse of our warm Jupiter, but also would be able to differentiate between the range of predictions offered by our models, as 14 ppm translates into about a 700 W m^-2 precision on the eclipse depth for this system. In other words, as JWST begins to observe secondary eclipses of warm Jupiters, beyond the regime where their rotation vectors are synchronized and aligned with their orbit, we should expect to see an increase in the dispersion of measured eclipse depths—if nature produces a range of rotation rates and obliquities among this population. § POTENTIALLY CONSTRAINING ROTATION VECTORS WITH ECLIPSE MAPPING If measured at sufficiently high precision, the secondary eclipse observation of an exoplanet can also constrain the latitude-longitude brightness map on the planet's dayside through the eclipse mapping technique <cit.>. Since the dayside flux pattern on our hypothetical warm Jupiter is strongly shaped by its assumed rotation rate and obliquity, here we assess whether there are spatial patterns detectable in eclipse mapping that could be used to uniquely constrain the planet's rotation vector. There are two pieces to this question: 1) what spatial information is intrinsically encoded in secondary eclipse observations, and 2) whether those signals are large enough to measure, compared to instrumental precision. We address each of these in turn. In order to identify what spatial structures in our planet models may be accessible to eclipse mapping, we fit each of the simulated secondary eclipse curves, described above, with the lightcurve for an uniform planet and a set of “eigencurves". These orthogonal curves are mathematically optimized and sorted such that the maximum spatial information can be extracted from the light curves <cit.>. We use the routine and do not include any noise, as the goal of this fit is to identify contributions from possible spatial inhomogeneity, not to assess whether it is measurable (at this stage). There are additional complications in eclipse mapping when the rotation vector of the planet is unknown, as this complicates the translation from observed flux at some time to which regions on the planet are visible. 
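As a concrete check on the detectability numbers quoted above, the short script below evaluates the blackbody flux-ratio expression at 10 micron. The stellar and planetary values are illustrative placeholders standing in for Table <ref> (a Jupiter-size planet on a roughly 10-day orbit around a Sun-like star), so the result lands near, but not exactly at, the ∼690 ppm quoted in the text; the Planck helper and parameter names are ours, not part of the original analysis.

import numpy as np

# Physical constants (SI)
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    # Planck spectral radiance B_lambda(T) at wavelength lam [m] and temperature T [K]
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

# Placeholder system parameters (assumed, not the paper's exact Table values)
R_sun, R_jup, au = 6.957e8, 7.149e7, 1.496e11
T_star, R_star, R_planet, a = 5772.0, 1.0 * R_sun, 1.0 * R_jup, 0.09 * au

# Zero-albedo, full-redistribution equilibrium temperature
T_eq = T_star * np.sqrt(R_star / (2.0 * a))

# Planet-to-star flux ratio at 10 micron: (R_p/R_s)^2 * B_lambda(T_eq)/B_lambda(T_star)
lam = 10e-6
depth_ppm = 1e6 * (R_planet / R_star)**2 * planck(lam, T_eq) / planck(lam, T_star)
print(f"T_eq ~ {T_eq:.0f} K -> eclipse depth ~ {depth_ppm:.0f} ppm at 10 micron")

With these placeholder numbers the depth comes out at roughly 800 ppm, of the same order as the estimate above; the exact value depends on the adopted stellar and planetary parameters.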
<cit.> explored this complexity and determined that the mapping of the planet should be able to correctly infer large-scale features—regardless of intrinsic obliquity and viewing angle—as long as the planet's rotation period remains longer than the timescale of ingress and egress, which is indeed the regime we consider. We use ten eigencurves to fit to the simulated eclipse curves, allowing a large number of components in order to capture small-scale signals in the light curves (associated with small features in the maps) but limited to ten because higher order terms have eigenvalues more than two orders of magnitude smaller, implying much less potential contribution to the signal. This resulted in a set of ten fit eigencurve coefficients for each rotation rate/obliquity model, for each of the 90 possible secondary eclipse curves simulated throughout its yearly cycle. We examined the numerical distributions of each of these ten coefficients, for combinations of coefficient values that could point toward specific underlying rotation rate/obliquity states. In Figure <ref> we show an example of how eclipse mapping could be used to try to constrain the warm Jupiter's rotation vector. Of the ten eigencurves included in the fit, the second eigencurve is generally the largest signal in the simulated eclipse curves, after the uniform component. This is because this eigencurve is associated with an eigenmap that captures any large-scale east-west flux gradient and so also induces a large “phase curve" signal outside of eclipse. Conveniently, as shown in Figure <ref>, the strength of any east-west flux gradient on the planet shows strong dependence on the planet's rotation rate. Although not fully unique, this signal in a planet's secondary eclipse curve could help to differentiate between possible rotation rates on the planet, due to the changes in circulation state discussed above. Used in combination with the eclipse depth, it could perhaps also be possible to isolate possible obliquity states as well, as the models with different rotation rates and obliquities sometimes occupy non-overlapping regions in the parameter space of these two observed properties. In addition, we calculated the signal that each of these eigencurves would contribute to the overall secondary eclipse measurement, as parts-per-million in the planet-to-star flux ratio. Most of the eigencurves are only significantly non-zero during the times of ingress and egress, and since they relate to spatial patterns that increase or decrease regions of the planet's brightness map away from uniform, they can have positive or negative values. Thus, in order to quantify how much each component contributed to the overall secondary eclipse measurement, we calculated a normalized root-mean-squared (RMS) value: signal=√(∫ F_e^2 dt/∫ F_0^2 dt), where the eigencurve (F_e) and uniform disk eclipse curve (F_0) are integrated over the entire time of the simulated observation. We then use Equation <ref> again to estimate the expected signal at 10 micron. In Figure <ref> we can see that, while the overall eclipse depth should be easily measurable by JWST, the signal from any east-west brightness structure is comparable to the precision of JWST/MIRI <cit.>. This second eigencurve had the largest signal of any of the eigencurves due to its partial phase curve component; others ranged in contribution from ∼1-15 ppm. These signals are tantalizingly at the edge of JWST capabilities. § DISCUSSION There are a few caveats to consider in this work. 
For one, we have chosen a 10-day orbital period for our hypothetical warm Jupiter, which for most systems would place it within the regime where tidal synchronization and alignment should be expected, unless it was a younger-than-average system.[It is also interesting to note that for a warm Jupiter on a ten day orbit but with a substantial obliquity, we might expect active tidal dissipation to produce a significant internal heat flux in the planet, potentially inflating the planet's radius <cit.> and influencing the atmospheric dynamics <cit.>. We neglect these considerations here.] This choice also results in stronger stellar forcing and likely larger spatial differences than warm Jupiters on longer orbital periods, representing an observationally optimistic set-up. We also considered a solar twin for the host star, but several important factors depend on the stellar host, all else being equal: the efficiency of tidal synchronization and alignment, the planet-to-star flux ratio, and the expected strength of atmospheric seasonal responses. <cit.> showed that when a seasonal cycle induces changes in stellar forcing, temporal variations in the planet's atmospheric response depend to first order on the ratio between its orbital period and radiative timescale, which for Jupiter-like atmosphere conditions is: P_orb/τ_rad≈ 5.94 ( T_s/T_⊙)^3 ( R_s/R_⊙)^3/2( M_s/M_⊙)^-1/2. For stellar masses ≲ 0.6 M_⊙ the radiative timescale exceeds the orbital timescale and seasonal effects become muted. We compare the influence of these various effects for Jupiter-like planets orbiting stars of different masses for a range of semi-major axes in Figure <ref>. To calculate the planet-to-star flux ratio we use Equation <ref>, assuming Jupiter's radius for all planets and adjusting the planet temperature to be the zero-albedo equilibrium temperature at each semi-major axis. For the stellar radii and temperatures as a function of stellar mass, we use empirical averages for mass bins across the main sequence <cit.>. These stellar parameters are also used in Equation <ref>, obtaining agreement on the stellar mass limit from <cit.>. We use the original form of Equation <ref> from <cit.>, written as a function of semi-major axis instead of orbital period, and assume the same planet parameters as in Section <ref>. Figure <ref> paints a complex picture; for a given semi-major axis, planets around lower mass stars are less likely to be tidally aligned and synchronized, but their atmospheres should have weaker seasonal responses. In terms of observability, the planet-to-star flux ratio at 10 micron peaks slightly below one Solar mass. At this wavelength we are not always within the Rayleigh-Jeans tail of the Planck function for these objects and so the trend with stellar mass is complex. This demonstrates that it is not trivial to identify the best targets for observing atmospheric seasonal responses, but in general we may prefer near-Solar stellar hosts. We used double-gray radiative transfer within our GCM; including more realistic radiative transfer would alter the temperature-pressure profiles and radiative forcing of the dynamics, but a study in the hot Jupiter context has shown that double-gray radiative transfer captures the same large-scale qualitative features as more complex models <cit.>, giving some confidence that our warm Jupiter results would similarly not be too strongly changed. We have also neglected other complicating atmospheric physics, such as disequilibrium chemistry and clouds. 
Both effects have the potential to alter the radiative forcing of the atmosphere and the resulting dynamics, but would have to be unexpectedly strong in order to significantly change the circulation patterns <cit.>. Both clouds and disequilibrium chemistry, along with considering spectral emission from the planet instead of bolometric flux, could either diminish or enhance the flux differences we predict across the planet, depending (respectively) on whether there is global cloud coverage or only cloud formation in the coolest regions <cit.>, and on how disequilibrium chemistry changes the distribution of species' abundances across the globe. Spectral variation in the emission from regions of the planet with different temperatures, chemical abundances, and/or clouds could potentially provide more information about the planet's circulation state, through spectral eclipse mapping <cit.>, but that would require even more precise measurements in order to constrain wavelength-dependent spatial patterns. Finally, while we have focused on the spatial information available from eclipse mapping, one of our main findings is that the largest non-uniform spatial signal comes from the eigencurve that includes partial phase curve information. Since we also found that the corresponding spatial pattern, an east-west asymmetry in the brightness distribution, is related to the planet's rotation rate, it may be that the partial phase curve information available in JWST secondary eclipse observations of warm Jupiters will be enough to begin to differentiate their rotation states. However, this does rely on being able to accurately correct out any instrumental systematics that could be confused with the phase curve signal. Work by <cit.> finds that the east-west structure can be particularly degenerate with systematic signals, but that statistical evidence such as the Bayesian Information Criterion may be able to differentiate eigencurve fits that accurately capture the systematics and planet signal from those that do not. There may similarly be clues that the systematics are being incorrectly fit if the planet map contains nonphysical structures.

§ CONCLUSIONS

As we make atmospheric characterization measurements of planets on longer orbital periods, we will move into a regime where we can no longer assume that a planet's rotation period and axis are equal to and aligned with its orbital period and axis, as we do for hot Jupiters. From simple timescale estimates, we may expect that warm Jupiters with orbital periods ≳20 days have a priori unknown rotation vectors. We ran three-dimensional atmospheric circulation models of a hypothetical warm Jupiter, over a range of rotation rates and axial obliquities. We found moderate and strong seasonal variations in the atmospheres of planets with obliquities of 30° and >60°, respectively. At low obliquities (≤30°) the dominant atmospheric structures depend on the rotation rate, with more quickly rotating models (from 1-4 times slower than Jupiter) showing brighter equators than poles, while the more slowly rotating models (at or about twice the synchronous rotation rate) have an eastward-shifted hot-spot in addition to the latitudinal gradient. To examine the observable implications for atmospheric characterization of non-tidally synchronized and aligned planets, we calculated simulated secondary eclipse light curves from our models, for 90 points throughout each orbit to cover the full seasonal cycle.
We showed that the secondary eclipse depth is a strong function of rotation rate, obliquity, and viewing orientation. For the higher obliquity and more quickly rotating models, where in the seasonal cycle we happen to catch the planet in secondary eclipse strongly influences how bright the dayside will be, with a spread in possible values as large as 35%. This can complicate the interpretation of secondary eclipse depths, especially for the outlier cases when the eclipse depth is lower than that expected for a uniform planet at the equilibrium temperature. The dispersion in possible secondary eclipse depths is large enough for JWST to measure, resulting in our main prediction: if nature produces warm Jupiters with a range of rotation rates and obliquities, we should expect to measure an increase in the dispersion of secondary eclipse depths for the population of gaseous planets beyond the tidal boundary. We evaluate whether eclipse mapping could be used to measure the brightness distribution across the planet and thereby constrain its rotation vector, by calculating “eigencurves” and fitting them to the simulated eclipse curves. We find that there are unique spatial components that, in combination, could be used to constrain the rotation rate and obliquity, but that the expected signals are comparable to or below current estimates of JWST precision. Further work may need to be done, both in modeling and JWST data analysis, to determine whether warm Jupiters beyond the tidal boundary can be eclipse mapped by JWST. As JWST characterization efforts push out to measure secondary eclipses of longer orbital period planets, we will learn about atmospheric physics in new and interesting regimes. Here we have shown that once we move beyond the tidal synchronization and alignment boundary, any diversity in the rotation vectors of warm Jupiters should be observable as a population-level dispersion in their secondary eclipse depths. While eclipse mapping could potentially constrain a planet's rotation rate and obliquity, we found that the eclipse mapping signals are near or below the initial precision JWST/MIRI seems able to achieve, for our hypothetical warm Jupiter on an optimistically bright 10-day orbit. This implies that most warm Jupiters beyond the tidal boundary may produce smaller signals, but perhaps as analysis of JWST data continues to mature we will discover that these minute signatures are detectable. In that happy circumstance, eclipse mapping of warm Jupiters could be used to constrain the rotation vectors of these planets, which in turn would help to inform theories for their formation and evolution. We thank the anonymous referee, whose comments helped to improve and clarify this manuscript. This research was supported by NASA Astrophysics Theory Program grant NNX17AG25G and made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Software: Jupyter notebook <cit.>, NumPy <cit.>, SciPy <cit.>, starry <cit.>.
http://arxiv.org/abs/2306.04350v1
20230607113409
A Hierarchical OPF Algorithm with Improved Gradient Evaluation in Three-Phase Networks
[ "Heng Liang", "Xinyang Zhou", "Changhong Zhao" ]
math.OC
[ "math.OC" ]
A Hierarchical OPF Algorithm with Improved Gradient Evaluation in Three-Phase Networks

Heng Liang, Xinyang Zhou, Member, IEEE, and Changhong Zhao, Senior Member, IEEE

This work was supported by the Hong Kong Research Grants Council through ECS Award No. 24210220. (Corresponding author: Changhong Zhao.) H. Liang and C. Zhao are with the Department of Information Engineering, the Chinese University of Hong Kong, New Territories, Hong Kong SAR, China (e-mail: [email protected]; [email protected]). X. Zhou is with the National Renewable Energy Laboratory, Golden, CO, USA (e-mail: [email protected]). This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

July 31, 2023

Linear approximation commonly used in solving alternating-current optimal power flow (AC-OPF) simplifies the system models but incurs accumulated voltage errors in large power networks. Such errors will make primal-dual type gradient algorithms converge to solutions at which the power networks may be exposed to the risk of voltage violation. In this paper, we improve a recent hierarchical OPF algorithm that rested on primal-dual gradients evaluated with a linearized distribution power flow model. Specifically, we propose a more accurate gradient evaluation method based on a three-phase unbalanced nonlinear distribution power flow model to mitigate the errors arising from model linearization. The resultant gradients feature a blocked structure that enables us to further develop an improved hierarchical primal-dual algorithm to solve the OPF problem. Numerical results on the IEEE 123-bus test feeder and a 4,518-node test feeder show that the proposed method can enhance the overall voltage safety while achieving computational efficiency comparable to the linearized algorithm.

Index Terms: Three-phase unbalanced networks, optimal power flow, distributed algorithm, voltage control.

Nomenclature:
[01] 𝒩: Set of buses.
[02] 𝒩^+: 𝒩^+:=𝒩∪{0}, set of buses including the slack bus.
[03] ℰ: Set of lines.
[04] Φ_j: Phase set of bus j.
[05] Φ_ij: Phase set of line (i,j).
[06] V_j^ϕ: Complex phase voltage of bus j at phase ϕ.
[07] V_j: V_j:=(V_j^ϕ,ϕ∈Φ_j), column vector of complex voltage of bus j at all phases.
[08] v_j: v_j:=V_jV_j^H, matrix of complex squared voltage.
[09] I_ij^ϕ: Complex current of line (i,j) at phase ϕ∈Φ_ij.
[10] I_ij: I_ij:=(I_ij^ϕ,ϕ∈Φ_ij), column vector of complex current of line (i,j) at all phases.
[11] ℓ_ij: ℓ_ij:=I_ijI_ij^H, matrix of complex squared current.
[12] S_ij: S_ij:=V_i^Φ_ijI_ij^H, matrix of power flow at the sending end of line (i,j).
[13] Λ_ij: Column vector of sending-end three-phase power flow.
[16] z_ij: Matrix of series impedance of line (i,j).
[17] p_j^ϕ/q_j^ϕ: Active/Reactive power injection of bus j at phase ϕ.
[18] s_j^ϕ: Complex power injection of bus j at phase ϕ.
[19] s_j: s_j:=(s_j^ϕ,ϕ∈Φ_j), column vector of complex power injection of bus j.
[20] u_j^ϕ: u_j^ϕ:=[p_j^ϕ,q_j^ϕ]^⊤, power injection of bus j at phase ϕ.
[21] p: p:=[[p_1^ϕ]^⊤_ϕ∈Φ_1,…,[p_N^ϕ]^⊤_ϕ∈Φ_N]^⊤, three-phase active power injection vector.
[22] q: q:=[[q_1^ϕ]^⊤_ϕ∈Φ_1,…,[q_N^ϕ]^⊤_ϕ∈Φ_N]^⊤, three-phase reactive power injection vector.
[23] v: v:=[[v_1^ϕϕ]^⊤_ϕ∈Φ_1,…,[v_N^ϕϕ]^⊤_ϕ∈Φ_N]^⊤, three-phase squared voltage magnitude vector.
[24] v/v: Lower/Upper bounds of three-phase squared voltage magnitude.
[25] μ/μ: Dual variables for the lower/upper bounds of three-phase squared voltage magnitude.
[26] γ: Constant matrix for balanced voltage approximation.
[50] x: Conjugate of complex number x.
[51] v̂: Voltages of the linear approximation of the BFM (other variables in the BFM are denoted similarly, and the same applies below).
[52] v: Voltages of the improved gradient evaluation of the BFM.
[53] v^ϕψ_i: Element of complex matrix v_i at row ϕ, column ψ.

§ INTRODUCTION

Optimal power flow (OPF) is a fundamental optimization problem that aims to find a cost-minimizing operating point subject to the constraints of physical laws and safety limits. OPF underlies many important power system applications such as demand response, state estimation, unit commitment, and voltage regulation. With the increasing penetration of controllable units (smart appliances, electric vehicles, energy storage devices, etc.) and thus the growing size of OPF problems, solving OPF is becoming more challenging with heavy computations and intense communications. In particular, the growth of wind and solar generation introduces increased variations to power systems and calls for timely response and real-time optimization. This trend poses a rising need for fast and scalable OPF solvers, which is especially urgent in power distribution networks where massive renewable energy resources and controllable units are being deployed. However, distribution networks are also where the algorithm speed and scalability requirements are most difficult to meet, as the high resistance-to-reactance ratios of distribution lines necessitate the usage of nonlinear and nonconvex alternating-current (AC) power flow models rather than their simple direct-current approximations. Numerous efforts have been made to overcome this challenge. Many of them conduct convex relaxation, such as semidefinite program (SDP) relaxation <cit.> and second-order cone program (SOCP) relaxation <cit.>; see <cit.> for their connection and equivalence. Meanwhile, convex inner approximation and linearization of AC power flow were proposed to simplify the system models. The classical linearized distribution power flow model was derived by neglecting the nonlinear line loss terms in <cit.>, while linearization around a fixed point was shown to have a better approximation accuracy <cit.>. A comprehensive review of approximate models for AC power flow was provided in <cit.>. Based on these models, the existing methods often rely on off-the-shelf solvers to solve OPF, which often suffer from overwhelming computations and communications in large-scale networks. Distributed OPF algorithms were designed and shown to be more scalable in terms of computation and more robust to single-point failure, compared to their centralized counterparts <cit.>. The distributed algorithms are typically implemented in two ways. The first way relies on information exchange between neighboring buses <cit.>.
The second way involves a central controller (CC), with all buses sending/receiving information to/from the CC <cit.>. Due to the complex interdependence and nonconvexity of the nonlinear AC power flow, most distributed OPF algorithms utilized linear approximation to facilitate the design process. To further reduce computational efforts associated with solving AC power flow, some OPF algorithms were implemented by iteratively actuating the power system with intermediate decisions and updating the decisions based on system feedback <cit.>. From the vast literature, we bring attention to a hierarchical distributed primal-dual gradient algorithm <cit.> and its extension to three-phase unbalanced networks <cit.>. This algorithm leveraged the radial structure of distribution networks to avoid repetitive computation and communication, and thus significantly accelerated the solution process of large-scale OPF problems. However, the gradient used in this algorithm was derived from the linearized distribution power flow model <cit.>. Such linearization, as we will analyze, may cause the solver to optimistically estimate nodal voltages to be safe, while they actually already violate safety limits. To prevent such violation, we develop an improved gradient evaluation method inspired by the chain rule of derivatives. Take the nonlinear single-phase branch flow model as an example <cit.>. The variables in that model can be grouped into three sets, which are the active and reactive power injections u:=(p,q), the squared voltage magnitudes v(u) and the branch power flows and squared currents (P,Q,ℓ). Then the partial derivative of squared voltage magnitudes over power injections can be taken as: ∂v(u)/∂u=∂v(u)/∂(P,Q,ℓ)·∂(P,Q,ℓ)/∂u, where we replace the second part with the partial derivatives ∂ (P̂,Q̂,ℓ̂)/∂u calculated from the linearized model (P̂(u),Q̂(u),ℓ̂(u)). In particular, the proposed method preserves an approximate calculation associated with the quadratic terms in the nonlinear AC power flow model, instead of disregarding them. It is worth pointing out that this method is not only applicable to single-phase networks, as shown in our preliminary conference paper <cit.>, but can also be extended to three-phase unbalanced networks as introduced in this paper. Our analysis shows that with moderate extra computations, the proposed method returns more accurate gradient evaluations than <cit.>, while preserving the blocked gradient structure therein that enables us to develop a scalable and voltage-safe hierarchical OPF algorithm. To summarize, the contributions of this paper are: * We propose an improved gradient evaluation method for three-phase unbalanced nonlinear AC power flow, which extends our previous work <cit.> on a single-phase model. * Based on the improved gradient evaluation, we design a voltage-safe hierarchical OPF algorithm for three-phase unbalanced power distribution networks. * We perform a rigorous voltage error analysis between the nonlinear distribution power flow model and its linear approximation. A convergence analysis of the proposed algorithm is presented to show the smaller sub-optimality gap of the result with more accurate gradient evaluation. * Numerical experiments on large-scale three-phase unbalanced networks, including the IEEE 123-bus test feeder and a 4,518-node test feeder, are performed to demonstrate the enhanced voltage safety and computational efficiency of the proposed method. The rest of this paper is organized as follows. 
Section <ref> introduces a three-phase unbalanced distribution network model, an OPF, and a primal-dual algorithm to solve it. Section <ref> motivates and proposes the improved gradient evaluation method. Section <ref> designs an improved hierarchical OPF algorithm and analyzes its convergence. Section <ref> reports the numerical experiments. Section <ref> concludes this paper. § MODELING AND PRELIMINARY ALGORITHM §.§ Branch Flow Model and OPF Formulation We denote the set of complex numbers, the set of n-dimensional complex vectors, and the set of m× n complex matrices by ℂ, ℂ^n, ℂ^m× n, respectively. Let x denote the conjugate of a complex number x∈ℂ. Let |·| take the dimension of a vector, and (·)^H be the conjugate transpose of a matrix or vector. We model a three-phase unbalanced distribution power network as a directed tree graph 𝒯:={𝒩^+, ℰ}, where 𝒩^+={0}∪𝒩, with bus 0 indexing the slack (root) bus and 𝒩={1,...,N} containing other buses. All the lines (i,j) ∈ℰ have reference directions that point from the root towards leaves, so that each bus j∈𝒩 has a unique upstream bus i, while the root bus has no upstream bus. Let a, b, c denote the three phases. We use Φ_j to denote the phase set of bus j and Φ_i j the phase set of line (i,j), e.g., Φ_j={a,b,c} if bus j has all the three phases, and Φ_ij={a} if line (i,j) has phase “a” only. For each phase ϕ∈Φ_j at bus j ∈𝒩^+, let V_j^ϕ∈ℂ be the complex voltage, and s_j^ϕ:=p_j^ϕ+𝐢q_j^ϕ be the complex power injection (i.e., power generation minus consumption at buses j ∈𝒩 and power flow from the upper grid into the distribution network at the slack bus). Define V_j:=(V_j^ϕ,ϕ∈Φ_j) ∈ℂ^|Φ_j|, s_j:=(s_j^ϕ,ϕ∈Φ_j)∈ℂ^|Φ_j| and v_j:=V_jV_j^H∈ℂ^|Φ_j|×|Φ_j|. For each line (i,j)∈ℰ, let I_ij^ϕ∈ℂ be the complex current on phase ϕ∈Φ_i j, and z_ij∈ℂ^|Φ_ij| ×|Φ_ij| be the series impedance matrix. Define I_ij:=(I_ij^ϕ,ϕ∈Φ_ij) ∈ℂ^|Φ_ij|, S_ij:=V_i^Φ_ijI_ij^H∈ℂ^|Φ_ij|×|Φ_ij| (where V_i^Φ_ij denotes the subvector of V_i on phases Φ_ij), and ℓ_ij:=I_ijI_ij^H∈ℂ^|Φ_ij|×|Φ_ij|. Let diag(·) denote the column vector composed of diagonal elements of a matrix; in the other way, Diag(·) converts a column vector into a diagonal matrix. Let v^ϕψ_i for ϕ,ψ∈Φ_i denote the element of complex matrix v_i at row ϕ, column ψ. Phases a, b, c are represented by numbers 0, 1, 2, respectively, wherever needed. Consider the nonlinear three-phase unbalanced distribution branch flow model (BFM) <cit.>: v_j= v_i^Φ_ij-(S_i j z_i j^H+z_i j S_i j^H)+z_i j ℓ_i j z_i j^H, ∀(i, j)∈ℰ, diag(S_i j-z_i j ℓ_i j)-∑_k:(j,k)∈ℰ diag(S_j k)^Φ_j=-s_j,  ∀j ∈𝒩^+, [[ v_i^Φ_ij S_i j; S_i j^H ℓ_i j ]] ≽0, ∀(i,j) ∈ℰ, rank([[ v_i^Φ_ij S_i j; S_i j^H ℓ_i j ]])=1, ∀(i, j) ∈ℰ. Suppose voltage V_0 and v_0 at the slack bus are given and fixed. We use [p_i^ϕ]_ϕ∈Φ_i to denote the column vector of active power injections across all the phases of bus i, and so on. Define p:=[[p_1^ϕ]^⊤_ϕ∈Φ_1,…,[p_N^ϕ]^⊤_ϕ∈Φ_N]^⊤, q:=[[q_1^ϕ]^⊤_ϕ∈Φ_1,…,[q_N^ϕ]^⊤_ϕ∈Φ_N]^⊤ as the three-phase power injection vectors, and v:=[[v_1^ϕϕ]^⊤_ϕ∈Φ_1,…,[v_N^ϕϕ]^⊤_ϕ∈Φ_N]^⊤ as the three-phase squared voltage magnitude vector. The function v(p,q) is implicitly well defined by the nonlinear BFM (<ref>) under normal operating conditions <cit.>. Consider the following three-phase OPF problem: min_p, q ∑_i ∈𝒩∑_ϕ∈Φ_i f_i^ϕ(p_i^ϕ, q_i^ϕ) s.t. v ⩽v(p, q) ⩽v, (p_i^ϕ, q_i^ϕ) ∈𝒴_i^ϕ, ∀ϕ∈Φ_i,∀i ∈𝒩, where f_i^ϕ(p_i^ϕ, q_i^ϕ) is a strongly convex cost function (e.g., a quadratic function) of the controllable power injection at phase ϕ of bus i. 
The power injections are confined by compact convex set 𝒴_i^ϕ for each i∈𝒩, ϕ∈Φ_i, for instance, a box: 𝒴_i^ϕ={(p_i^ϕ, q_i^ϕ) |p_i^ϕ⩽ p_i^ϕ⩽p̅_i^ϕ, q_i^ϕ⩽ q_i^ϕ⩽q̅_i^ϕ}. §.§ Primal-dual Gradient Algorithm Let μ and μ be the dual variables associated with the left-hand-side (LHS) and right-hand-side (RHS) of (<ref>), respectively. To design a convergent primal-dual algorithm, we consider the regularized Lagrangian of OPF problem (<ref>): ℒ_ϵ(p, q ; μ, μ)= ∑_i ∈𝒩∑_ϕ∈Φ_i f_i^ϕ(p_i^ϕ, q_i^ϕ)-ϵ/2μ_2^2 +μ^⊤(v-v(p, q))+μ^⊤(v(p, q)-v), where μ:=[μ^⊤,μ^⊤]^⊤ and ϵ >0 is a regularization factor. The OPF problem (<ref>) is naturally nonconvex due to the rank constraint (<ref>). It is known that function (<ref>) is strongly concave in dual variables μ <cit.>. A saddle point of (<ref>) serves as an approximate sub-optimal solution to OPF problem (<ref>), with its error bounded in terms of ϵ <cit.>. A primal-dual gradient algorithm to approach a saddle point of function (<ref>) takes the following form, for all buses h ∈𝒩 and phases φ∈Φ_h: p_h^φ(t+1)= [p_h^φ(t)-σ_u∂f(u(t))/∂p_h^φ -σ_u∑_j ∈𝒩 ∑_ϕ∈Φ_j ∂v_j^ϕϕ (u(t))/∂p_h^φ (μ_j^ϕ(t)-μ_j^ϕ(t))]_𝒴_h^φ, q_h^φ(t+1)= [q_h^φ(t)-σ_u∂f(u(t))/∂q_h^φ -σ_u∑_j ∈𝒩 ∑_ϕ∈Φ_j ∂v_j^ϕϕ(u(t))/∂q_h^φ (μ_j^ϕ(t)-μ_j^ϕ(t))]_𝒴_h^φ, μ_h^φ(t+1)= [μ_h^φ(t)+σ_μ(v_h^φ-v_h^φφ(t)-ϵμ_h^φ(t))]_+, μ_h^φ(t+1)= [μ_h^φ(t)+σ_μ(v_h^φφ(t)-v_h^φ-ϵμ_h^φ(t))]_+, where u:=[p^⊤,q^⊤]^⊤ is the vector of controllable power injections, and f(u(t)):=∑_i∈𝒩∑_ϕ∈Φ_if_i^ϕ(p_i^ϕ,q_i^ϕ) is the objective function (<ref>). The subscripts [·]_𝒴^φ_h and [·]_+ represent the projections onto the feasible power injection region 𝒴_h^φ and the non-negative orthant, respectively. Positive constants σ_u and σ_μ are the step sizes corresponding to the primal and dual updates, respectively. As ∂ f/∂ p_h and ∂ f/∂ q_h are local at each bus and easy to compute, the main difficulty in implementing dynamics (<ref>) is the calculation of ∂v/∂p and ∂v/∂q. Prior methods <cit.> used either backward-forward sweep or matrix inverse to calculate ∂v/∂p and ∂v/∂q, which involve heavy computation and may not be feasible for large-scale networks. Moreover, they are only applied to single-phase networks, not the more realistic three-phase networks. To the best of our knowledge, no existing work is using exact ∂v/∂p and ∂v/∂q based on the nonlinear three-phase BFM (<ref>). The challenges mainly lie in the complex interdependence between variables and the nonconvexity and nonlinearity induced by the rank-one constraint (<ref>). A common linear approximation to the three-phase BFM, e.g., in <cit.>, assumes negligible power loss and balanced voltages to obtain a simple evaluation of ∂v/∂p and ∂v/∂q. As we will analyze shortly, this linear approximation will expose the power network to the risk of voltage violation. The voltage feedback <cit.> from the physical (power) network can partly resolve such violation caused by modeling error. However, ubiquitous feedback across a network is either expensive or unavailable due to limited measurement capabilities in existing systems. § IMPROVED GRADIENT EVALUATION IN THREE-PHASE NETWORKS §.§ Motivation for Improvement We motivate the proposed improved gradient evaluation method by introducing the limitation of an existing linearized three-phase model, in which the following approximations are made <cit.>. Approx. 1: The line losses are small, i.e., z_ijℓ_ij≪ S_ij, and are thus ignored. Approx. 
2: The voltages are nearly balanced, i.e., if Φ_i={a,b,c}, the three-phase voltage magnitudes are equal and the phase differences are close to 2π/3: V_i^a/V_i^b≈V_i^b/V_i^c≈V_i^c/V_i^a≈ e^𝐢2π/3. This leads to a linearized distribution network BFM: v̂_j= v̂_i^Φ_ij-(Ŝ_i j z_i j^H+z_i j Ŝ_i j^H), ∀(i,j)∈ℰ, Ŝ_i j=γ^Φ_ij Diag(Λ̂_i j),  ∀(i,j)∈ℰ, Λ̂_i j-∑_k: (j,k)∈ℰΛ̂_j k^Φ_j=-s_j,  ∀j ∈𝒩^+, where Λ̂_i j∈ℂ^|Φ_ij| is the column vector of sending-end three-phase power flow onto the line (i,j)∈ℰ, and constant matrix γ is defined as: γ:=ββ^H, where β=[[ 1; α; α^2 ]], α=e^-𝐢2 π / 3. The linearized BFM (<ref>) yields approximate gradients: ∂Λ̂_i j^ϕ/∂p_h^φ =1(ϕ=φ)1(j ∈ℙ_h), ∂Λ̂_i j^ϕ/∂q_h^φ =𝐢1(ϕ=φ)1(j ∈ℙ_h), ∂v̂_j^ϕϕ/∂p_h^φ = R_jh^ϕφ:=2 ∑_(s, t) ∈ℙ_j ∧h Re(z̅_s t^ϕφα^ϕ-φ ), ∂v̂_j^ϕϕ/∂q_h^φ = X_jh^ϕφ:=-2 ∑_(s, t) ∈ℙ_j ∧h Im( z̅_s t^ϕφα^ϕ-φ), where ℙ_j∧ h denotes the common part of the unique paths from bus j and h back to the root. Function 1(ϕ=φ) is an indicator that equals 1 if ϕ=φ and 0 otherwise; 1(j ∈ℙ_h)=1 if bus j lies on the unique path from bus h to the root, and 0 otherwise. Re(·) and Im(·) denote the real and imaginary parts of a complex number, respectively. Ignoring power loss (i.e., applying approximation Approx. 1) in the linearized BFM (<ref>) will induce an error in voltage prediction. To illustrate that error, we introduce an intermediate model that adds power loss back, while still making the balanced voltage approximation Approx. 2. This intermediate model is given by (<ref>) and (<ref>) below: S_i j =γ^Φ_ij Diag(Λ_i j),  ∀(i,j)∈ℰ, Λ_i j -∑_k: (j, k)∈ℰ Λ_j k^Φ_j=-s_j+diag(z_i j ℓ_i j),  ∀j ∈𝒩^+. The difference between v̂_h predicted by (<ref>) and v_h by (<ref>), (<ref>) for all h∈𝒩 is: v̂_h-v_h=∑_(ζ,ξ)∈ℙ_h(M_ζξz_ζξ^H+z_ζξM_ζξ^H-z_ζξℓ_ζξz_ζξ^H), where M_ζξ depends on the power losses downstream of bus ξ as: M_ζξ = γ^Φ_ζξDiag(diag(z_ζξℓ_ζξ)) +∑_(α,β)∈down(ξ)γ^Φ_αβDiag(diag(z_αβℓ_αβ)). See Appendix <ref>. Lemma <ref> reveals that the lossless model (<ref>) optimistically estimates the squared voltage magnitudes at all the buses, namely, even though the lossless model predicts the voltages to be safe, the actual voltages under the same power injections might already violate their safety limits. The need to prevent such voltage violation motivates us to develop an improved gradient evaluation method rather than simply using (<ref>) from the lossless model (<ref>). A similar voltage error analysis was provided, and an improved gradient evaluation method was proposed in <cit.> and its extended version on arXiv, for single-phase networks. This paper extends such analysis and method to three-phase unbalanced networks, as elaborated below. §.§ Improved Gradient Evaluation A major challenge for calculating the accurate gradients from BFM (<ref>) is the presence of the positive semidefinite (PSD) and rank-1 constraints (<ref>)–(<ref>). To deal with that, we relax (<ref>)–(<ref>) into equation (<ref>) in Lemma <ref> below. This relaxation extends the one in <cit.>. A necessary condition for PSD rank-1 constraints (<ref>)–(<ref>) is that for all ϕ,ψ, η∈Φ_ij, the following holds: v_i^ϕϕ ℓ_ij^ψη=S_ij^ϕηS_ij^ϕψ. The proof of Lemma <ref> is straightforward from the linear dependence of any pair of rows (columns) in the rank-1 matrix. As mentioned in Section <ref>, we replace (<ref>) with (<ref>), where (<ref>) utilizes (<ref>) based on the balanced voltage approximation Approx. 2, while (<ref>) preserves the term diag(z_ijℓ_ij) related to power loss. 
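As a quick sanity check on the Lemma, the snippet below builds a rank-1 block [[v_i, S_ij],[S_ij^H, ℓ_ij]] from a random voltage/current pair and verifies the identity numerically; the identity holds with a complex conjugate on one of the S factors, i.e., v_i^{ϕϕ} ℓ_ij^{ψη} = conj(S_ij^{ϕψ}) S_ij^{ϕη}. The construction and variable names are ours, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder three-phase voltage at bus i and current on line (i, j)
V = rng.normal(size=3) + 1j * rng.normal(size=3)
I = rng.normal(size=3) + 1j * rng.normal(size=3)

# Rank-1 PSD data: v_i = V V^H, S_ij = V I^H, l_ij = I I^H
v = np.outer(V, V.conj())
S = np.outer(V, I.conj())
l = np.outer(I, I.conj())

# Necessary condition of the Lemma: v[phi,phi] * l[psi,eta] = conj(S[phi,psi]) * S[phi,eta]
err = max(abs(v[p, p] * l[s, e] - np.conj(S[p, s]) * S[p, e])
          for p in range(3) for s in range(3) for e in range(3))
print(f"max violation of the rank-1 identity: {err:.1e}")  # floating-point noise, ~1e-15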
We call the modified model (<ref>), (<ref>), (<ref>) the BFM with balanced voltage approximation (BFM-BVA). Based on this model, the pertinent partial derivatives with respect to power injections u_h^φ:=[p_h^φ,q_h^φ]^⊤ at all buses h∈𝒩 and phases φ∈Φ_h are calculated as follows: ∂v_j^ϕϕ/∂u_h^φ=∂v_i^ϕϕ/∂u_h^φ-(∑_ψ∈Φ_ij∂S_i j^ϕψ/∂u_h^φ z_i j^ϕψ+∑_ψ∈Φ_ijz_i j^ϕψ ∂S_i j^ϕψ/∂u_h^φ) +∑_η∈Φ_ij∑_ψ∈Φ_ijz_i j^ϕψ ∂ℓ_i j^ψη/∂u_h^φ z_i j^ϕη,  ∀(i,j) ∈ℰ,  ∀ϕ∈Φ_ij, ∂S_i j/∂u_h^φ=γ^Φ_i j Diag(∂Λ_i j/∂u_h^φ),  ∀(i,j)∈ℰ, ∂Λ_ij^ϕ/∂u_h^φ-∑_ψ∈Φ_ijz_ij^ϕψ∂ℓ_ij^ψϕ/∂u_h^φ-∑_k: (j, k) ∈ℰ∂Λ_j k^Φ_jϕ/∂u_h^φ=-∂s_j^ϕ/∂u_h^φ,  ∀j ∈𝒩^+, ∀ϕ∈Φ_j, ∂ℓ_ij^ψη/∂u_h^φ=S_ij^ϕψ/v_i^ϕϕ∂S_ij^ϕη/∂u_h^φ+S_ij^ϕη/v_i^ϕϕ∂S_ij^ϕψ/∂u_h^φ- ℓ_ij^ψη/v_i^ϕϕ∂v_i^ϕϕ/∂u_h^φ,  ∀(i,j)∈ℰ,  ∀ψ,η,ϕ∈Φ_ij, where v_i^ϕϕ, namely the ϕ-th diagonal element in matrix v_i, is the squared voltage magnitude at bus i, phase ϕ. To decouple the complex interdependence of variables and partial derivatives in (<ref>), we simplify ∂ℓ/∂u by replacing the partial derivatives therein with their approximations from the linearized model (<ref>): ∂ℓ̂_ij^ψη/∂u_h^φ=S_ij^ϕψ/v_i^ϕϕ∂Ŝ_ij^ϕη/∂u_h^φ+S_ij^ϕη/v_i^ϕϕ∂Ŝ_ij^ϕψ/∂u_h^φ- ℓ_ij^ψη/v_i^ϕϕ∂v̂_i^ϕϕ/∂u_h^φ. By (<ref>), we calculate ∂ℓ̂/∂u as: ∂ℓ̂_ij^ψη/∂p_h^φ= -1/v_i^ϕϕ[ℓ_ij^ψηR_ih^ϕφ+ (S_ij^ϕψα^ϕ-φ1(η=φ).. ..+S_ij^ϕηα^(ϕ-φ)1(ψ=φ))1(j ∈P_h)], ∂ℓ̂_ij^ψη/∂q_h^φ= -1/v_i^ϕϕ[ℓ_ij^ψηX_ih^ϕφ+ (𝐢S_ij^ϕψα^ϕ-φ1(η=φ).. ..+𝐢S_ij^ϕηα^(ϕ-φ)1(ψ=φ))1(j ∈P_h)]. Replacing the partial derivatives on the RHS of (<ref>) with (<ref>), (<ref>) and (<ref>), we get the following improved gradient evaluation: ∂v_j^ϕϕ/∂p_h^φ=∂v̂_i^ϕϕ/∂p_h^φ-(∑_ψ∈Φ_ij∂Ŝ_i j^ϕψ/∂p_h^φ z_i j^ϕψ+∑_ψ∈Φ_ijz_i j^ϕψ ∂Ŝ_i j^ϕψ/∂p_h^φ) +∑_ψ∈Φ_ij∑_η∈Φ_ijz_i j^ϕψ ∂ℓ̂_i j^ψη/∂p_h^φ z_i j^ϕη =(1-1/v_i^ϕϕ∑_ψ∈Φ_ij∑_η∈Φ_ijℓ_ij^ψηz_ij^ϕψz_ij^ϕη)R_ih^ϕφ+1(j ∈P_h)· (2Re(z_ij^ϕφα^ϕ-φ)-2/v_i^ϕϕ∑_ψ∈Φ_ijRe(α^ϕ-φS_ij^ϕψz_ij^ϕψz_ij^ϕφ)), ∂v_j^ϕϕ/∂q_h^φ=∂v̂_i^ϕϕ/∂q_h^φ-(∑_ψ∈Φ_ij∂Ŝ_i j^ϕψ/∂q_h^φ z_i j^ϕψ+∑_ψ∈Φ_ijz_i j^ϕψ ∂Ŝ_i j^ϕψ/∂q_h^φ) +∑_ψ∈Φ_ij∑_η∈Φ_ijz_i j^ϕψ ∂ℓ̂_i j^ψη/∂q_h^φ z_i j^ϕη =(1-1/v_i^ϕϕ∑_ψ∈Φ_ij∑_η∈Φ_ijℓ_ij^ψηz_ij^ϕψz_ij^ϕη)X_ih^ϕφ+1(j ∈P_h)· (-2Im(z_ij^ϕφα^ϕ-φ)+2/v_i^ϕϕ∑_ψ∈Φ_ijIm(α^ϕ-φS_ij^ϕψz_ij^ϕψz_ij^ϕφ)). The major difference of the improved gradient (<ref>) from the previous approximate gradient (<ref>) is the following (similar to that between (<ref>) and (<ref>)). By (<ref>), we have R_jh^ϕφ=R_ih^ϕφ+2Re(z_ij^ϕφα^ϕ-φ)·1(j ∈ℙ_h), which can convert (<ref>) into: ∂v_j^ϕϕ/∂p_h^φ=R_jh^ϕφ-R_ih^ϕφ/v_i^ϕϕ∑_ψ∈Φ_ij∑_η∈Φ_ijℓ_ij^ψηz_ij^ϕψz_ij^ϕη -(2/v_i^ϕϕ∑_ψ∈Φ_ijRe(α^ϕ-φS_ij^ϕψz_ij^ϕψz_ij^ϕφ))·1(j ∈P_h). The first term on the RHS of (<ref>) is exactly (<ref>), while the second and the third terms partly compensate for the error due to ignoring power loss. Compared to the improved gradient evaluation in <cit.> for single-phase networks, the proposed evaluation (<ref>) considers mutual impedance between different phases in three-phase networks. § IMPROVED HIERARCHICAL OPF ALGORITHM Based on the improved gradients (<ref>) and the primal-dual algorithm (<ref>), we design a scalable algorithm to solve the OPF problem (<ref>). This algorithm shares a hierarchical structure similar to that in <cit.>, but is more reliable by using the proposed improved gradient evaluation, which can alleviate the voltage violation induced by ignoring power loss. The tree network 𝒯:={𝒩^+,ℰ} is divided into subtrees 𝒯_k={𝒩_k,ℰ_k} indexed by k∈𝒦={1,…,K} and a set of buses 𝒩_0 that are not clustered into any subtree. 
Let n_k^0 denote the root bus of subtree k, which is the bus in 𝒩_k that is nearest to the root bus of the whole network. The division of subtrees and unclustered buses may not be unique, but we assume it always satisfies the following conditions: All the subtrees are non-overlapping, i.e., 𝒩_k_1∩𝒩_k_2=∅ for any k_1,k_2 ∈𝒦,k_1 ≠ k_2. For any subtree root bus n_k^0, k ∈𝒦, or any unclustered bus n ∈𝒩_0, its path to the network root bus only goes through a subset of buses in 𝒩_0, but not any bus in another subtree. §.§ Three-Phase Hierarchical Algorithm The left subfigure of Figure <ref> shows a clustering of the IEEE 37-bus network that satisfies both Assumptions <ref> and <ref>, while the one in the right subfigure satisfies Assumption <ref> only, but not Assumption <ref>. How to optimally cluster a tree network under these two conditions is beyond the scope of this paper but is one of the interests for our future research. The above settings facilitate a scalable hierarchical management of the power network. Each subtree k∈𝒦 is represented by its root bus n_k^0 and managed by a regional controller (RC) that is also indexed by k. The subtree root buses together with the unclustered buses form a reduced tree, which is managed by a central coordinator (CC). Both the CC and RCs maintain two-way communication with their respective buses being managed. The hierarchical structure of CC-RC-subtree was proposed in <cit.> and now we adapt it to incorporate the improved gradient evaluation (<ref>). In particular, the primal-dual algorithm (<ref>) can be implemented locally and separately at each bus, except for the term ∂v/∂u^⊤(μ-μ) that in principle couples the entire network across buses. We now explain how to leverage the hierarchical structure to accelerate computation of this coupling term. We only elaborate the terms associated with active power injection p_h^φ at phase φ of bus h ∈𝒩, as those related to reactive power q_h^φ are calculated similarly. There are two cases to consider: Case 1: If h ∈𝒩_k is in subtree k ∈𝒦, we have: ∑_j ∈𝒩∑_ϕ∈Φ_j ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ)=∑_ϕ∈Φ_0∑_j ∈𝒩_k^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ) +∑_ϕ∈Φ_0[∑_k' ∈𝒦\{k} ∑_j ∈𝒩_k'^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ) +∑_j ∈𝒩_0^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ)], where Φ_0 denotes the phase set of the whole network, and 𝒩_k^ϕ (or 𝒩_0^ϕ) denotes the subset of buses in subtree k (or the subset of unclustered buses) that have phase ϕ. The first term on the RHS of (<ref>) sums over the buses in the same subtree k as bus h, which can be calculated by RC k using (<ref>). Specifically, from a power-flow solution in each iteration, RC k gets the current, power flow, and voltage (ℓ_ij, S_ij, v_i) associated with each bus j∈𝒩_k^ϕ, while it also knows impedance z_ij and whether j ∈ℙ_h. Upon gathering such information within subtree k, RC k conducts the calculation (<ref>), and then sends the result to bus h. The third term on the RHS of (<ref>) sums over unclustered buses j ∈𝒩_0^ϕ. Since all such buses are located in the reduced tree, this term can be calculated by the CC using (<ref>). Moreover, from the outside of subtree k, all of its buses can be represented by its root bus n_k^0. In particular, R_ih^ϕφ=R_in_k^0^ϕφ and 1(j∈ℙ_h) = 1(j∈ℙ_n_k^0) are known to the CC for the calculation of (<ref>). The second term on the RHS of (<ref>) sums over all buses j in subtrees other than subtree k that hosts bus h. By Assumption <ref>, there must be j ∉ℙ_h. 
Moreover, there is R_ih^ϕφ=R_n_k^0 n_k'^0^ϕφ for all h ∈𝒩_k,j ∈𝒩_k'^ϕ, and i being the unique upstream bus of j. This, by (<ref>), simplifies this term into: ∑_k' ∈𝒦\{k} ∑_j ∈𝒩_k'^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ)=∑_k' ∈𝒦\{k} R_n_k^0 n_k'^0^ϕφ· ∑_j ∈𝒩_k'^ϕ(1-1/v_i^ϕϕ∑_ψ∈Φ_ij∑_η∈Φ_ijℓ_ij^ψηz_ij^ϕψz_ij^ϕη)(μ_j^ϕ-μ_j^ϕ). In (<ref>), the summation over j ∈𝒩_k'^ϕ is calculated by RC k' and then uploaded to the CC. The CC receives such results from all RCs k' ∈𝒦\{k} and adds them up after weighting by R_n_k^0 n_k'^0^ϕφ. The CC finally adds the second and third terms on the RHS of (<ref>) over all phases ϕ∈Φ_0, and then sends the result to each RC k. Then RC k broadcasts that result to all the buses in subtree k, which then calculates (<ref>) and (<ref>). Case 2: If h ∈𝒩_0 is an unclustered bus, we have: ∑_j ∈𝒩∑_ϕ∈Φ_j∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ)=∑_ϕ∈Φ_0[∑_j ∈𝒩_0^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ) +∑_k ∈𝒦 ∑_j ∈𝒩_k^ϕ∂v_j^ϕϕ/∂p_h^φ(μ_j^ϕ-μ_j^ϕ)]. The idea to calculate (<ref>) through information exchange between bus h, all RCs k∈𝒦, and the CC is the same as that for the second and third terms in (<ref>). The only difference is that there is no longer a subtree managed by bus h, so the process becomes even simpler. To summarize, the computation of the key term ∂v/∂u^⊤(μ-μ) in the primal-dual algorithm (<ref>) can be performed through the coordination of the CC, RCs, and unclustered buses. They form a hierarchical structure, which inspires us to design Algorithm <ref> to accelerate such computation. Compared to the conventional centralized primal-dual gradient method, Algorithm <ref> can reduce computational complexity as can be analyzed in the same way as <cit.>. However, compared to <cit.> based on a lossless model, our improved gradient evaluation considers the effect of power loss on voltage estimation and thus more reliably guarantees voltage safety. Hierarchical OPF Algorithm For Three-Phase Unbalanced Distribution Networks §.§ Convergence Analysis In this part, we further justify the significance of improved gradient evaluation by analyzing the convergence of Algorithm <ref> to the local optimal point(s) of the actual nonconvex problem (<ref>) with improved gradient evaluation. To facilitate our analysis, we rewrite Lagrangian (<ref>) in a compact form: ℒ_ϵ(u,μ)=f(u)+μ^⊤(Av(u)-d)-ϵ/2μ^2_2, where A and d are the transformation matrices to make (<ref>) aligned with (<ref>), i.e., A=[diag(1_N), diag(-1_N)]^⊤, and d=[v^⊤,-v^⊤]^⊤. The dynamics (<ref>) with improved gradient evaluation (<ref>) to approach a saddle point of (<ref>) take the form: u(t+1)=[u(t)-σ(∇f(u(t))+∂v(u(t))/∂u^⊤A^⊤μ(t))]_𝒴, μ(t+1)=[μ(t)+νσ(Av(u(t))-d-ϵμ(t))]_+. A primal-dual local optimal point (u^*,μ^*) of the nonconvex OPF (<ref>) satisfies the KKT condition: u^*∈𝒴, μ^*≥0, ∇f(u^*)+∂v(u^*)/∂u^⊤A^⊤μ^* ∈-N_𝒴(u^*), Av(u^*)-d≥0, μ^*⊤(Av(u^*)-d)=0, where N_𝒴(u^*) is the normal cone of convex set 𝒴 at u^*∈𝒴, defined as {y∈ℝ^2(N-1):y^⊤(u-u^*)≤ 0, ∀u∈𝒴}. The conditions, under which the KKT point above is globally optimal, were discussed in <cit.>. However, their analyses were based on single-phase networks. The conditions to guarantee global optimality of (<ref>) in three-phase networks remain an open question for our future research. The discrepancy between gradient evaluation (<ref>) and the actual gradient given by the nonlinear model (<ref>) is bounded, i.e., there exists positive constant e_1 such that for all power injections u:=(p,q)∈𝒴: ∂v(u(t))/∂u-∂v(u(t))/∂u≤e_1. Assumption <ref> generally holds by our inference in Section <ref>. 
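Written out in code, one step of these projected dynamics can be expressed compactly as below. This is a sketch with our own function and argument names: dv_du stands for whichever Jacobian estimate (linearized or improved) is plugged in, v_of_u returns the squared voltage magnitudes from a power flow solution, and A, d are the stacking matrices defined above.

import numpy as np

def primal_dual_step(u, mu, grad_f, dv_du, v_of_u, A, d, sigma, nu, eps, u_lo, u_hi):
    # u+  = [ u  - sigma * ( grad f(u) + (dv/du)^T A^T mu ) ]_Y   (box projection onto Y)
    # mu+ = [ mu + nu * sigma * ( A v(u) - d - eps * mu ) ]_+      (projection onto the nonnegative orthant)
    u_next = np.clip(u - sigma * (grad_f(u) + dv_du(u).T @ (A.T @ mu)), u_lo, u_hi)
    mu_next = np.maximum(0.0, mu + nu * sigma * (A @ v_of_u(u) - d - eps * mu))
    return u_next, mu_next

Iterating this map is exactly the dynamics whose convergence is analyzed next.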
We define: L_v(δ)=sup_τ:τ≤δ∂v(u^*+τ)/∂u, M_μ= μ^*, M_v(δ)=sup_τ:τ≤δv(u^*+τ)-v(u^*)-∂v(u^*+τ)/∂u^⊤τ/τ^2. We provide a general explanation to the above definitions. L_v(δ) represents the Lipschitz constant and M_v(δ) characterizes the nonlinearity of function v(u) within a neighborhood of radius δ around the primal optimizer u^*. See <cit.> and references therein. Our difference lies in analyzing the impact of approximate gradient on the convergence of dynamics (<ref>), while references <cit.> assumed an accurate gradient. We define: Δ=inf_τ:τ≤δ∫_0^1∇_uu^2ℒ_ϵ(u^*+θτ,μ^*)dθ, and ρ(δ,σ,e_1)= [max{(1-σΔ)^2,(1-νσϵ)^2}+σAν^1/2 ·[(1-νσϵ)(M_v(δ)δ+e_1)+σ|Δ-νϵ|(L_v(δ)+e_1)] +νσ^2A^2(L_v(δ)+e_1)^2]^1/2. To measure the convergence, we define the norm: z_ν^2:=u^2+ν^-1μ^2. We then have the following theorem regarding convergence of dynamic (<ref>). Suppose there exists σ>0, ν>0, ϵ>0, and δ>0 such that: ρ(δ,σ,e_1)<1. And the initialization satisfies: z(1)-z^*_ν≤δ. The dynamics (<ref>) converge to a neighborhood of z^* as: lim sup_t →∞z(t)-z^*_ν= √(2)σM_μ(Ae_1+ϵν^1/2)/1-ρ(δ,σ,e_1). See Appendix <ref>. We now discuss the impacts of gradient error e_1. The first term on the RHS of (<ref>) √(2)σ M_μAe_1/1-ρ(δ,σ,e_1) is nearly proportional to e_1, the approximate gradient error. Moreover, ρ(δ,σ,e_1) is a monotonically decreasing function of e_1, which again indicates that (<ref>) is strictly decreasing with e_1. Reducing e_1 through an improved gradient evaluation will result in a better solution (i.e., a smaller sub-optimality gap) returned by dynamics (<ref>). § NUMERICAL RESULTS We consider two test systems. The first is the IEEE 123-bus test feeder. The second is a 4,518-node test feeder (where we adopt the convention that a “node” refers to a certain phase at a certain bus), which is composed of the primary side of the IEEE 8,500-node system and the ERPI Ckt7 system (with the secondary side lumped into primary-side transformers). Both feeders have unbalanced three phases. For the proposed hierarchical implementation, the 123-bus feeder is clustered as shown in Figure <ref>. The clustering of the 4,518-node feeder follows <cit.>. For simplicity, the loads on all unclustered buses are set as fixed. We make the following modifications to the original network models on the IEEE PES website (https://cmte.ieee.org/pes-testfeeders/resources/): * All the loads are treated as constant-power loads. The delta-connected loads are omitted, and detailed models of capacitors, regulators, and breakers are not simulated. * We double the original load data (used as the nominal case) of the 123-bus feeder to create scenarios with serious under-voltage issues. For each phase φ and bus h, we denote its nominal net power injection (which is negative for a load) by (p_h^φ, q_h^φ). The feasible power injection regions are defined as 𝒴_h^φ={(p_h^φ, q_h^φ) |p_h^φ⩽ p_h^φ⩽ 0.3p_h^φ <0, q_h^φ⩽ q_h^φ⩽ 0.3q_h^φ<0}. The objective function in OPF problem (<ref>) is defined as f_h^φ(p_h^φ, q_h^φ)=(p_h^φ-p_h^φ)^2+(q_h^φ-q_h^φ)^2 for each controllable load, which aims to minimize the disutility caused by the deviation from the nominal loads. For each test feeder, we fix the voltage magnitude of the root (slack) bus at 1.05 per unit (p.u.), and set the lower and upper bounds for safe voltage at 0.95 p.u. and 1.05 p.u., respectively. For the 123-bus feeder, the step sizes for primal and dual updates are chosen as σ_u=1.5× 10^-2 and σ_μ=5.7× 10^-3, respectively. For the 4,518-node feeder, we set σ_u=7× 10^-4 and σ_μ=3× 10^-5. 
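A small helper of the following sort can be used to screen candidate step sizes against the contraction condition of the theorem; the constants Δ, L_v, M_v, M_μ, and the norm of A below are made-up placeholders, since their values for the test feeders are not reported.

import numpy as np

def rho(delta, sigma, e1, Delta, nu, eps, L_v, M_v, norm_A):
    # Direct transcription of the contraction factor rho(delta, sigma, e1) in the theorem
    t1 = max((1 - sigma * Delta) ** 2, (1 - nu * sigma * eps) ** 2)
    t2 = sigma * norm_A * np.sqrt(nu) * ((1 - nu * sigma * eps) * (M_v * delta + e1)
                                         + sigma * abs(Delta - nu * eps) * (L_v + e1))
    t3 = nu * sigma ** 2 * norm_A ** 2 * (L_v + e1) ** 2
    return np.sqrt(t1 + t2 + t3)

def asymptotic_gap(sigma, e1, M_mu, eps, nu, norm_A, r):
    # Steady-state bound: sqrt(2) * sigma * M_mu * (||A|| e1 + eps sqrt(nu)) / (1 - rho)
    return np.sqrt(2.0) * sigma * M_mu * (norm_A * e1 + eps * np.sqrt(nu)) / (1.0 - r)

# Illustrative numbers only: check rho < 1, then evaluate the tracking-error bound
r = rho(delta=0.2, sigma=0.1, e1=0.02, Delta=2.0, nu=1.0, eps=1.0, L_v=1.0, M_v=0.5, norm_A=np.sqrt(2))
print(r < 1.0, asymptotic_gap(sigma=0.1, e1=0.02, M_mu=1.0, eps=1.0, nu=1.0, norm_A=np.sqrt(2), r=r))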
Both of them are chosen empirically such that ρ(δ,σ,e_1)<1. For each given power injection u:=[p^⊤,q^⊤]^⊤, the OpenDSS platform is used to simulate the three-phase unbalanced power flow. The time for solving power flow with OpenDSS and interacting with OpenDSS is counted in our computational efficiency analysis. The nodal voltages V(u), line power flows Λ, and currents I can be read from the simulator. We then convert (V(u),Λ,I) to (v(u),S,ℓ), which are the variables in the BFM (<ref>). In the implementation, if we observe μ_j^ϕ(t)-μ_j^ϕ(t)=0 at any time t, we skip the associated gradient computation as it has no impact on (<ref>) and (<ref>). The Python 3.7 programs for the proposed OPF algorithm and the OpenDSS simulation are run on a laptop equipped with an Intel Core i7-9750H CPU @ 2.6 GHz, 16 GB RAM, and Windows 10 Professional OS.

§.§ Voltage Safety

The voltage magnitudes at all the nodes of each of the two test feeders are plotted in Figure <ref>. Three cases are demonstrated in the figure. The first case, referred to as “no control", takes the nominal power injections (p,q), under which severe under-voltage violation is observed. The second case, referred to as “linear control", applies the primal-dual algorithm proposed in <cit.> for voltage regulation based on simplified gradient evaluation using the lossless linearized BFM (<ref>). It enhances voltage safety to some extent, but still leaves many nodes below the lower limit. The third case, referred to as “improved control", implements the proposed Algorithm <ref> based on the improved gradient evaluation (<ref>), which successfully lifts all the nodes into the safe voltage range. This result verifies our analysis in Section <ref> that our method can prevent the voltage violation caused by ignoring power loss in the previous model linearization.

§.§ Computational Efficiency

Figure <ref> shows the evolution of the voltages at nodes from three different subtrees of each of the IEEE 123-bus feeder and the 4,518-node feeder. The dashed lines, corresponding to “linear control", converge to a point where under-voltage violation occurs. In contrast, the solid lines, corresponding to “improved control", verify again that our method can prevent such voltage violation. Moreover, Table <ref> displays the time to run 2,000 iterations (which is more than needed for convergence, as shown in Figure <ref>) in the 4,518-node feeder. In the table, the proposed “improved control” is compared with both the hierarchical and centralized implementations of “linear control” using the lossless model (the implementations in <cit.> and <cit.>, respectively). We observe that the proposed hierarchical algorithm with improved gradient evaluation saves over 50% of the computation time compared to the centralized method, even though the latter uses a simpler lossless model. Compared to the previous hierarchical method using the lossless model, the proposed method with improved gradient evaluation is slower by 14% to 36% in different subtrees and by 76% in the whole network, due to the existence of bottleneck subtrees in our sequential simulation environment. Such an increase in computation time can be alleviated by a truly parallel implementation over subtrees. It is also an acceptable compromise considering the improved voltage safety of the proposed method.
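For concreteness, the sketch below shows the shape of the measurement-driven loop used in these experiments, collapsed to a single machine, i.e., without the CC/RC message passing of Algorithm <ref>. Here solve_power_flow stands in for the OpenDSS interface and sens for the sensitivity assembly of Section <ref>; all names are ours rather than code from our implementation.

import numpy as np

def feedback_primal_dual(p, q, v_lo, v_hi, p_lo, p_hi, q_lo, q_hi,
                         sigma_u, sigma_mu, eps, grad_f, sens, solve_power_flow, iters=2000):
    n = v_lo.size
    mu_lo, mu_hi = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        v, S, l = solve_power_flow(p, q)          # feedback: squared voltage magnitudes, flows, currents
        mu_lo = np.maximum(0.0, mu_lo + sigma_mu * (v_lo - v - eps * mu_lo))
        mu_hi = np.maximum(0.0, mu_hi + sigma_mu * (v - v_hi - eps * mu_hi))
        lam = mu_lo - mu_hi                       # zero entries contribute nothing and can be skipped
        R, X = sens(v, S, l)                      # estimates of dv/dp and dv/dq from the improved formulas
        gp, gq = grad_f(p, q)
        p = np.clip(p - sigma_u * (gp + R.T @ lam), p_lo, p_hi)
        q = np.clip(q - sigma_u * (gq + X.T @ lam), q_lo, q_hi)
    return p, q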
§.§ Comparison with SDP relaxation We compare the proposed “improved control” by running 300 iterations (which is more than needed for convergence as shown in the left panel of Figure <ref>), and compare its result with SDP relaxation in IEEE 37-bus feeder and IEEE 123-bus feeder with the same setup. We omit 4,518-node feeder due to the extremely high computational burden of SDP relaxation. The generic optimization solver sedumi <cit.> based on Matlab 2016a is used to solve the SDP relaxation of OPF problem (<ref>). Detailed models about SDP relaxation of (<ref>) are referred to <cit.>. Table <ref> displays the results for cases where voltage constraints (<ref>) are not binding or binding, by setting different nominal load data. In the case that voltage constraints are not binding, both methods can solve OPF. The proposed “improved control” achieves ×1.97 speedup in IEEE 37-bus feeder and ×10.91 speedup in IEEE 123-bus feeder. However, in the case that voltage constraints are binding, SDP relaxation fails to solve the OPF (<ref>) due to violation of the rank-one constraint (<ref>) <cit.>. The proposed improved control method still works in this case by returning a feasible and near-optimal solution. Moreover, the proposed method achieves ×2.60 speedup in IEEE 37-bus feeder and ×12.19 speedup in IEEE 123-bus feeder. § CONCLUSION We proposed a more accurate gradient evaluation method for three-phase unbalanced distribution networks. The resultant gradients with a blocked structure allow us to further design a scalable hierarchical OPF algorithm. The proposed method prevents the previous voltage violation induced by the negligence of power loss in a linearized model, while achieving satisfactory computational efficiency, as verified by our theoretical analyses and numerical results. In the future, we shall further improve the accuracy of gradient evaluation with more efficient computation. We will devote more effort to theoretical analyses of convergence and optimality of the proposed algorithm. Besides characterizing algorithm performance with respect to a saddle point or a KKT point (which is often locally optimal for a nonconvex problem), we are interested in exploring its global sub-optimality bound. We are also adapting the proposed method to a time-varying setting that incorporates time-coupling constraints (e.g., for energy storage) to enable fast online solutions. § PROOF OF LEMMA <REF> Take the difference between the intermediate model (<ref>), (<ref>) and the linearized lossless model (<ref>): v̂_j-v_j = v̂_i^Φ_ij-v_i^Φ_ij-((Ŝ_i j-S_ij) z_i j^H +z_i j( Ŝ_i j-S_ij)^H)- z_i j ℓ_i j z_i j^H, Ŝ_ij-S_ij =γ^Φ_ijDiag(Λ̂_ij-Λ_ij), Λ̂_ij-Λ_ij =∑_k:(j,k)∈ℰ(Λ̂_jk^Φ_j-Λ_jk^Φ_j)-diag(z_ijℓ_ij). Notice that there exists a power flow error term Ŝ_ij-S_ij on the RHS of (<ref>), and this error is calculated in (<ref>)–(<ref>). Following the tree structure of the network, we recursively calculate Ŝ_ij-S_ij through (<ref>)–(<ref>) until reaching leaves: Ŝ_ij-S_ij= ∑_k:(j,k)∈ℰ(Ŝ_jk^Φ_j-S_jk^Φ_j) -γ^Φ_ijDiag(diag(z_ijℓ_ij)), Ŝ_jk-S_jk= ∑_l:(k,l)∈ℰ(Ŝ_kl^Φ_k-S_kl^Φ_k) -γ^Φ_jkDiag(diag(z_jkℓ_jk)), … Ŝ_th-S_th= -γ^Φ_thDiag(diag(z_thℓ_th)), for leaf buses h. The bus indices in (<ref>) are illustrated in Figure <ref>. 
Recursively substituting such equations for each layer of the network into its upper layer, i.e., (<ref>), …, into (<ref>), then into (<ref>), we have: Ŝ_ij -S_ij= -∑_k:(j,k)∈ℰ∑_l:(k,l)∈ℰ…∑_h:(t,h)∈ℰγ^Φ_thDiag(diag(z_thℓ_th)) -...-∑_k:(j,k)∈ℰ∑_l:(k,l)∈ℰγ^Φ_klDiag(diag(z_klℓ_kl)) -∑_k:(j,k)∈ℰγ^Φ_jkDiag(diag(z_jkℓ_jk)) -γ^Φ_ijDiag(diag(z_ijℓ_ij)). In fact, the RHS of (<ref>) is related to the power losses downstream of bus i. Define M_ij:=S_ij-Ŝ_ij, which can be written as: M_ij =γ^Φ_ijDiag(diag(z_ijℓ_ij)) +∑_(α,β)∈down(j)γ^Φ_αβDiag(diag(z_αβℓ_αβ)). From the tree structure, we have: M_ij=∑_k:(j,k)∈ℰM_jk+γ^Φ_ijDiag(diag(z_ijℓ_ij)). The relationship (<ref>) indicates a layered relationship in the tree network. A backward sweep from the leaves to the root will return M_ij,  ∀ (i,j)∈ℰ. Based on M_ij, the voltage error (<ref>) can be written as: v̂_j-v_j= v̂_i^Φ_ij-v_i^Φ_ij+(M_ijz_ij^H+z_ijM_ij^H)-z_ijℓ_ijz_ij^H. Equation (<ref>) implies an accumulated voltage error from the root to leaf buses. A forward sweep from the root to leaves will return the voltage errors at all buses h∈𝒩 as: v̂_h-v_h=∑_(ζ,ξ)∈ℙ_h(M_ζξz_ζξ^H+z_ζξM_ζξ^H-z_ζξ ℓ_ζξz_ζξ^H). This completes the proof of Lemma 1. § PROOF OF THEOREM <REF> The KKT condition (<ref>) implies: u^*=[u^*-σ(∇f(u^*)+∂v(u^*)/∂u^⊤A^⊤μ^*)]_𝒴, μ^*=[μ^*+νσ(Av(u^*)-d)]_+. Comparing (<ref>) and (<ref>), and by the non-expansion of projection and the definition (<ref>) of Δ, we have: u(t+1)-u^*^2≤ (1-σΔ)(u(t)-u^*)-σ∂v(u(t))/∂u^⊤A^⊤(μ(t)-μ^*) . . -σ(∂v(u(t))/∂u-∂v(u(t))/∂u)^⊤A^⊤μ^*^2, μ(t+1)-μ^*^2 ≤(1-νσϵ)(μ(t)-μ^*)-νσϵμ^*. .+νσA (v(u(t))-v(u^*))^2. Moreover, the definitions (<ref>) and Assumption <ref> imply: ∂v(u(t))/∂u^⊤(μ(t)-μ^*)≤(L_v(δ)+e_1)μ(t)-μ^*, v(u(t))-v(u^*) ≤L_v(δ)u(t)-u^*, (∂v(u(t))/∂u-∂v(u(t))/∂u)^⊤μ^*≤e_1M_μ and v(u(t))-v(u^*)-∂v(u(t))/∂u^⊤(u(t)-u^*) ≤(M_v(δ)δ+e_1)u(t)-u^*. By using the norm z_ν^2 and combining the bounds in (<ref>) and (<ref>), we have: z(t+1)-z^*_ν^2 ≤ ((1-σΔ)^2+νσ^2A^2L_v^2(δ))u(t)-u^*^2 +((1-νσϵ)^2+νσ^2A^2(L_v(δ)+e_1)^2)ν^-1μ(t)-μ^*^2 +2σ[(1-νσϵ)(M_v(δ)δ+e_1)+σ|Δ-νϵ|(L_v(δ)+e_1)] ·Au(t)-u^*μ(t)-μ^* +2σM_μϵ ·[νσAL_v(δ)u(t)-u^*+(1-νσϵ)μ(t)-μ^*] +2σM_μe_1 A ·[(1-σΔ)u(t)-u^*+σA(L_v(δ)+e_1)μ(t)-μ^*] +σ^2 M_μ^2(A^2e_1^2+ϵ^2ν). Furthermore, by using Young's inequality, we have: u(t)-u^*μ(t)-μ^* ≤ 1/2√(ν)(u(t)-u^*^2+ν^-1μ(t)-μ^*^2). And 2√(2)σM_μϵν^1/2[ν^1/2σAL_v(δ)/√(2)u(t)-u^* +(1-νσϵ)/√(2)ν^-1/2μ(t)-μ^*] ≤2√(2)σM_μϵν^1/2[νσ^2A^2L_v^2(δ)u(t)-u^*^2 +(1-νσϵ)^2ν^-1μ(t)-μ^*^2]^1/2 , 2√(2)σM_μe_1A[(1-σΔ)/√(2)u(t)-u^* +ν^1/2σA(L_v(δ)+e_1)/√(2)ν^-1/2μ(t)-μ^*] ≤2√(2)σM_μe_1A[(1-σΔ)^2u(t)-u^*^2 +νσ^2A^2(L_v(δ)+e_1)^2ν^-1μ(t)-μ^*^2]^1/2. Then by combining (<ref>), (<ref>) with (<ref>), and by the definition ρ(δ,σ,e_1) in (<ref>), we can show that: z(t+1)-z^*_ν^2 ≤ρ^2(δ,σ,e_1)z(t)-z^*_ν^2 +2√(2)σM_μ(ϵν^1/2+e_1 A)ρ(δ,σ,e_1)z(t)-z^*_ν + (√(2)σM_μ(Ae_1+ϵν^1/2))^2. It is equivalent to: z(t+1)-z^*_ν ≤ρ(δ,σ,e_1)z(t)-z^*_ν +√(2)σM_μ(Ae_1+ϵν^1/2) ≤ρ^t(δ,σ,e_1)z(1)-z^*_ν +1-ρ^t(δ,σ,e_1)/1-ρ(δ,σ,e_1)√(2)σM_μ(Ae_1+ϵν^1/2). Since ρ(δ,σ,e_1)<1, it follows that: lim sup_t →∞z(t)-z^*_ν=√(2)σM_μ(Ae_1+ϵν^1/2)/1-ρ(δ,σ,e_1). This completes the proof of Theorem 1. 1 IEEEtran bai2008semidefinite X. Bai, H. Wei, K. Fujisawa, and Y. Wang, “Semidefinite programming for optimal power flow problems,” Int. J. Electr. Power Energy Syst., vol. 30, no. 6–7, pp. 383–392, 2008. farivar2013branch M. Farivar and S. H. Low, “Branch flow model: Relaxations and convexification—part I,” IEEE Trans. Power Syst., vol. 28, no. 3, pp. 2554–2564, 2013. low2014convex S. H. 
Low, “Convex relaxation of optimal power flow—part I: Formulations and equivalence,” IEEE Trans. Control Netw. Syst., vol. 1, no. 1, pp. 15–27, 2014. low2014convexii S. H. Low, “Convex relaxation of optimal power flow—part II: Exactness,” IEEE Trans. Control Netw. Syst., vol. 1, no. 2, pp. 177–189, 2014. baran1989optimalC M. Baran and F. F. Wu, “Optimal sizing of capacitors placed on a radial distribution system,” IEEE Trans. Power Del., vol. 4, no. 1, pp. 735–743, 1989. baran1989optimalCii M. Baran and F. F. Wu, “Optimal capacitor placement on radial distribution systems,” IEEE Trans. Power Del., vol. 4, no. 1, pp. 725–734, 1989. Bernstein2018loadflow A. Bernstein, C. Wang, E. Dall’Anese, J. -Y. Le Boudec and C. Zhao, “Load flow in multiphase distribution networks: existence, uniqueness, non-singularity and linear models," IEEE Trans. Power Syst., vol. 33, no. 6, pp. 5832–5843, 2018. ESchweitzer2020lossy E. Schweitzer, S. Saha, A. Scaglione, N. G. Johnson and D. Arnold, “Lossy distflow formulation for single and multiphase radial feeders," IEEE Trans. Power Syst., vol. 35, no. 3, pp. 1758–1768, 2020. RCheng2022onlinelinear R. Cheng, Z. Wang and Y. Guo, “An online feedback-based linearized power flow model for unbalanced distribution networks," IEEE Trans. Power Syst., vol. 37, no. 5, pp. 3552–3565, 2022. molzahn2019survey D. K. Molzahn and I. A. Hiskens, “A survey of relaxations and approximations of the power flow equations,” Found. Trends Electr. Energy Syst., pp. 1–221, 2019. dall2013distributed E. Dall'Anese, H. Zhu, and G. B. Giannakis, “Distributed optimal power flow for smart microgrids,” IEEE Trans. Smart Grid, vol. 4, no. 3, pp. 1464–1475, 2013. erseghe2014distributed T. Erseghe, “Distributed optimal power flow using ADMM,” IEEE Trans. Power Syst., vol. 29, no. 5, pp. 2370–2380, 2014. zhang2014optimal B. Zhang, A. Y. Lam, A. D. Domínguez-García, and D. Tse, “An optimal and distributed method for voltage regulation in power distribution systems,” IEEE Trans. Power Syst., vol. 30, no. 4, pp. 1714–1726, 2015. peng2016distributed Q. Peng and S. H. Low, “Distributed optimal power flow algorithm for radial networks, I: Balanced single phase case,” IEEE Trans. Smart Grid, vol. 9, no. 1, pp. 111–121, 2018. peng2015distributed Q. Peng and S. H. Low, “Distributed algorithm for optimal power flow on an unbalanced radial network,” in Proc. 54th IEEE Conf. Decision Control, 2015, pp. 6915–6920. SMagnusson2020distributed S. Magnússon, G. Qu and N. Li, “Distributed optimal voltage control with asynchronous and delayed communication," IEEE Trans. Smart Grid, vol. 11, no. 4, pp. 3469–3482, 2020. NPatari2022distributed N. Patari, A. Srivastava and N. Li, “Distributed optimal voltage control considering latency and asynchronous communication for three phase unbalanced distribution systems," IEEE Trans. Power Syst., 2022. dall2016optimal E. Dall’Anese and A. Simonetto, “Optimal power flow pursuit,” IEEE Trans. Smart Grid, vol. 9, no. 2, pp. 942–952, 2018. tang2017distributed Y. Tang and S. Low, “Distributed algorithm for time-varying optimal power flow,” in Proc. 56th IEEE Conf. Decision Control, 2017, pp. 3264–3270. zhou2019hierarchical X. Zhou, Z. Liu, W. Wang, C. Zhao, F. Ding, and L. Chen, “Hierarchical distributed voltage regulation in networked autonomous grids,” in Proc. Amer. Control Conf., 2019, pp. 5563–5569. zhou2019accelerated X. Zhou, Z. Liu, C. Zhao, and L. 
Chen, “Accelerated voltage regulation in multi-phase distribution networks based on hierarchical distributed algorithm,” IEEE Trans. Power Syst., vol. 35, no. 3, pp. 2047–2058, 2019. gan2016online L. Gan and S. H. Low, “An online gradient algorithm for optimal power flow on radial networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 3, pp. 625–638, 2016. bernstein2019real A. Bernstein and E. Dall’Anese, “Real-time feedback-based optimization of distribution grids: A unified approach,” IEEE Trans. Control Newt. Syst., vol. 6, no. 3, pp. 1197–1209, 2019. bolognani2014distributed S. Bolognani, R. Carli, G. Cavraro, and S. Zampieri, “Distributed reactive power feedback control for voltage regulation and loss minimization,” IEEE Trans. Autom. Control, vol. 60, no. 4, pp. 966–981, 2015. liang2022hierarchical H. Liang, X. Zhou, and C. Zhao, “Hierarchical optimal power flow with improved gradient evaluation,” in Proc. Amer. Control Conf., 2022, pp. 4547–4552. gan2014convex L. Gan and S. H. Low, “Convex relaxations and linear approximation for optimal power flow in multiphase radial networks,” in Proc. Power Syst. Comput. Conf., 2014, pp. 1–9. zhao2017optimal C. Zhao, E. Dall-Anese, and S. H. Low, “Optimal power flow in multiphase radial networks with delta connections,” in Proc. IREP Bulk Power Syst. Dyn. Control Symp., 2017, pp. 1–7. cwang2018explicit C. Wang, A. Bernstein, J. -Y. Le Boudec and M. Paolone, “Explicit conditions on existence and uniqueness of load-flow solutions in distribution networks," IEEE Trans. SmartGrid, vol. 9, no. 2, pp. 953–962, 2018. koshal2011multiuser J. Koshal, A. Nedić, and U. V. Shanbhag, “Multiuser optimization: Distributed algorithms and error analysis,” SIAM J. Optim., vol. 21, no. 3, pp. 1046–1081, 2011. tang2018feedback Y. Tang, E. Dall'Anese, A. Bernstein and S. H. Low, “A feedback-based regularized primal-dual gradient method for time-varying nonconvex optimization," in Proc. IEEE Conf. Decis. Control, 2018, pp. 3244–3250. tang2017realtime Y. Tang, K. Dvijotham and S. Low, “Real-Time Optimal Power Flow," IEEE Trans. Smart Grid, vol. 8, no. 6, pp. 2963–2973, Nov. 2017. zhou2020multi X. Zhou, Y. Chen, Z. Liu, C. Zhao, and L. Chen, “Multi-level optimal power flow solver in large distribution networks,” in Proc. IEEE Int. Conf. Smart Grid Communications, 2020, pp. 1–6. jha2021network R. R. Jha and A. Dubey, “Network-level optimization for unbalanced power distribution system: Approximation and relaxation,” IEEE Trans. Power Syst., vol. 36, no. 5, pp. 4126–4139, 2021. tang2022running Y. Tang, E. Dall'Anese, A. Bernstein and S. H. Low. “Running primal-dual gradient method for time-varying nonconvex problems,” SIAM J. Control Optim., 60(4), 1970–1990. jfsturn1999sedumi J. F. Sturm, “ Using sedumi 1.02, a matlab toolbox for optimization over symmetric cones,” Optim. Methods Softw., vol. 11, no. 1-4, pp. 625–653, 1999. gan2015exact L. Gan, N. Li, U. Topcu and S. H. Low, “Exact convex relaxation of optimal power flow in radial networks," IEEE Trans. Autom. Control, vol. 60, no. 1, pp. 72–87, 2015.
http://arxiv.org/abs/2306.10888v2
20230619123600
Transcorrelated selected configuration interaction in a bi-orthonormal basis and a cheap three-body correlation factor
[ "Abdallah Ammar", "Anthony Scemama", "Emmanuel Giner" ]
physics.chem-ph
[ "physics.chem-ph" ]
[email protected] [email protected] Laboratoire de Chimie Théorique, Sorbonne Université and CNRS, F-75005 Paris, France In this work, we develop a mathematical framework for a Selected Configuration Interaction (SCI) algorithm within a basis for transcorrelated (TC) calculations. The basis used here serves as the equivalent of the standard Hartree Fock (HF) orbitals. However, within the context of TC, it leads to distinct orbitals for the left and right vectors. Our findings indicate that the use of such a basis allows for a proper definition of the frozen core approximation. In contrast, the use of HF orbitals results in bad error cancellations for ionization potentials and atomization energies (AE). Compared to HF orbitals, the optimized basis significantly reduces the positive part of the second-order energy (PT2), thereby facilitating the use of standard extrapolation techniques of hermitian SCI. While we did not observe a significant improvement in the convergence of the SCI algorithm, this is largely due to the use in the present work of a simple three-body correlation factor introduced in a recent study. This correlation factor, which depends only on atomic parameters, eliminates the need for re-optimization of the correlation factor for molecular systems, making its use straightforward and user-friendly. Despite the simplicity of this correlation factor, we were able to achieve accurate results on the AE of a series of 14 molecules in a triple-zeta basis. We also successfully broke a double bond until the full dissociation limit while maintaining the size consistency property. This work thus demonstrates the potential of the BiO-TC-SCI approach in handling complex molecular systems. Transcorrelated selected configuration interaction in a bi-orthonormal basis and a cheap three-body correlation factor Emmanuel Giner 19 June 2023 ====================================================================================================================== FCIFull Configuration Interaction FROGGfrozen Gaussian geminal CIConfiguration Interaction SCISelected CI ENEpstein Nesbet QMCQuantum Monte Carlo AOAtomic Orbital MOMolecular Orbital HFHartree-Fock CASComplete Active Space VMCVariational Monte Carlo DMCDiffusion Monte Carlo TCTranscorrelated CASSCFcomplete active space self consistent field fcfrozen core FCIQMCFull Configuration Interaction Quantum Monte Carlo SCFSelf Consistent Field RHFRestricted Hartree-Fock UHFUnrestricted Hartree-Fock ROHFRestricted Open-shell Hartree-Fock DIISDirect Inversion in the Iterative Subspace LSLevel-Shifting MPMøller-Plesset MPPT2[MP2]Møller-Plesset Perturbation Theory at second order BiOBi-Orthogonal OrtOrthogonal MPSMatrix Product State MBPT2[MBPT(2)]Many Body Perturbation Theory to the second order LCCSDLinearized Coupled Cluster Singles and Doubles DMRGDensity Matrix Renormalization Group CCCoupled Cluster DFTDensity Functional Theory WFTWave Function Theory VarVariational CBSComplete Basis Set IPIonization Potential AEAtomization Energy § INTRODUCTION In the expansive field of electronic structure calculations, the TC method Boys_Handy_1969_determination offers an appealing path toward an accurate description of atomic and molecular systems. The TC framework uses a linear combination of Slater determinants multiplied by a correlation factor, thus enabling the inclusion of correlation effects with both real space space and orbital space representations. 
The specificity of the TC methodology is its explicit incorporation of the correlation factor's effects into the Hamiltonian via a similarity transformation Hirschfelder-JCP-63, Boys_Handy_1969_determination. This yields a non-Hermitian TC Hamiltonian, which although forbids the use of the variational principle for wave function optimization, grants the capacity to make the wave function's expression more compact, and accelerates the convergence toward the complete basis set limit. Furthermore, the effective interaction within the TC Hamiltonian is limited to three-body terms at most, which facilitates the deterministic calculation of all integrals necessary for the optimization of the wave function. This characteristic eliminates the requirement for stochastic sampling of the N-body integrals, a process typically seen in VMC methods. There are two distinct but connected aspects involved in the optimization of the wave function within the TC framework: i) the optimization of the correlation factor used to perform the similarity transformation and ii) the optimization of the Slater-determinant part of the wave function. The role of the correlation factor is quite simple to understand: it fundamentally involves decreasing the probability of finding two electrons at small inter-electronic distances. However, there exists a wide variety of functional forms for the correlation factor which can be broadly divided into two categories: universal two-body correlation factors and three-body correlation factors. Universal correlation factors dependent solely on inter-electronic coordinates and create an homogeneous and isotropic correlation hole throughout the whole space. In contrast, three-body correlation factors present a richer parametrisation, enabling for instance the adaptation of the correlation hole's depth and extension based on the distance of an electron pair from a specific nucleus. The FROGG, proposed by Ten-No TenNo_2000_feasible, was optimized for valence electron pairs and can be seen as the prototype for the universal correlation factor. An alternative universal correlation factor was derived from the range-separated density functional theory Gin-JCP-21, which depends on a single parameter μ that tailors the shape of the correlation hole. The utilization of a universal correlation factor is appealing due to its minimal optimization requirements. However, it is crucial to note that when enforcing a correlation hole designed for valence electrons in high-density regions, there is a significant demand for flexibility in the wave function, enough to adapt the density in the core regions. Conversely, three-body correlation factors (for instance, see Refs. Boys_Handy_1969_calculation,MosSchLeeKal-JCP-82-Monte,SchMos-JCP-90,UmrWilWil-PRL-88,MosSch-JCP-92,MusNig-JCP-94,HuaUmrNig-JCP-97,GucJeoUmrJai-APS-05,LopSetDruNee-APS-12,AusZubLes-CR-12,LucStuSchHag-JCP-2015 and references therein) employ explicit electron-electron-nucleus coordinates, allowing for the adjustment of the correlation hole's depth and spatial extension according to the system's density, albeit at the expense of extensive optimization. Recently, we introduced a minimal three-body correlation factor <cit.> which necessitates limited optimization. The performance of this correlation factor will be investigated here. When it comes to optimizing the Slater-determinant part of the TC wave function, it is important to note that the TC Hamiltonian, as any N-body operator, can be expressed in second quantization. 
The latter implies that any conventional wave function ansatz from quantum chemistry can be adapted to fit within the TC framework. However, the bi-orthonormal nature of the left and right eigenvectors in the TC Hamiltonian suggests potential change of strategy to choose the basis. Specifically, the latter could be a pair of bi-orthonormal one-electron basis sets to expand both the left and right eigenvectors. This is in contrast to the standard practice of using a single orthonormal one-electron basis set. Furthermore, replacing a single set of orbitals with two distinct and optimizable sets increases the flexibility of both left and right eigenvectors. This change leads to a decrease in the error associated with the TC energy, which is directly related to the product of the errors in the left and right eigenvectors. Constraining these two sets to form a couple of bi-orthonormal basis helps in defining creation and annihilation operators. These operators fulfill the anti-commutation rules of standard fermionic operators in second-quantized formulations, simplifying the process of adapting any conventional wave function method to the TC framework. Another important aspect of the TC framework is that the variational principle does not hold anymore due to the non hermitian nature of the TC Hamiltonian. One can nevertheless substitute the energy minimization by the so-called bi variational functional <cit.>. The latter involves finding stationary points of an energy functional depending on both a left and right function. As an example, the optimization of the energy with respect to orbital parameters at the single determinant level necessary leads to two sets of bi orthonormal orbitals adapted for either the left or right eigenvectors. Taking these peculiar aspects into consideration, we distinguish here between formalisms that use left and right orbitals (termed frameworks) and those that employ a common set of orbitals (denoted as formalisms). Historically, an framework utilizing a single Slater determinant was often employed <cit.>. Fimple and Unwin's pioneering work <cit.> introduced a CI expansion within a framework, illustrated on the simple case of the ground state of the Helium atom. Subsequently, the formalism was significantly expanded by Ten-No et al with MPPT2 and LCCSD using the FROGG correlation factor in the early 2000s  <cit.>. These advances, coupled with efforts on the three-body terms <cit.>, enabled calculations on small organic molecules. Recent developments using an framework were proposed by several groups using FCIQMC <cit.>, CC <cit.>, MPS <cit.> or CI <cit.>. Periodic systems also saw applications using either a single Slater determinant, MPPT2 , CI or CC within the framework  <cit.>, and later developments using a framework were proposed <cit.>. In one of our previous work <cit.>, we introduced SCI for the TC framework using a single set of orthonormal molecular orbitals within a bi-variational scheme. We observed that enhancing the quality of both the left and right wave functions stabilizes the computation of the second-order perturbative correction. The latter is crucial as it makes the SCI approach competitive by significantly improving the convergence rate of calculations and enabling extrapolation techniques <cit.>. The quality of the MO is a critical aspect for the practical application of SCI. 
Specifically, the optimization of orbitals in SCI allows for a more compact CI expansion, either through variationally optimized orbitals <cit.> or state-average natural orbitals for excited state excitation energies <cit.>. We would also like to stress the importance of orbital optimization, especially in the context of one of SCI's primary applications: achieving near CASSCF with a large active space <cit.>. When considering orbital optimization in a bi-variational framework, the framework is a natural choice. Hence, the present article aims to further develop SCI within a fully bi-orthonormal and bi-variational framework, i.e. with different left and right molecular orbitals. In this work, we use a linearized version of the recently developed cheap three-body correlation factor <cit.>. However, we want to highlight that the BiO-SCI proposed here can also be applied with more sophisticated correlation factors, leading to improved convergence properties of the present method. The structure of the article is the following. Section <ref> provides a brief overview of the TC framework, then Section <ref> gives a summary of the bi-variational principle, which is formalized in framework in Section <ref> and then applied to the SCI algorithm in Section <ref>. We give the explicit form of the correlation factor used here in Section <ref>. We present the results in Section <ref>, which begins with an investigation of the benefit of using a framework for SCI in Section <ref> specially regarding the fc approximation which is fundamental in view to apply the TC framework to large systems. Then, we illustrate the convergence of the present BiO-TC-SCI algorithm in Section <ref> on the F_2, N_2 and CO molecules, and show that the usual linear extrapolation is possible within our framework. We continue our study by computing the dissociation curve of the CO molecule, which allows us to test the present framework in different regimes of correlation together with the size consistency property. We conclude the present work by computing the atomization energies of a set of 14 molecules in increasing basis sets and compare with both the usual SCI algorithm and estimated exact results. We observe that in most of the case, the accuracy of a calculation in the cc-pVTZ basis set is within 1 kcal/mol with respect to the estimated exact results. We emphasize that no optimization of the correlation factor was needed for molecular systems thanks to the specific form of the correlation factor used here <cit.>. Eventually, we summarize the main results in Section <ref>. § THEORY §.§ Basics of the transcorrelated formalism The general form of the transcorrelated Hamiltonian for a symmetric correlation factor 12 is given by ≡ e^-Ĥ e^ = Ĥ + [ Ĥ,] + 1/2[ [Ĥ,],], where = ∑_i<ju(i,j) and Ĥ = ∑_i -1/2∇^2_i + v(_i) + ∑_i<j 1/r_ij. Eq. (<ref>) leads to the following transcorrelated Hamiltonian = Ĥ - ∑_i<jij - ∑_i<j<kijk, where the effective two- and three-body operators 12 and 123 are defined as 12 = 1/2( Δ_1 12 + Δ_2 12 + (∇_1 12) ^2 + (∇_2 12) ^2 ) + ∇_1 12·∇_1 + ∇_2 12·∇_2, and 123 = ∇_1 12·∇_1 13 + ∇_2 21·∇_2 23 + ∇_3 31·∇_3 32 . As apparent from the definition of Eq.(<ref>), is not Hermitian as = e^+Ĥ e^-, and a given eigenvalue is associated with a couple of right and left eigenvectors = = . Nevertheless, thanks to the property of similarity transformations, the spectrum of coincides with that of the usual Hamiltonian. 
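The structure of K̂(1,2) simplifies considerably when the factor depends only on r_12: its purely multiplicative part involves Δ_1 u(r_12) = u''(r_12) + 2u'(r_12)/r_12 and (∇_1 u)^2 = u'(r_12)^2. The short numerical check below verifies these radial identities by central finite differences for a toy factor u(r) = r/(2(1+r)), chosen for illustration only; it is not the correlation factor used later in this work.

import numpy as np

def u(r):                       # toy r12-only factor, for illustration only
    return r / (2.0 * (1.0 + r))

def du(r):                      # analytic u'(r)
    return 0.5 / (1.0 + r) ** 2

def d2u(r):                     # analytic u''(r)
    return -1.0 / (1.0 + r) ** 3

r2 = np.array([-0.4, 0.1, -0.1])             # electron 2 held fixed
def u_cart(r1):                              # u as a function of electron 1 coordinates
    return u(np.linalg.norm(r1 - r2))

r1, h = np.array([0.3, -0.2, 0.5]), 1e-4
lap, grad = 0.0, np.zeros(3)
for k in range(3):                           # central differences, coordinate by coordinate
    e = np.zeros(3)
    e[k] = h
    lap += (u_cart(r1 + e) - 2.0 * u_cart(r1) + u_cart(r1 - e)) / h ** 2
    grad[k] = (u_cart(r1 + e) - u_cart(r1 - e)) / (2.0 * h)

r12 = np.linalg.norm(r1 - r2)
print("Laplacian_1 u :", lap, "  vs  u'' + 2 u'/r12 :", d2u(r12) + 2.0 * du(r12) / r12)
print("(grad_1 u)^2  :", grad @ grad, "  vs  u'(r12)^2 :", du(r12) ** 2)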
From thereon, we no longer include the explicit dependence on the correlation factor u, and instead, we will use , |Χ_i⟩, and |Φ_i⟩ to represent the TC Hamiltonian and its corresponding left and right eigenvectors. §.§ Non-Hermitian eigenvalue problems and the bi-variational principle Due to the non-Hermitian nature of the TC Hamiltonian, the standard energy minimization approach cannot be used to optimize wavefunctions since the variational principle does not apply. Instead, the search for an energy minimum over a wavefunction Ψ can be replaced by the search for a stationary point of a functional that depends on two wavefunctions Χ and Φ = ΧΦ/⟨Χ|Φ⟩, and Χ and Φ are often referred to as the left and right wavefunctions, respectively. An eigenvalue is obtained when either the left or right function is an eigenfunction. = Χ/⟨Χ|⟩ = ∀Χ, = Χ_iΦ/⟨Χ_i|Φ⟩ = ∀Φ. Hence, finding and the corresponding left and right eigenvectors is equivalent to nullifying the right and left functional derivatives Χ = 0 ∀ Χ, Φ = 0 ∀ Φ, which have general forms given by Χ = ( Φ) ⟨Χ|Φ⟩ - ΧΦΦ/|⟨Χ|Φ⟩|^2, Φ = ( ^†Χ) ⟨Χ|Φ⟩ - ΧΦΧ/|⟨Χ|Φ⟩|^2. This is the so-called bi-variational principle <cit.>. It is worth highlighting that setting the left functional derivative to zero for all left wavefunctions Χ enables the determination of the optimal right wavefunction (and vice versa). §.§ Transcorrelation in a bi-orthonormal framework In this section we briefly summarize how the TC Hamiltonian is written in a bi-orthonormal basis (see Sec. <ref>) and also recall the mathematical framework together with the self-consistent field equations used to obtain the bi-orthonormal basis (see Sec. <ref>). §.§.§ Bi-orthonormal framework and second quantization In practice, the TC Hamiltonian is projected into a one-particle basis set = P̂^P̂^, where P̂^ is the projector onto the Hilbert space spanned by the one-particle basis set . Because of the properties of the similarity transformation the exact eigenvalue E_i is recovered in the CBS limit lim_→CBS = E_i, and given that some of the correlation effects are accounted for by the correlation factor, we can anticipate a faster convergence of compared to wave function methods based on the standard Hamiltonian. For the sake of simplicity in notation, we will henceforth drop the exponent and refer directly to as . In a second quantization framework, rather than using a standard basis set consisting of real-valued orthonormal spatial MO ϕ_i() to express the operator, a more flexible approach involves using distinct MO for the left and right functions. The left wave functions are expanded on the set of left real-valued orbitals ={χ_i(),i=1,…,n } while the right wave functions are expanded on the set of right real-valued orbitals ={ϕ_i(),i=1,…,n }. Similar to the usual orthonormal framework, if the two bases and are chosen to satisfy the biorthonormal relation ⟨χ_i|ϕ_j⟩ = δ_ij, one can build creation operators k,σ and annihilation operators l,λ (where k and l are labels of spin-free orbitals, and σ and λ are labels of spins) that satisfy the usual anticommutation relations <cit.> [k,σ,l,λ]_+ = 0, [k,σ,l,λ]_+ = 0, [k,σ,l,λ]_+ = δ_klδ_σλ, and avoid the complications introduced by overlap integrals inherent to the use of non-orthonormal basis functions <cit.>. 
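As a purely numerical illustration of the condition ⟨χ_i|ϕ_j⟩ = δ_ij, the sketch below builds a set of left orbitals from an arbitrary set of right orbitals through the inverse of their overlap matrix (the relation C_L = C_R S^{-1} invoked later in this section). The AO overlap and the right coefficients are random toy matrices, not orbitals produced by the TC self-consistent procedure.

import numpy as np

rng = np.random.default_rng(1)
n_ao, n_mo = 8, 5

# Toy symmetric positive-definite AO overlap matrix and arbitrary (non-orthogonal) right MOs.
A = rng.normal(size=(n_ao, n_ao))
S_ao = A @ A.T + n_ao * np.eye(n_ao)
C_R = rng.normal(size=(n_ao, n_mo))

# Overlap between the right orbitals, and left orbitals built through its inverse.
S = C_R.T @ S_ao @ C_R
C_L = C_R @ np.linalg.inv(S)

# Check the bi-orthonormality condition <chi_i|phi_j> = delta_ij.
print(np.allclose(C_L.T @ S_ao @ C_R, np.eye(n_mo)))   # True

In the present work the two sets are of course not random but are obtained by iteratively diagonalizing the non-Hermitian TC-Fock operator described below; the sketch only shows that the bi-orthonormality constraint is straightforward to impose numerically.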
As a result, expressing an operator in second quantization using the biorthonormal bases and involves two straightforward rules: (i) replace the conventional creation and annihilation operators k,σ and k,σ with the biorthonormal basis creation and annihilation operators k,σ and k,σ, and (ii) write the integrals of the operator using the functions χ_i in the bra and ϕ_j in the ket. Following these rules the operator can be written in a second-quantized form on the two bi-orthonormal basis sets as follows = ∑_k ∈∑_i ∈∑_σ∈{↑,↓} h_ki k,σ i,σ + 1/2∑_k,l ∈∑_i,j ∈∑_σ,λ∈{↑,↓}( V_ij^kl - ) k,σ l,λ j,λ i,σ - 1/6∑_k,l,n ∈∑_i,j,m ∈∑_σ,λ,κ∈{↑,↓} k,σ l,λ n,κ m,κ j,λ i,σ, where h_ki and V_ij^kl are the integrals of the usual one- and two-electron operators, respectively, expressed in the bi-orthonormal basis, h_ki = ∫d χ_k() ĥ ϕ_i(), = ∫d1 d2 χ_k(1) χ_l(2) 1/r_12 ϕ_i(1) ϕ_j(2), and and are the two- and three-electron integrals corresponding to the effective two- and three-body operators 12 and 123 expressed in the bi-orthonormal basis = ∫d1 d2 χ_k(1) χ_l(2) 12 ϕ_i(1) ϕ_j(2), = ∫d1 d2 d 3 χ_k(1) χ_l(2) χ_n(3) 123 ϕ_i(1) ϕ_j(2) ϕ_m(3). An important consequence of the bi-orthonormal framework is the one-to-one correspondence between the left and right Slater determinants even if they are built with different functions. To illustrate this, let us consider a left Slater determinant ⟨Χ_I| = ∏_i∈𝒮_Χ_I^N ⟨0|i where 𝒮_Χ_I = { k } is the ordered list of indices of left orbitals χ_k() occupied in ⟨Χ_I|. In a similar way, let us consider a right Slater determinant |Φ_J⟩ = ∏_i∈𝒮_Φ_J^N i|0⟩ where 𝒮_Φ_J = { l } is the ordered list of indices of right orbitals ϕ_l() occupied in |Φ_J⟩. Because of the bi-orthonormality relation, these two Slater determinants are orthogonal only if the two sets 𝒮_Χ_I and 𝒮_Φ_J are different ⟨Χ_I|Φ_J⟩ = 0 if 𝒮_Χ_I𝒮_Φ_J , = 1 if 𝒮_Χ_I = 𝒮_Φ_J . As a consequence, to each right Slater determinant |Φ_I⟩ one can associate a unique left Slater determinant |Χ_I⟩ with an identical list of indices of occupied orbitals, although the orbitals composing these two Slater determinants are themselves different. §.§.§ Optimization of bi-orthonormal orbitals in the TC framework The selection of orbitals used to expand the wave function is a crucial factor in enhancing the accuracy of any approximated wave function ansatz. In the Hermitian case, the starting point is often the set of HF orbitals, as they minimize the energy of a single Slater determinant wave function. However, in the case of the TC Hamiltonian, which is a non-Hermitian operator, the expectation value of a single Slater determinant cannot be minimized. Instead, one can seek a stationary point of the functional , where both Χ and Φ are single-determinant wave functions. It's important to note that these wave functions are not necessarily built with the same orbitals. Using reference determinants Χ_0 and Φ_0 constructed on two bi-orthonormal basis sets, and respectively, and with the same orbital occupancy, we can write the left and right single Slater determinant wave function as follows |Φ[κ̂]⟩ = e^κ̂|Φ_0⟩, |Χ[κ̂]⟩ = e^κ̂|Χ_0⟩ . Here κ̂ are anti hermitian orbital rotation operators κ̂ = ∑_p>qκ_pqÊ_pq, with the operators Ê_pq being defined as Ê_pq = ∑_σ∈{↑,↓}p,σq,σ, and where the κ_pq are the orbital rotation parameters forming the matrix κ. The fact that we use a single set of orbital parameters κ even if we optimize two distinct functions is due to the bi-orthonormal condition which ensures ⟨Χ[κ̂]|Φ[κ̂]⟩ = Χ_0e^-κ̂e^κ̂Φ_0 =⟨Χ_0|Φ_0⟩ =1. 
In other words, the bi orthonormal condition imposes that the left orbitals are related to the right orbitals through the matrix 𝐒^-1, where 𝐒 is the overlap matrix between the right orbitals. Another point is that the matrix of the orbital parameters κ is no longer anti hermitian but fulfills a pseudo anti hermitian relation (κ)^† = - 𝐒κ𝐒^-1. The equivalent of the standard energy minimization condition in the HF framework translates within the TC context in seeking a stationary point of the following functional: Ẽ[κ̂] = Χ_0e^-κ̂ e^κ̂Φ_0/Χ_0e^-κ̂e^κ̂Φ_0. and, using Eq. (<ref>), the functional Ẽ[κ̂] can be written up to first-order in κ̂ as Ẽ[κ̂] = Χ_0Φ_0 - Χ_0κ̂Φ_0 + Χ_0κ̂Φ_0 + o(|κ̂|^2). Differentiating Eq. (<ref>) with respect to κ_pq and evaluating at κ=0 leads to Ẽ[κ̂]κ_pq|_κ=0 = Χ_0Ê_pqΦ_0 - Χ_0Ê_pqΦ_0. We assume here closed shell determinants and label the occupied and virtual orbitals by i and a, respectively. By noticing that ⟨Χ_0|Ê_ai= 0 = Ê_ia|Φ_0⟩, canceling the derivative with respect to occupied-virtual orbital rotation parameters leads to Ẽ[κ̂]κ_ia|_κ=0 = 0 ⇔ Χ_0Ê_iaΦ_0 = 0, and Ẽ[κ̂]κ_ai|_κ=0 = 0 ⇔ Χ_0Ê_aiΦ_0 = 0, which give the left and right Brillouin conditions for the set of orbitals composing |Χ_0⟩ and |Φ_0⟩. Of course, all this derivation is trivially extendable to the open shell case. Just like in the Hermitian case, these Brillouin conditions can be fulfilled by iteratively diagonalizing a Fock-like operator. Within the TC framework, the corresponding Fock operator is non-Hermitian and constructed not with a standard density, but with a transition density between the left and right functions. Using matrix notations, the transition density 𝐃 can be written in the AO basis as 𝐃 = 𝐂_L^†𝐂_R, where 𝐂_L (𝐂_R) represents the matrix of coefficients of occupied left (right) orbitals on the AO basis. The interested reader can look for instance in Ref. AmmSceGin-JCTC-2023 where an explicit form of the Fock operator is given. §.§ Development of selected CI in a bi-orthonormal framework In this section we describe the theoretical background of the TC-SCI in the bi-orthonormal framework. This requires first the development of a Rayleigh-Schrödinger perturbation theory using the bi-variational principle as the starting point and adapting it to a bi-orthonormal basis (Sec. <ref>). We then give the explicit algorithm used for our BiO-TC-SCI algorithm in Sec. <ref>, together with the technical details in Sec. <ref>. §.§.§ Perturbation theory of the functional Following Ref. AmmSceGin-JCP-22 we give the perturbation expansion of the functional for the ground-state energy. Here, the derivation is adapted to the bi-orthonormal framework. Let the Hamiltonian be split into = + λ. We assume that the left and right eigenvectors of are known, |Φ_i⟩ = ϵ_i |Φ_i⟩ ^†|Χ_i⟩ = ϵ_i |Χ_i⟩, and that they form a bi-orthonormal set ⟨Χ_i|Φ_j⟩ = δ_ij. In the case of a SCI theory within an EN<cit.> partitioning, |Χ_0⟩ and |Φ_0⟩ are the ground state left and right eigenvectors of (which can be multi configurational) while the |Χ_i⟩ and |Φ_i⟩ for i>0 are Slater determinants such that ⟨Χ_0|Φ_i⟩ = ⟨Χ_i|Φ_0⟩=0. The ground-state energy can be obtained by evaluating , i.e. evaluating the functional at the right ground-state wave function, but the choice of Χ remains. Based on our previous numerical study<cit.>, we choose |Χ⟩=|Χ_0⟩, i.e. the left eigenfunction of . The functional then reads = Χ_0 + λ/⟨Χ_0|⟩. 
We now expand the ground-state energy in powers of λ = ∑_k=0^∞λ^k Ẽ^(k), which therefore implies to also expand the right eigenvector in powers of λ |⟩ = ∑_k=0^∞λ^k |Φ^(k)⟩, where |Φ^(k)⟩ is the correction to the right ground state wave function at the order k which is expanded in the basis of the right eigenvectors of assuming intermediate bi-orthonormalization |Φ^(k)⟩ = ∑_i c_i^(k)|Φ_i⟩, ⟨Χ_0|Φ^(k)⟩ = 0 if k 0, which implies, because of the bi-orthonormality property, that ⟨Χ_0|⟩=1. Truncating up to second order Eq. (<ref>) leads to = Ẽ^(0) + λẼ^(1) + λ^2 Ẽ^(2), which then yields Ẽ^(0) = Χ_0Φ^(0), Ẽ^(1) = Χ_0Φ^(0) , Ẽ^(2) = Χ_0Φ^(1) . To obtain the equation for the perturbed wave function one replaces the expressions of both |⟩ and in Eq. (<ref>) and for Φ^(1), which then leads to |Φ^(1)⟩ + |Φ^(0)⟩ = Ẽ^(0)|Φ^(1)⟩ + Ẽ^(1)|Φ^(0)⟩. By projecting Eq. (<ref>) on a function |Χ_i⟩ one obtains the expression of the coefficient of the right function at first order c_i^(1) = Χ_iΦ^(0)/Ẽ^(0) - ϵ_i, and therefore one can obtain the second order contribution to the energy Ẽ^(2) = ∑_i=1^N Ẽ^(2)_i, where Ẽ^(2)_i is the contribution at second order to the energy of the function |Φ_i⟩ Ẽ^(2)_i = Χ_0Φ_i c_i^(1) = Χ_0Φ_iΧ_iΦ^(0)/Ẽ^(0) - ϵ_i. With respect to the standard Hermitian case, one can notice here several differences in Eqs. (<ref>) and (<ref>): i) The first-order coefficient c_i^(1) is computed using Χ_iΦ^(0), implying therefore the use of the left function Χ_i satisfying ⟨Χ_i|Φ_i⟩=1. This is a consequence of the bi-orthonormal framework, as in the case of an orthonormal framework, the function Χ_i would be simply equal to Φ_i. ii) The computation of the energy implies in the general case, through ΧΦ_i, the use of a left function Χ, that we chose here to be |Χ_0⟩. One could also expand in perturbation the left eigenvector |Χ_gs⟩, and evaluate the functional at the right function |Φ_0⟩. In that case, one would obtain exactly the same expansion for the energy up to second-order. §.§.§ Selected CI algorithm in a bi-orthonormal framework In Ref. AmmSceGin-JCP-22 we investigated the various flavours of TC-SCI using an orthonormal framework. Among the different choices of selection criteria tested, we found that the one based on the second-order contribution to the energy using both the left and right eigenvectors of as zeroth order wave function was the most efficient because it allows to improve both the left and right eigenvectors. We therefore follow a similar path here, using an EN zeroth-order Hamiltonian, and present our bi-orthonormal TC-SCI (BiO-TC-SCI) algorithm which, at an iteration n, can be summarized as follows. * A given zeroth order set of right Slater determinants 𝒫_r^n={|Φ_⟩, =1,…,N_det} is known, and therefore its associated left zeroth order set is also known 𝒫_l^n={|Χ_⟩, =1,…,N_det}. One obtains then the ground state left and right eigenvectors of the TC-Hamiltonian within 𝒫_l^n and 𝒫_r^n ^†|Χ^(0)⟩ = Ẽ^(0)|Χ^(0)⟩, |Φ^(0)⟩ = Ẽ^(0)|Φ^(0)⟩, with |Χ^(0)⟩ = ∑_ ∈ 𝒫_l^n d_^(0)|Χ_⟩, |Φ^(0)⟩ = ∑_ ∈ 𝒫_r^n c_^(0)|Φ_⟩. * For each Slater determinant |Φ_⟩∉𝒫_r^n, estimate its importance thanks to its contribution to the energy at second-order using the EN zeroth-order Hamiltonian Ẽ^(2)_ = Χ^(0)Φ_Χ_Φ^(0)/Ẽ^(0) - ϵ_, where ϵ_ = Χ_Φ_. We also compute on the fly the total second-order contribution to the energy Ẽ^(2) = ∑_∉𝒫_r^n Ẽ^(2)_, and estimate the energy as = Ẽ^(0) + Ẽ^(2). * Select the set of N_Φ_ right Slater determinants {|Φ_⟩}, labelled 𝒜_r^n, with the largest |Ẽ^(2)_|. 
This automatically defines the corresponding set of left Slater determinants 𝒜_l^n. * Add the set 𝒜_r^n to 𝒫_r^n and 𝒜_l^n to 𝒫_l^n to define the new set of both left and right Slater determinants of the zeroth order space 𝒫_r^n+1 = 𝒫_r^n ∪𝒜_r^n, 𝒫_l^n+1 = 𝒫_l^n ∪𝒜_l^n. * Go back to step 1 and iterate until a given convergence criterion is reached. §.§.§ Technical details about the BiO-TC-SCI algorithm The computation of TC Hamiltonian matrix elements for the diagonalization step of Eq.(<ref>) is done within the so-called 5-idx approximation introduced in Ref. DobCohAlaGin-JCP-22 which consists in neglecting the pure triple excitation terms in the three-body operator, i.e. the terms involving integrals with six distinct indices in Eq.(<ref>). In addition, when selecting a given Slater determinant we automatically include all other Slater determinants belonging to the configuration space functions (CSF) in which the determinant is involved, such that pure spin states are obtained after diagonalization. The diagonalization is made using two distinct Davidson procedures: one for the left and one for the right eigenvectors, following Ref. HirNak-JComP-82. As the computation of Ẽ^(2) is costly, we neglect all contributions from the three-electron operator in Eqs. (<ref>) and (<ref>), and also adapt the stochastic version proposed in Ref. GarSceLooCaf-JCP-17. In the original algorithm, external determinants are organized in batches generated from a determinant of the internal space. The batch generated by |I⟩ is drawn with a probability |c_I^(0)|^2 / ∑_J |c_J^(0)|^2. In the present work, the batches are drawn with a probability |d_I^(0) c_I^(0)| / ∑_J |d_J^(0) c_J^(0)|. Finally, we use the extrapolation technique of Ref. HolUmrSha-JCP-17 which allows to estimate the TC-FCI energy as the zeroth-order energy Ẽ^(0) obtained for a vanishing Ẽ^(2). §.§ A simple three-body correlation factor for frozen-core calculations Although the BiO-TC-SCI algorithm presented here is applicable to any form of correlation factor, we focus on a linearized version of the simple three-body correlation factor developed in Ref. AmmSceGin-JCTC-2023. The correlation factor reads 12 = 1212, where 12 is the one-parameter correlation factor introduced in Ref. Gin-JCP-21, 12 = 1/2r_12 ( 1 - erf(μ r_12) ) - 1/2√(π)μe^-(μ r_12)^2, and the envelope = 1 - ∑_m=1^exp( - α_m | -m|^2 ), is the linearized version of the function introduced in Ref. AmmSceGin-JCTC-2023, with m the position of the m-th nucleus in the system. The envelope plays the role of a damping function which suppresses the effect of the correlation factor 12 near each nucleus. The parameters α_m control the typical range on which the correlation factor 12 is killed by the envelope 12 around the nucleus located at m. The advantages of this relatively simple correlation factor are that, as shown in Ref. AmmSceGin-JCTC-2023, i) an efficient analytical-numerical scheme can be used to obtain the integrals and , ii) provided that a typical valence value of the μ parameter is given (μ=0.87), the correlation factor has only one parameter for each nucleus in the molecule, iii) the parameters α_m are transferable, and can be optimized only for the isolated atoms. § RESULTS §.§ Computational details The BiO-TC-SCI code, with all required integrals, has been implemented in the Quantum Package software <cit.>. The computation of integrals is based on a mixed analytical-numerical integration scheme, as detailed in Ref. AmmSceGin-JCTC-2023. 
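Because this correlation factor is fully explicit, it can be evaluated directly. The sketch below implements the two-body part u(μ, r_12) and the envelope given above, and checks basic properties: the factor vanishes at large r_12, its slope at r_12 → 0 is close to 1/2 (the usual electron-electron cusp value), and the envelope goes to zero on top of a nucleus, switching the factor off there. The value μ = 0.87 is the typical valence value quoted above, whereas the α used below is a made-up placeholder and not one of the optimized values of Table <ref>.

import numpy as np
from scipy.special import erf

def u_mu(r12, mu=0.87):
    # two-body factor: r/2 (1 - erf(mu r)) - exp(-(mu r)^2) / (2 sqrt(pi) mu)
    return 0.5 * r12 * (1.0 - erf(mu * r12)) - np.exp(-(mu * r12) ** 2) / (2.0 * np.sqrt(np.pi) * mu)

def envelope(r, centers, alphas):
    # one-electron envelope 1 - sum_m exp(-alpha_m |r - R_m|^2) damping the factor near nuclei
    r = np.atleast_2d(r)
    return 1.0 - sum(np.exp(-a * np.sum((r - c) ** 2, axis=1)) for a, c in zip(alphas, centers))

r = np.array([1e-5, 0.5, 1.0, 2.0, 5.0])
print("u_mu(r12)        :", np.round(u_mu(r), 4))                       # tends to 0 at large r12
print("slope at r12 -> 0:", (u_mu(2e-5) - u_mu(1e-5)) / 1e-5)           # close to 1/2 (e-e cusp)
print("envelope at a nucleus:",
      envelope(np.zeros((1, 3)), centers=[np.zeros(3)], alphas=[2.0]))  # alpha is a placeholder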
All computations are carried out using the cc-pVXZ Dunning family of basis sets. The TC calculations have been performed with the correlation factor as detailed in Sec. <ref>. For each atom with a nuclear charge Z_m, the corresponding nuclear parameter α_m is determined as the minimum of a VMC calculation for that atom using a single Slater determinant. This determinant is constructed with orbitals that are the right eigenvalues of the TC-Fock operator in the cc-pVTZ basis set (for more details, see Ref. AmmSceGin-JCTC-2023). The only exception is the hydrogen atom, for which we take α_H=∞, as it has no core electrons. We adhere to the strategy of having a unique nuclear parameter α_m for each atom, regardless of the basis set used, and we do not re-optimize these parameters in the molecular systems. This approach results in a correlation factor that does not require any system-specific optimization. The parameters α_m used in this study are presented in Table <ref>. All the BiO-TC-SCI energies are calculated using the extrapolation scheme described in Sec.  <ref>. Unless explicitly stated otherwise, all calculations are performed within the frozen core approximation with a [He] core. §.§ Benefits of orbital optimization: freezing core orbitals and linear extrapolation Core electrons have an important property: they contribute minimally to most chemically relevant energy differences, such as IP or AE. This property is leveraged in virtually all post-HF methods by using the fc approximation, significantly reducing the computational cost, especially when using strong scaling methods like SCI or CC. However, this technique depends on the optimization of core orbitals such that i) excitations of electrons from these orbitals to valence orbitals have a negligible weight in the wave functions of the low energy part of the spectrum, and ii) the core-core and core-valence correlation energy are essentially transferable from atoms to molecules. The most straightforward way to achieve this decoupling between core and valence orbitals is to obtain eigenvectors of the Fock matrix, i.e., perform a canonical HF optimization. Transitioning to the context of the TC Hamiltonian, given that the effective interaction is no longer the Coulomb interaction, there is no reason for the usual core electrons HF orbitals to achieve the decoupling necessary for the fc approximation. A logical approach is then to perform the equivalent of the HF orbital optimization adapted for the TC Hamiltonian. This results in a bi-variational and framework as described in Sec. <ref>. To examine the impact of the quality of core orbitals used in TC calculations on typical valence energy differences, we computed the IP of oxygen, fluorine, and neon atoms, along with the AE of the F_2 molecule. These computations were performed in the cc-pVTZ basis set, with or without the fc approximation, using conventional SCI, or the TC-SCI with either RHF orbitals or orbitals. The results of these computations are presented in Table <ref>. As can be seen from Table <ref>, the fc approximation affects both the IP and AE by a few tenths of mH when using the standard SCI. This is typically what is expected for such an energy difference driven by valence properties. In the context of the TC-SCI framework, a variation of the same order of magnitude is observed when using the orbitals, while variation by an order of magnitude larger is observed when using the RHF orbitals. 
More importantly, it can be observed that when using RHF orbitals, the variation on the IP with the fc approximation increases with the nuclear charge. It is 1 kcal/mol, 1.4 kcal/mol, and 2.0 kcal/mol for the oxygen, fluorine, and neon atoms, respectively. Also, the impact of the fc approximation on the AE of the F_2 molecule is about 1.4 kcal/mol. However, the all-electron TC calculations using both RHF and orbitals agree within a few tenths of mH. The small deviation of the AE from the RHF and calculations originates from the 5-idx approximation, which does not guarantee strict orbital invariance as part of the Hamiltonian is truncated. Another notable characteristic of the orbitals is that they facilitate a more straightforward linear extrapolation of Ẽ^(0) as a function of Ẽ^(2). The latter is crucial for obtaining reliable estimates of the TC-FCI energy. To illustrate this, we present in Fig. <ref> the variation of Ẽ^(0) as a function of Ẽ^(2) for the all-electron calculation of the F_2 molecule in the cc-pVTZ basis set. As can be seen from Fig. <ref>, the TC-SCI exhibits more linearity when performed using orbitals than when using RHF orbitals. Upon careful examination of the data, it appears that this difference arises from the amount of positive contribution to Ẽ^(2), which is typically ten times larger when using RHF orbitals, as illustrated in Fig. <ref>. These results underscore the importance of optimizing the orbitals within a framework when performing TC calculations. This optimization is key for enabling the fc approximation along with extrapolation techniques, which are essential for the application of the method to large systems and/or active spaces. §.§ Convergence of the BiO-TC-SCI algorithm To illustrate the convergence of the current BiO-TC-SCI algorithm, we present the convergence of Ẽ^(0), Ẽ^(2) and (see Eqs. (<ref>), (<ref>), and (<ref>), respectively) as a function of the number of Slater determinants for the F_2, N_2, and CO molecules in the cc-pVTZ basis set (Figs. <ref>, <ref>, and <ref>, respectively). We also include the extrapolation towards the TC-FCI energy, as is typically done in SCI calculations. For comparison, the convergence of the standard SCI scheme is also reported. From Figs. <ref>, <ref> and <ref>, it clearly appears that the convergence of the BiO-TC-SCI algorithm is at least as fast as in usual SCI algorithms, and that it can also be effectively extrapolated using a linear fitting. When examining the convergence curve of Ẽ^(2) as a function of the number of Slater determinants, it is apparent that |Ẽ^(2)| is consistently smaller than when using the bare Hamiltonian. This is due to the correlation factor already accounting for a portion of the correlation. However, the correlation factor proposed in this study is not re optimised for each molecular situation, and therefore a significant part of the electron-electron correlation need to be represented by the determinantal component of the wave function. This is why the typical effect of wave function compaction is not significantly evident. Employing more advanced forms of Jastrow factors would further reduce |Ẽ^(2)|, leading to a faster convergence of the SCI energy. §.§ Ability to break multiple covalent bonds and size consistency To explore the potential of the current BiO-TC-SCI approach in handling strongly correlated systems, we present in Fig. <ref> the potential energy surface (PES) of the CO molecule, using the cc-pVTZ basis set, up to the full dissociation limit. 
This is compared with the sum of the TC energies obtained at the same level of calculation for the isolated atomic systems. As can be seen from Fig. <ref>, the PES is smooth across the curve and converges towards the correct asymptotic limit, even for a non-homogeneous system. This is because the extrapolated BiO-TC-SCI achieves near TC-FCI quality, and the three-body correlation factor used in this study is size consistent. Additionally, we incorporate the PES obtained through the standard extrapolated FCI into the same figure. A comparison between the two PESs reveals a more pronounced and deeper well in the curve for the BiO-TC-SCI approach. This result further substantiates the enhanced ability of BiO-TC-SCI to effectively manage the electron correlation effects within the system. §.§ Atomization energies Table <ref> presents the atomization energy results (in mH) for a selection of molecules using the standard extrapolated FCI or the extrapolated TC-FCI. These calculations were performed in various basis sets, specifically cc-pVDZ, cc-pVTZ, and cc-pVQZ for the FCI, and cc-pVDZ and cc-pVTZ for the TC-FCI. In addition to the calculated results, the “estimated exact” column provides an estimation of the exact non-relativistic atomization energies, sourced from Ref. YaoGinLiTouUmr-JCP-20. This serves as a benchmark for the accuracy of the methods used. The molecular geometries were also taken from Ref. YaoGinLiTouUmr-JCP-20. To supplement the data in the table, we provide a visual representation of the FCI () and TC-FCI () results in the cc-pVQZ and cc-pVTZ basis sets, respectively, in Figure <ref>. This aids in interpreting and demonstrating the observed trends. The results reveal the significant benefits of the TC-FCI calculation in terms of convergence and accuracy across a range of molecules. For instance, for the CO_2 molecule, the TC-FCI delivers atomization energies of 622.86 mH and 618.14 mH in the cc-pVDZ and cc-pVTZ basis sets, respectively. These values indicate a convergence towards highly accurate results, closely approximating the estimated exact value of 618.62 mH. Similarly, for O_2, the TC-FCI provides atomization energies of 191.84 mH and 191.75 mH in the cc-pVDZ and cc-pVTZ basis sets, respectively. These values show remarkable alignment with the estimated exact value of 192.0 mH, further emphasizing the accuracy and convergence of the TC-FCI. In contrast, the conventional FCI calculation, even with the larger cc-pVQZ basis set, has difficulty converging to chemical accuracy for many molecules. This is evident in the results observed for various systems, such as C_2, N_2, and HCO, where the TC-FCI consistently surpasses the FCI in terms of accuracy and convergence, particularly in the modest cc-pVTZ basis set. The TC-FCI/cc-pVTZ consistently outperforms the FCI/cc-pVQZ. However, in a few cases such as LiF (with 216.39 mH in the TC-FCI/cc-pVTZ versus 219.85 mH in the FCI/cc-pVQZ), the TC-FCI/cc-pVTZ does not exceed the efficiency of the FCI/cc-pVQZ. This observation could be ascribed to the use of a relatively simple correlation factor. Our future work will aim to explore the application of the TC-FCI with more sophisticated Jastrow factors to further enhance its performance and accuracy. § CONCLUSION In this work, we introduced the theoretical framework of SCI using a set of orbitals within the context of TC calculations. 
We employed a linearized version of a recently developed inexpensive three-body correlation factor AmmSceGin-JCTC-2023, which eliminates the need for re-optimizing the correlation factor for each molecule and facilitates an efficient analytical/numerical evaluation of the integrals required in TC calculations. After establishing the main equations, we explored various aspects of the current approach numerically. We first examined the benefits of using orbitals in TC calculations by studying the impact of the fc approximation on a set of IP and AE. Our findings indicate that the fc approximation has a similar impact in TC calculations using orbitals as in standard wavefunction techniques (typically a few 0.1 mH), while an order of magnitude greater variation is observed when using the RHF orbitals. These observations support the idea of conducting TC calculations using a framework, as the fc approximation is essential for handling large systems. We then investigated the convergence of the current BiO-TC-SCI framework compared to standard SCI approaches. Our results show that similar convergence can be expected, and that linear extrapolation is feasible, making our BiO-TC-SCI approach as robust as any standard SCI approach. We also examined the ability of the BiO-TC-SCI algorithm to handle strongly correlated systems by breaking the CO double bond, and demonstrated that the size consistency property is fulfilled. Finally, we assessed the quality of the current correlation factor on a set of 14 small organic molecules. Our results show that even with such a simple correlation factor, the accuracy is typically better than a standard quadruple zeta calculation. It is important to note that thanks to the specific form of the correlation factor used here, no optimization of the latter was needed for molecular calculations. This work was performed using HPC resources from GENCI-TGCC (gen1738,gen12363) and from CALMIP (Toulouse) under allocation P22001, and was also supported by the European Centre of Excellence in Exascale Computing TREX — Targeting Real Chemical Accuracy at the Exascale. This project has received funding from the European Union's Horizon 2020 — Research and Innovation program — under grant agreement no. 952165. Emmanuel Giner would like to thank Julien Toulouse for fruitful discussions regarding the bi orthogonal basis sets.
http://arxiv.org/abs/2306.09555v1
20230615235639
Geometric-Based Pruning Rules For Change Point Detection in Multiple Independent Time Series
[ "Liudmila Pishchagina", "Guillem Rigaill", "Vincent Runge" ]
stat.ME
[ "stat.ME", "stat.CO", "stat.ML" ]
Liudmila Pishchagina, Guillem Rigaill, Vincent Runge. Université Paris-Saclay, CNRS, Univ Evry, Laboratoire de Mathématiques et Modélisation d'Evry, 91037, Evry-Courcouronnes, France; Université Paris-Saclay, CNRS, INRAE, Univ Evry, Institute of Plant Sciences Paris-Saclay (IPS2), Orsay, France. We consider the problem of detecting multiple changes in multiple independent time series. The search for the best segmentation can be expressed as a minimization problem over a given cost function. We focus on dynamic programming algorithms that solve this problem exactly. When the number of changes is proportional to data length, an inequality-based pruning rule encoded in the PELT algorithm leads to a linear time complexity. Another type of pruning, called functional pruning, gives a close-to-linear time complexity whatever the number of changes, but only for the analysis of univariate time series. We propose a few extensions of functional pruning for multiple independent time series based on the use of simple geometric shapes (balls and hyperrectangles). We focus on the Gaussian case, but some of our rules can be easily extended to the exponential family. In a simulation study we compare the computational efficiency of different geometric-based pruning rules. We show that for small dimensions (2, 3, 4) some of them run significantly faster than inequality-based approaches, in particular when the underlying number of changes is small compared to the data length. Keywords: multivariate time series, multiple change point detection, dynamic programming, functional pruning, computational geometry. § INTRODUCTION A National Research Council report <cit.> has identified change point detection as one of the “inferential giants” in massive data analysis. Detecting change points, either a posteriori or online, is important in areas as diverse as bioinformatics <cit.>, econometrics <cit.>, medicine <cit.>, climate and oceanography <cit.>, finance <cit.>, autonomous driving <cit.>, entertainment <cit.>, computer vision <cit.> or neuroscience <cit.>. The most common and prototypical change point detection problem is that of detecting changes in the mean of a univariate Gaussian signal, and a large number of approaches have been proposed to perform this task (see among many others <cit.> and the reviews <cit.>). Penalized cost methods Some of these methods optimize a penalized cost function (see for example <cit.>). These methods have good statistical guarantees <cit.> and have shown good performances in benchmark simulations <cit.> and on many applications <cit.>. From a computational perspective, they rely on dynamic programming algorithms that are at worst quadratic in the size of the data, n. However, using inequality-based and functional pruning techniques <cit.>, the average run times are typically much smaller, allowing very large profiles (n > 10^5) to be processed in a matter of seconds or minutes. In detail, for one time series: * if the number of change points is proportional to n, both PELT (inequality-based pruning) and FPOP (functional pruning) <cit.> are on average linear; * if the number of change points is fixed, FPOP is quasi-linear (on simulations) while PELT is quadratic <cit.>. Multivariate extensions In this paper we focus on the multivariate problem, assuming the cost function or log-likelihood of a segment (denoted C) can be decomposed as a sum over all p dimensions. Informally, that is C(segment) = ∑_k=1^p C(segment, time series k) . 
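To make this decomposition concrete, the following sketch (Python; an illustration assuming i.i.d. Gaussian noise with unit variance, the model used later in the paper, and not code from the GeomFPOP package) computes a multivariate segment cost as a sum of p univariate costs:

```python
import numpy as np

def univariate_gaussian_cost(x):
    # -2 log-likelihood of one segment of one series (up to constants):
    # the sum of squared deviations from the segment mean.
    return float(np.sum((x - x.mean()) ** 2))

def segment_cost(segment):
    # segment: array of shape (length, p). Because the p series are
    # independent, the multivariate cost is the sum of p univariate costs.
    return sum(univariate_gaussian_cost(segment[:, k])
               for k in range(segment.shape[1]))
```

This additivity across dimensions is the property that both the inequality-based pruning of PELT and the geometric pruning rules studied below rely on.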
In this context, the PELT algorithm can easily be extended to multiple time series. However, as in the univariate case, it will be algorithmically efficient only if the number of change points is non-negligible compared to n. In this paper, we study the extension of functional pruning techniques (and more specifically FPOP) to the multivariate case. At each iteration, FPOP updates the set of parameter values for which a change position τ is optimal. As soon as this set is empty the change is pruned. For univariate time series, this set is a union of intervals in ℝ. For multi-parametric models, this set is equal to the intersection and difference of convex sets in ℝ^p <cit.>. It is typically non-convex, hard to update, and deciding whether it is empty or not is not straightforward. In this work, we present a new algorithm, called Geometric Functional Pruning Optimal Partitioning (GeomFPOP). The idea of our method consists in approximating the sets that are updated at each iteration of FPOP using simpler geometric shapes. Their simplicity of description and ease of updating allow for a quick emptiness test. The paper has the following structure. In Section <ref> we introduce the penalized optimization problem for segmenting multivariate time series. We then review the existing pruned dynamic programming methods for solving this problem. We define the geometric problem that occurs when using functional pruning. The new method, called GeomFPOP, is described in Section <ref> and is based on approximating intersection and exclusion set operators. In Section <ref> we introduce two approximation types (sphere-like and rectangle-like) and define the approximation operators for each of them. We then compare in Section <ref> the empirical efficiency of GeomFPOP with PELT on simulated data. § FUNCTIONAL PRUNING FOR MULTIPLE TIME SERIES §.§ Model and Cost We consider the problem of change point detection in multiple time series of length n and dimension p. Our aim is to partition time into segments, such that in each segment the parameter associated with each time series is constant. For a time series y we write y = y_1:n=(y_1,…, y_n) ∈(ℝ^p)^n with y_i^k the k-th component of the p-dimensional point y_i∈ℝ^p in position i in vector y_1:n. We also use the notation y_i:j = (y_i,…, y_j) to denote points from index i to j. If we assume that there are M change points in a time series, this corresponds to splitting the time series into M+1 distinct segments. Each segment m ∈{1,…, M+1} is generated by independent random variables from a multivariate distribution with the segment-specific parameter θ_m = (θ_m^1,…, θ_m^p) ∈ℝ^p. A segmentation with M change points is defined by the vector of integers τ =(τ_0 = 0, τ_1,…,τ_M,τ_M+1=n). Segments are given by the sets of indices {τ_i+1,…, τ_(i+1)} with i in {0,1,…,M}. We define the set S_t of all possible change point locations related to the segmentation of data points between positions 1 to t as S_t = {τ = (τ_0,τ_1,…,τ_M, τ_M+1) ∈ℕ^M+2 | 0=τ_0 <τ_1 < … < τ_M < τ_M+1=t} . Usually the number of changes M is unknown and has to be estimated. Many approaches to detecting change points define a cost function for segmentation using the opposite log-likelihood (times two). Here the opposite log-likelihood (times two) linked to data point y_j is given by the function θ↦Ω(θ,y_j), where θ = (θ^1,…, θ^p) ∈ℝ^p. 
Over a segment from i to t, the parameter remains the same and the segment cost is given by C(y_i:t) = min_θ∈ℝ^p∑_j=i^t Ω(θ, y_j) = min_θ∈ℝ^p∑_j=i^t(∑_k=1^p ω(θ^k, y_j^k)) , with ω the atomic likelihood function associated with Ω for each univariate time series. This decomposition is made possible by the independence hypothesis between dimensions. Notice that the function ω could have been dimension-dependent, with a mixture of different distributions (Gaussian, Poisson, negative binomial, etc.). In our study, we use the same data model for all dimensions. We consider a version of the cost penalized by a penalty β > 0, as the zero-penalty case would lead to a segmentation with n segments. Summing over all segments we end up with a penalty that is linear in the number of segments. Such a choice is common in the literature (<cit.>,<cit.>) although some other penalties have been proposed (<cit.>,<cit.>,<cit.>). The optimal penalized cost associated with our segmentation problem is then defined by Q_n = min_τ∈ S_n∑_i=0^M{C(y_(τ_i+1):τ_(i+1))+β} . The optimal segmentation τ is obtained by the argminimum in Equation (<ref>). §.§ Functional Pruning Dynamic Programming Algorithm The idea of the Optimal Partitioning (OP) method <cit.> is to search for the last change point defining the last segment in data y_1:t at each iteration (with Q_0 = 0), which leads to the recursion: Q_t = min_i∈{0,…,t-1}(Q_i + C(y_(i+1):t) + β) . Functional description In the FPOP method we introduce a last-segment parameter θ = (θ^1,…, θ^p) in ℝ^p and define a functional cost θ↦ Q_t(θ) depending on θ, which takes the following form: Q_t(θ) = min_τ∈ S_t( ∑_i=0^M-1{C(y_(τ_i+1):τ_(i+1))+β} + ∑_j=τ_M+1^t Ω(θ, y_j) + β) . As explained in <cit.>, we can compute the function Q_t+1(·) based only on the knowledge of Q_t(·). For each integer t from 0 to n-1 we have: Q_t+1(θ) = min{Q_t(θ), m_t +β} + Ω(θ, y_t+1) , for all θ∈ℝ^p, with m_t = min_θ Q_t(θ) and the initialization Q_0(θ) = 0, so that Q_1(θ) = Ω(θ,y_1). By looking closely at this relation, we see that each function Q_t is a piece-wise continuous function consisting of at most t different functions on ℝ^p, denoted q^i_t: Q_t(θ) = min_i ∈{1,…,t }{q_t^i(θ)} , where the q_t^i functions are given by the explicit formulas: q_t^i(θ) = m_i-1 + β + ∑_j = i^t Ω(θ,y_j) , θ∈ℝ^p , i = 1,…,t, and m_i-1 = min_θ∈ℝ^p Q_i-1(θ) = min_j ∈{ 1,…,i-1}{min_θ∈ℝ^p q_i-1^j(θ) }. It is important to notice that each q_t^i function is associated with the last change point i-1, the last segment being given by indices from i to t. Consequently, the last change point at step t in y_1:t is denoted τ̂_t (τ̂_t ≤ t-1) and is given by τ̂_t = argmin_i ∈{1,…,t}{min_θ∈ℝ^p q_t^i(θ)} - 1. Backtracking Knowing the values of τ̂_t for all t=1, …, n, we can always restore the optimal segmentation at time n for y_1:n. This procedure is called backtracking. The vector cp(n) of ordered change points in the optimal segmentation of y_1:n is determined recursively by the relation cp(n) = (cp(τ̂_n), τ̂_n) with the stopping rule cp(0)=∅. Parameter space description Applying functional pruning requires a precise analysis of the recursion (<ref>), which depends on the properties of the cost function Ω. In what follows we consider three choices based on a Gaussian, Poisson, and negative binomial distribution for data generation. The exact formulas of these cost functions are given in <ref>. We denote the set of parameter values for which the function q^i_t(·) is optimal as: Z_t^i = {θ∈ℝ^p | Q_t(θ) = q_t^i(θ) }, i = 1,…,t. 
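As an illustration of the recursion and of inequality-based pruning, here is a hedged sketch in Python for the Gaussian cost, using cumulative sums so that each segment cost is computed in O(p); this is not the implementation used in the paper:

```python
import numpy as np

def pelt_gaussian(y, beta):
    """Penalized change point detection via the OP recursion with PELT pruning.

    y: array of shape (n, p); beta: penalty. Returns (Q_n, sorted change points).
    """
    n, p = y.shape
    cs = np.vstack([np.zeros(p), np.cumsum(y, axis=0)])             # running sums
    cs2 = np.concatenate([[0.0], np.cumsum(np.sum(y**2, axis=1))])  # running sums of squares

    def seg_cost(i, t):  # C(y_(i+1):t) for the Gaussian (unit-variance) model
        m = t - i
        return cs2[t] - cs2[i] - np.sum((cs[t] - cs[i]) ** 2) / m

    Q = np.full(n + 1, np.inf)
    Q[0] = 0.0
    last = np.zeros(n + 1, dtype=int)
    cands = [0]                                 # candidate last change points
    for t in range(1, n + 1):
        vals = [Q[i] + seg_cost(i, t) + beta for i in cands]
        j = int(np.argmin(vals))
        Q[t], last[t] = vals[j], cands[j]
        # Inequality-based (PELT) pruning: drop i that can never become optimal again.
        cands = [i for i, v in zip(cands, vals) if v - beta <= Q[t]]
        cands.append(t)
    # Backtracking: follow the stored last-change indices back to the start.
    cps, t = [], n
    while last[t] > 0:
        cps.append(last[t])
        t = last[t]
    return Q[n], sorted(cps)
```

With functional pruning (FPOP and GeomFPOP), the candidate set is instead pruned by testing the emptiness of the sets Z_t^i introduced above.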
The key idea behind functional pruning is that the Z_t^i are nested (Z_t+1^i ⊂ Z_t^i) thus as soon as we can prove the emptiness of one set Z_t^i, we delete its associated q_t^i function and do not have to consider its minimum anymore at any further iteration (proof in next Section <ref>). In dimension p = 1 this is reasonably easy. In this case, the sets Z^i_t (i=1,…, t) are unions of intervals and an efficient functional pruning rule is possible by updating a list of these intervals for Q_t. This approach is implemented in FPOP <cit.>. In dimension p ≥ 2 it is not so easy anymore to keep track of the emptiness of the sets Z^i_t. We illustrate the dynamics of the Z^i_t sets in Figure <ref> in the bi-variate Gaussian case. Each color is associated with a set Z_t^i (corresponding to a possible change at i-1) for t equal 1 to 5. This plot shows in particular that sets Z_t^i can be non-convex. §.§ Geometric Formulation of Functional Pruning To build an efficient pruning strategy for dimension p≥2 we need to test the emptiness of the sets Z^i_t at each iteration. Note that to get Z_t^i we need to compare the functional cost q^i_t with any other functional cost q^j_t, j=1,…, t, j≠ i. This leads to the definition of the following sets. (S-type set) We define S-type set S^i_j using the function Ω as S_j^i = {θ∈ℝ^p | ∑_u=i+1^j Ω(θ, y_u) ≤ m_j-m_i} , when i < j and S_i^i = ^p. We denote the set of all possible S-type sets as 𝐒. To ease some of our calculations, we now introduce some additional notations. For θ = (θ^1,…,θ^p) in ℝ^p, 1 ≤ i < j≤ n we define p univariate functions θ^k ↦ s^k_ij(θ^k) associated to the k-th time series as s^k_ij(θ^k) = ∑_u = i+1^jω(θ^k,y_u^k), k = 1,…,p . We introduce a constant Δ_ij and a function θ↦ s_ij(θ): {Δ_ij = m_j - m_i , s_ij(θ) = ∑_k=1^p s^k_ij(θ^k)- Δ_ij , . where m_i and m_j are defined as in (<ref>). The sets S_j^i for i < j are also described by relation S_j^i = s_ij^-1 (-∞,0] . In Figure <ref> we present the level curves for three different parametric models given by s_ij^-1 ({w}) with w a real number. Each of these curves encloses an S-type set. At time t = 1,…, n we define the following sets associated to the last change point index i-1: 𝚙𝚊𝚜𝚝 𝚜𝚎𝚝 𝒫^i 𝒫^i ={S_i^u, u = 1,…,i-1} . 𝚏𝚞𝚝𝚞𝚛𝚎 𝚜𝚎𝚝 ℱ^i(t) ℱ^i(t) ={S_v^i, v = i,…,t} . We denote the cardinal of a set 𝒜 as |𝒜|. Using these two sets of sets, the Z^i_t have the following description. At iteration t, the functional cost Q_t(·) defines the subsets Z_t^i (i=1,…, t), each of them being the intersection of the sets in ℱ^i(t) minus the union of the sets in 𝒫^i. Z_t^i = (∩_S∈ℱ^i(t)S) ∖ (∪_S∈𝒫^iS) , i = 1,…,t. Based on the definition of the set Z_t^i, the proof is straightforward. Parameter value θ is in Z_t^i if and only if q_t^i(θ) ≤ q_t^u(θ) for all u i; these inequalities define the past set (when u < i) and the future set (when u>i). By convention we assume that, in case i = t, ∩_S∈ℱ^i(t)S = ℝ^p. The sequence ζ^i = (Z_t^i)_t≥ i is a nested sequence of sets. Indeed, Z_t+1^i is equal to Z_t^i with an additional intersection in the future set. Based on Corollary <ref>, as soon as we prove that the set Z_t^i, is empty, we delete its associated q_t^i function and, consequently, we can prune the change point i-1. In this context, functional and inequality-based pruning have a simple geometric interpretation. Functional pruning geometry The position i-1 is pruned at step t+1, in Q_t+1(·), if the intersection set of ∩_S∈ℱ^i(t)S is covered by the union set ∪_S∈𝒫^iS. 
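For intuition, in the Gaussian (unit-variance) case each S-type set is a p-ball whose centre and squared radius follow from completing the square; the sketch below (an illustration, not the package code) computes them, a negative squared radius meaning that the set is empty:

```python
import numpy as np

def gaussian_s_ball(y, i, j, m):
    """S_j^i = {theta : sum_{u=i+1..j} ||theta - y_u||^2 <= m_j - m_i} as a ball.

    y: array (n, p); m: array of optimal costs m_0,...,m_n (m[i] = m_i).
    Returns (centre, squared_radius); the set is empty when squared_radius < 0.
    """
    seg = y[i:j]                  # rows i+1,...,j in the 1-based notation of the text
    centre = seg.mean(axis=0)
    # sum_u ||theta - y_u||^2 = (j - i) * ||theta - centre||^2 + within-segment scatter
    scatter = float(np.sum((seg - centre) ** 2))
    r2 = (m[j] - m[i] - scatter) / (j - i)
    return centre, r2
```

Testing whether such a ball is covered by a union of past balls is, however, much harder than the pairwise tests used later, which is precisely why GeomFPOP works with outer approximations of the Z_t^i rather than with the sets themselves.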
Inequality-based pruning geometry The inequality-based pruning of PELT is equivalent to the geometric rule: position i-1 is pruned at step t+1 if the set S_t^i is empty. In that case, the intersection set ∩_S∈ℱ^i(t)S is empty, and therefore Z_t^i is also empty using equation (<ref>). This shows that if a change is pruned using inequality-based pruning it is also pruned using functional pruning. For dimension p = 1 this claim was proved theoretically in <cit.>. The construction of the set Z^i_t using proposition <ref> is illustrated in Figure <ref> for a bi-variate independent Gaussian case: we have the intersection of three S-type sets and the subtraction of three S-type sets. § GEOMETRIC FUNCTIONAL PRUNING OPTIMAL PARTITIONING §.§ General Principle of GeomFPOP Rather than considering an exact representation of the Z^i_t, our idea is to consider a hopefully slightly larger set that is easier to update. To be specific, for each Z^i_t we introduce Z̃^i_t, called the testing set, such that Z^i_t⊂Z̃^i_t. If at time t the set Z̃^i_t is empty then so is Z^i_t, and thus change i-1 can be pruned. From <ref> we have that, starting from Z = ℝ^p, the set Z^i_t is obtained by successively applying two types of operations: intersection with an S-type set S (Z∩ S) or subtraction of an S-type set S (Z∖ S). Similarly, starting from Z̃ = ℝ^p, we obtain Z̃^i_t by successively applying approximations of these intersection and subtraction operations. Intuitively, the complexity of the resulting algorithm is a combination of the efficiency of the pruning and the ease of updating the testing set. A Generic Formulation of GeomFPOP In what follows we will generically describe GeomFPOP, that is, without specifying the precise structure of the testing set Z̃^i_t. We call 𝐙̃ the set of all possible testing sets Z̃^i_t and assume the existence of two operators ∩_Z̃ and ∖_Z̃. We make the following assumptions on these operators. The two operators ∩_Z̃ and ∖_Z̃ are such that: * the left input is a Z̃-type set (that is, an element of 𝐙̃); * the right input is an S-type set; * the output is a Z̃-type set; * Z̃∩ S ⊂ Z̃ ∩_Z̃ S and Z̃∖ S ⊂ Z̃ ∖_Z̃ S. We give a proper description of two types of testing sets and their approximation operators in section <ref>. At each iteration t, GeomFPOP constructs Z̃^i_t+1 from Z̃^i_t, 𝒫^i, and ℱ^i(t) iteratively using the two operators ∩_Z̃ and ∖_Z̃. To be specific, we define S_j^F as the j-th element of ℱ^i(t) and S_P^j as the j-th element of 𝒫^i, and we use the following iteration: { A_0 = Z̃^i_t , A_j = A_j-1 ∩_Z̃ S_j^F , j = 1,… , |ℱ^i(t)| , B_0 = A_|ℱ^i(t)| , B_j = B_j-1 ∖_Z̃ S_P^j , j = 1,… , |𝒫^i| } and define Z̃^i_t+1 = B_|𝒫^i|. Using the fourth property of Definition <ref> and Proposition <ref>, we get that at any time of the algorithm Z̃^i_t contains Z^i_t. The pseudo-code of this procedure is described in Algorithm <ref>. The selection step in Algorithm <ref> takes a set 𝒜⊂𝐒 and returns a subset of 𝒜 in 𝐒; by default, it returns 𝒜 itself. We denote the set of candidate change points at time t as τ_t. Note that for any (i-1)∈τ_t the sum of |𝒫^i| and |ℱ^i(t)| is |τ_t|. With the default selection step we do 𝒪(p|τ_t|) operations in Algorithm <ref>. By limiting the number of elements returned by the selection step we can reduce the complexity. For example, if the selection step, regardless of |𝒜|, always returns a subset of constant size, then the overall complexity of GeomFPOP is at worst equal to that of PELT, with ∑_t=1^n 𝒪(p|τ_t|) time complexity. Using this selection procedure we can now informally describe the GeomFPOP algorithm. 
At each iteration the algorithm will * find the minimum value of Q_t, m_t, and the best position for the last change point τ̂_t (note that this step is standard: as in the PELT algorithm we need to minimize the cost of the last segment defined in equation <ref>); * compute all sets Z̃_t^i using Z̃_t-1^i, 𝒫^i, and ℱ^i(t) with the selection procedure; * remove changes for which Z̃_t^i is empty. To simplify the pseudo-code of GeomFPOP, we also define the following operators: * an operator that, given t, returns two values: the minimum value of Q_t, m_t, and the best position for the last change point τ̂_t at time t (see Section <ref>); * an operator that, given (i,t), returns the pair of sets (ℱ^i(t), 𝒫^i) for change point candidate i-1 at time t; * a backtracking operator that, given (τ̂, n), returns the optimal segmentation of y_1:n. The pseudo-code of GeomFPOP is presented in Algorithm <ref>. § APPROXIMATION OPERATORS The choice of the geometric structure and the way it is constructed directly affects the computational cost of the algorithm. We consider two types of testing set Z̃∈𝐙̃: an S-type set S̃∈𝐒 (see Definition <ref>) and a hyperrectangle R̃∈𝐑 defined below. (Hyperrectangle) Given two vectors l̃ and r̃ in ℝ^p, we define the set R̃, called a hyperrectangle, as: R̃ = [l̃_1,r̃_1]×…×[l̃_p,r̃_p] . We denote the set of all possible sets R̃ as 𝐑. To update the testing sets we need to give a strict definition of the operators ∩_Z̃ and ∖_Z̃ for each type of testing set. To facilitate the following discussion, we rename them. For the first type of geometric structure, we rename the testing set Z̃ as S̃, the operators ∩_Z̃ and ∖_Z̃ as ∩_S̃ and ∖_S̃, and the Z̃-type approximation as the S-type approximation. Likewise, for the second type of geometric structure, we rename the testing set Z̃ as R̃, the operators ∩_Z̃ and ∖_Z̃ as ∩_R̃ and ∖_R̃, and the Z̃-type approximation as the R-type approximation. §.§ S-type Approximation With this approach, our goal is to keep track of the fact that at time t = 1,…, n there is a pair of changes (u_1,u_2), with u_1 < i < u_2≤ t, such that S^i_u_2⊂ S^u_1_i, or there is a pair of changes (v_1,v_2), with i < v_1 < v_2≤ t, such that S^i_v_1∩ S^i_v_2 is empty. If at time t at least one of these conditions is met, we can guarantee that the testing set S̃ is empty; otherwise, we propose to keep as the result of the approximation the last future S-type set S^i_t, because it always includes the set Z^i_t. This allows us to quickly check and prove (if S̃ =∅) the emptiness of the set Z^i_t. We consider two generic S-type sets, S and S̃ from 𝐒, described as in (<ref>) by the functions s and s̃: s(θ) = ∑_k=1^p s^k(θ^k) - Δ , s̃(θ) = ∑_k=1^p s̃^k(θ^k) - Δ̃ . For all S and S̃ in 𝐒 we define the operators ∩_S̃ and ∖_S̃ as: S̃ ∩_S̃ S = ∅ if S̃∩ S = ∅, and S̃ otherwise; S̃ ∖_S̃ S = ∅ if S̃⊂ S, and S̃ otherwise. As a consequence, we only need an easy way to detect either of these two geometric configurations: S̃∩ S = ∅ and S̃⊂ S. In the Gaussian case, the S-type sets are p-balls and an easy solution exists based on comparing radii (see <ref> for details). In the case of other models (such as Poisson or negative binomial), intersection and inclusion tests can be performed using separating hyperplanes and iterative algorithms for convex problems (see <ref>). We propose another type of testing set solving all types of models with the same method. §.§ R-type Approximation Here, we approximate the sets Z^i_t by hyperrectangles R̃^i_t ∈𝐑. 
A key insight of this approximation is that, given a hyperrectangle R and an S-type set S, we can efficiently (in 𝒪(p), using proposition <ref>) recover the best hyperrectangle approximation of R ∩ S and R ∖ S. Formally we define these operators as follows. (Hyperrectangle operators ∩_R̃ and ∖_R̃) For all R, R̃∈𝐑 and S∈𝐒 we define the operators ∩_R̃ and ∖_R̃ as: R ∩_R̃ S = ∩_{R̃∈𝐑 | R ∩ S ⊂ R̃} R̃ , R ∖_R̃ S = ∩_{R̃∈𝐑 | R ∖ S ⊂ R̃} R̃ . We now explain how we compute these two operators. First, we note that they can be recovered by solving 2p one-dimensional optimization problems. The k-th minimum coordinate l̃_k and maximum coordinate r̃_k of R̃ = R ∩_R̃ S (resp. R̃ = R ∖_R̃ S) are obtained as the minimum and maximum of θ_k subject to ε s(θ) ≤ 0 and l_j ≤θ_j ≤ r_j for j = 1,…,p, with ε = 1 (resp. ε = -1). To solve the previous problems (ε = 1 or -1), we define the following characteristic points. (Minimal, closest and farthest points) Let S ∈𝐒 be described by the function s(θ) = ∑_k=1^p s^k(θ^k) - Δ from the family of functions (<ref>), with θ∈ℝ^p. We define the minimal point 𝐜∈ℝ^p of S as: 𝐜 = {𝐜^k}_k=1,…,p, with 𝐜^k = argmin_θ^k ∈ℝ s^k(θ^k) . Moreover, with R ∈𝐑 defined through vectors l,r ∈ℝ^p, we define two points of R, the closest point 𝐦∈ℝ^p and the farthest point 𝐌∈ℝ^p relative to S, as 𝐦 ={𝐦^k}_k=1,…,p, with 𝐦^k = argmin_l^k ≤θ^k ≤ r^k s^k(θ^k), and 𝐌 ={𝐌^k}_k=1,…,p, with 𝐌^k = argmax_l^k ≤θ^k ≤ r^k s^k(θ^k) . In the Gaussian case, S is a ball in ℝ^p and * 𝐜 is the center of the ball; * 𝐦 is the closest point to 𝐜 inside R; * 𝐌 is the farthest point from 𝐜 in R. Let R̃ = R ∩_R̃ S (resp. R ∖_R̃ S), with R ∈𝐑 and S ∈𝐒. We compute the boundaries (l̃, r̃) of R̃ using the following rule: (i) We define the point θ̃∈ℝ^p as the closest point 𝐦 (resp. the farthest point 𝐌). For all k = 1,…,p we find the roots θ^k_1 and θ^k_2 of the one-variable (in θ^k) equation s^k(θ^k)+∑_j≠ k s^j(θ̃^j) -Δ= 0 . If the roots are real-valued we consider that θ^k_1≤θ^k_2, otherwise we write [θ^k_1,θ^k_2] = ∅. (ii) We compute the boundary values l̃^k and r̃^k of R̃ as: * For R ∩_R̃ S (k = 1,…,p): [l̃^k,r̃^k] = [θ^k_1,θ^k_2] ∩[l^k, r^k] . * For R ∖_R̃ S (k = 1,…,p): [l̃^k,r̃^k] = [l^k, r^k] ∖[θ^k_1,θ^k_2] if [θ^k_1,θ^k_2] ⊄[l^k, r^k], and [l^k, r^k] otherwise. If there is a dimension k for which [l̃^k, r̃^k]=∅, then the set R̃ is empty. The proof of Proposition <ref> is presented in <ref>. § SIMULATION STUDY OF GEOMFPOP In this section, we study the efficiency of GeomFPOP using simulations of multivariate independent time series. For this, we implemented GeomFPOP (with S and R types) and PELT for the Multivariate Independent Gaussian Model in the R-package 'GeomFPOP' (<https://github.com/lpishchagina/GeomFPOP>) written in R/C++. By default, the value of the penalty β for each simulation was defined by the Schwarz Information Criterion proposed in <cit.> (β = 2p log n). Overview of our simulations First, as a quality control, we made sure that the outputs of PELT and GeomFPOP were identical on a number of simulated profiles. Second, we studied cases where the PELT approach is not efficient, that is when the data has no or few changes relative to n. Indeed, it was shown in <cit.> and <cit.> that the run time of PELT is close to 𝒪(n^2) in such cases. So we considered simulations of multivariate time series without change (only one segment). Through these simulations we evaluated the pruning efficiency of GeomFPOP (using S and R types) for dimensions 2≤ p≤ 10 (see Figure <ref> in Subsection <ref>). 
For small dimensions (2 ≤ p ≤ 4) we also evaluated the run times of GeomFPOP and PELT and compared them (see Figure <ref> in Subsection <ref>). In addition, we considered another approximation of the Z^i_t where we applied our intersection and exclusion operators only to a randomly selected subset of the past and future balls. In practice, this strategy turned out to be computationally faster than the full/original GeomFPOP and PELT (see Figure <ref> in Subsection <ref>). For this strategy we also generated time series of a fixed size (10^6 data points) and a varying number of segments and evaluated how the run time varies with the number of segments for small dimensions (2 ≤ p ≤ 4). Our empirical results confirmed that the GeomFPOP (R-type: 𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) approach is computationally comparable to PELT when the number of changes is large (see Figure <ref> in Subsection <ref>). §.§ The Number of Change Point Candidates Stored Over Time We evaluate the functional pruning efficiency of the GeomFPOP method using simulations with 10^4 data points (without change, i.e. i.i.d. 𝒩_p(0, I_p)). For such signals, PELT typically does not prune (e.g. for t=10^4, p=2 it almost always stores t candidates). We report in Figure <ref> the percentage of candidates that are kept by GeomFPOP as a function of n, p and the type of pruning (R or S). Regardless of the type of approximation, and contrary to PELT, we observe that there is some pruning. However, when increasing the dimension p, the quality of the pruning decreases. Comparing the left and right panels of Figure <ref>, we see that for dimensions p=2 to p=5 the R-type prunes more than the S-type, while for larger dimensions the S-type prunes more than the R-type. For example, for p = 2 at time t=10^4 the number of candidates stored by GeomFPOP (R-type) does not exceed 1% of t, versus 3% for GeomFPOP (S-type). This intuitively makes sense. On the one hand, the R-type approximation of a sphere gets worse with the dimension. On the other hand, with the R-type approximation every new approximation is included in the previous one. For small dimensions this memory effect outweighs the roughness of the approximation. Based on these results we expect R-type pruning GeomFPOP to be more efficient than S-type pruning for small dimensions. §.§ Empirical Time Complexity of GeomFPOP We studied the run time of GeomFPOP (S and R-type) and compared it to PELT for small dimensions (p=2, 3, 4). Run times were limited to three minutes and were recorded for simulations without change, i.e. i.i.d. 𝒩_p(0, I_p). The results are presented in Figure <ref>. We observe that GeomFPOP is faster than PELT only for p=2. For p=3 run times are comparable, and for p=4 GeomFPOP is slower. This led us to consider a randomized version of GeomFPOP (see next subsection). §.§ Empirical Time Complexity of a Randomized GeomFPOP R-type GeomFPOP is designed in such a way that at each iteration we need to consider all past and future spheres of change i. In practice, it is often sufficient to consider just a few of them to get an empty set. Having this in mind, we propose a further approximation of the Z^i_t where we apply our intersection and exclusion operators only to a randomly selected subset of the past and future sets. In detail, we propose to redefine the output of the selection step in Algorithm <ref> on any sets 𝒫^i and ℱ^i(t) as follows: * applied to 𝒫^i, it returns one random set from 𝒫^i; * applied to ℱ^i(t), it returns the last set S^i_t and one random set from ℱ^i(t). 
Thus, we consider the following geometric update rule: * (𝚛𝚊𝚗𝚍𝚘𝚖 / 𝚛𝚊𝚗𝚍𝚘𝚖) At time t we update the hyperrectangle: * by only two intersection operations: one with the last S-type set S^i_t from ℱ^i(t), and one with a random S-type set from ℱ^i(t); * by only one exclusion operation, with a random S-type set from 𝒫^i. In this approach, at time t we perform no more than three operations to update the testing set Z̃^i_t for each (i-1) ∈τ_t. According to Remark <ref>, even with large values of p, the overall complexity of GeomFPOP should not be worse than that of PELT. We investigated other randomized strategies, but this simple one was sufficient to significantly improve run times. The run times of our optimization approach and PELT in dimensions p = 2, …, 10, 100 are presented in Figure <ref>. As in Subsection <ref>, run times were limited to three minutes and were recorded for simulations of length ranging from 2^10 to 2^23 data points (without change, i.e. i.i.d. 𝒩_p(0, I_p)). Although the (𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) approach reduces the quality of pruning (see <ref>), it gives a significant gain in run time compared to PELT in small dimensions. To be specific, with a run time of five minutes GeomFPOP, on average, processes a time series with a length of about 8×10^6, 10^6 and 2.5×10^5 data points in dimensions p=2, 3 and 4, respectively. At the same time, PELT manages to process time series with a length of at most 6.5×10^4 data points in these dimensions. §.§.§ Empirical complexity of the algorithm as a function of p We also evaluate the slope coefficient α of the run time curve of GeomFPOP with random sampling of the past and future candidates for all considered dimensions. In Figure <ref> we can see that already for p≥ 7 the slope α is close to 2. §.§.§ Run time as a function of the number of segments For small dimensions (2≤ p ≤ 4) we also generated time series with 10^6 data points and an increasing number of segments. We considered the following numbers of segments: (1,2,5) × 10^i (for i=0,…,3) and 10^4. The mean was equal to 1 for even segments and 0 for odd segments. In Figure <ref> we can see the dependence of the run time of the (𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) approach of GeomFPOP (R-type) and of PELT on the number of segments for this type of time series. Interestingly, the run time of GeomFPOP (𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) is comparable to PELT even when the number of segments is large. For smaller numbers of segments (as already observed) GeomFPOP (𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) is an order of magnitude faster. § ACKNOWLEDGMENT We thank Paul Fearnhead for fruitful discussions. § EXAMPLES OF LIKELIHOOD-BASED COST FUNCTIONS We define a cost function for segmentation as in (<ref>) by the function Ω(·,·) (the opposite log-likelihood, times two). Table <ref> gives the expression of this function linked to data point y_i = (y_i^1,…, y_i^p)∈ℝ^p for three examples of parametric multivariate models. § ARRANGEMENT OF TWO P-BALLS We define two p-balls, S and S', in ℝ^p by their centers c, c' ∈ℝ^p and radii R, R' ∈ℝ^+ as S = { x ∈ℝ^p, || x - c||^2 ≤ R^2} and S' = { x ∈ℝ^p, || x - c'||^2 ≤ R'^2}, where || x - c||^2 = ∑_k=1^p (x^k - c^k)^2 is the squared Euclidean norm, with x = (x^1,..., x^p) ∈ℝ^p. The distance between the centers c and c' is defined as d(c, c') = √(|| c - c' ||^2). We have the following simple results: S ∩ S' = ∅ ⟺ d(c,c') > R + R' , and S ⊂ S' or S' ⊂ S ⟺ d(c,c') ≤ |R-R'| . 
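A minimal sketch of these tests and of the resulting S-type operators (Gaussian case, where every S-type set is a p-ball represented here as a (centre, radius) pair; illustrative Python rather than the package code):

```python
import numpy as np

def balls_disjoint(c1, r1, c2, r2):
    # S ∩ S' = ∅  iff  d(c, c') > R + R'
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) > r1 + r2

def ball_included(c1, r1, c2, r2):
    # S ⊂ S'  iff  d(c, c') <= R' - R  (a special case of d(c, c') <= |R - R'|)
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r2 - r1

def s_intersection(S_tilde, S):
    # S̃ ∩_S̃ S : empty (None) if the two balls are disjoint, otherwise S̃ unchanged.
    return None if balls_disjoint(*S_tilde, *S) else S_tilde

def s_exclusion(S_tilde, S):
    # S̃ ∖_S̃ S : empty (None) if S̃ is contained in S, otherwise S̃ unchanged.
    return None if ball_included(*S_tilde, *S) else S_tilde
```

For non-Gaussian costs the same two tests can instead be carried out with the separating-hyperplane construction described in the next appendix.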
§ INTERSECTION AND INCLUSION TESTS For any S^i_j ∈𝐒 its associated function s can be redefine after normalization by constant j-i+1 as: s(θ) = a(θ) + ⟨ b,θ⟩ + c, with a(·) is some convex function depending on θ, b={b^k}_k =1,…, p∈ℝ^p and c ∈ℝ. For example, in the Gaussian case, the elements have the following form: a: θ↦θ^2 , b^k = 2Y̅_i:j^k , c =Y̅^2_i:j - Δ_ij , where Y̅_i:j^k = 1/j-i+1∑_u=i+1^j y_u^k and Y̅^2_i:j = 1/j-i+1∑_u=i+1^j ∑_k=1^p (y_u^k)^2. For all θ∈ℝ^p and S_1, S_2 ∈𝐒 with their associated functions, s_1 and s_2, we define a function h_12 and a hyperplane H_12 as: h_12(θ):= s_2(θ) - s_1(θ) , H_12 := {θ∈ℝ^p | h_12(θ) = 0 } . We denote by H_12^+ := {θ∈ℝ^p |h_12(θ)> 0} and H_12^- := {θ∈ℝ^p |h_12(θ)< 0} the positive and negative half-spaces of H_12, respectively. We call 𝐇 the set of hyperplanes. For all S ∈𝐒 and H ∈𝐇 we introduce a operator. The operator is such that: * the left input is an S-type set S; * the right input is a hyperplane H; * the output is the half-spaces of H, such that S lies in those half-spaces. We define the output of (S,H) by the following rule: * We find two points, θ_1, θ_2 ∈ℝ^p, as: {θ_1 = _θ∈ S s(θ), θ_2 = {_θ∈ S h(θ), if θ_1 ∈ H^+, _θ∈ S h(θ), if θ_1 ∈ H^-. . . * We have: (S,H) = {{H^+}, if θ_1, θ_2 ∈ H^+, {H^-}, if θ_1, θ_2 ∈ H^-, {H^+, H^-}, otherwise. . S_1 ⊂ H_12^-⇔∂ S_1 ⊂ H_12^-, where ∂(·) denote the frontier operator. The proof of Lemma <ref> follows from the convexity of S_1. S_1 ⊂ S_2 (resp. S_2 ⊂ S_1) ⇔ S_1, S_2 ⊂ H_12^- (resp. S_1, S_2 ⊂ H_12^+). We have the hypothesis ℋ_0:{ S_1 ⊂ S_2}, then ∀θ∈∂ S_1 { s_1(θ) = 0 , s_2(θ) ≤ 0 , . [by Definition <ref>] [by ℋ_0] ⇒θ∈ H_12^- ⇒∂ S_1 ⊂ H_12^-. Thus, according to Lemma <ref>, S_1 ⊂ H_12^-. We have now the hypothesis ℋ_0: {S_1, S_2 ⊂ H_12^-}, then ∀θ∈ S_1 { s_1(θ) ≤ 0, h_12(θ) < 0, . [by Definition <ref>] [by ℋ_0, Definition <ref>] ⇒θ∈ S_2 ⇒ S_1 ⊂ S_2. Similarly, it is easy to show that S_2 ⊂ S_1⇔ S_1, S_2 ⊂ H_12^+. S_1∩ S_2 = ∅⇔ H_12 is a separating hyperplane of S_1 and S_2. We have the hypothesis ℋ_0:{S_1 ⊂  H_12^+, S_2 ⊂  H_12^-}. Thus, H_12 is a separating hyperplane of S_1 and S_2 then, according to its definition, S_1∩ S_2 = ∅. We have now the hypothesis ℋ_0:{S_1∩ S_2 = ∅} then ∀θ∈ S_1 { s_1(θ) ≤ 0 , s_2(θ) > 0 , . [by Definition <ref>] [by ℋ_0, Definition <ref>] ⇒θ∈ H_12^+. ∀θ∈ S_2 { s_1(θ) > 0 , s_2(θ) ≤ 0 , . [by ℋ_0, Definition <ref>] [by Definition <ref>] ⇒θ∈ H_12^-. Consequently, H_12 is a separating hyperplane of S_1 and S_2. To detect set inclusion S_1 ⊂ S_2 and emptiness of set intersection S_1 ∩ S_2, it is necessary: * build the hyperplane H_12; * apply the operator for couples (S_1,H_12) and (S_2,H_12) to know in which half-space(s) S_1 and S_2 are located; * check the conditions in the Lemmas <ref> and <ref>. § PROOF OF PROPOSITION <REF> For the proof of Proposition <ref> we need the following remark. With set S∈𝐒 the maximum and minimum values for each coordinate in S are obtained on the axis going through minimal point 𝐜. Let 𝐜 = {𝐜^k}_k= 1,…,p is the minimal point of S, defined as in (<ref>). In the intersection case, we consider solving the optimization problem (<ref>) for the boundaries l̃^k and r̃^k, removing constraint l^k ≤θ^k ≤ r^k. If R intersects S, the optimal solution θ^k belongs to the boundary of S due to our simple (axis-aligned rectangular) inequality constraints and we get s^k(θ^k) = -∑_ j≠ ks^j(θ^j)+ Δ . We are looking for minimum and maximum values in θ^k for this equation with constraints l^j≤θ^j ≤ r^j (j k). 
Using the convexity of s^k and s^j, we need to maximize the quantity in the right-hand side. Thus, the solution θ̃^j for each θ^j is the minimal value of ∑_j≠ k s^j(θ^j) under constraint l^j≤θ^j ≤ r^j and the result can only be l^j, r^j or 𝐜^j. This decomposition in smaller problems is made possible thanks to our problem setting with independence. Looking at all coordinates at the same time, the values for θ̃∈^p corresponds to the closest point 𝐦 = {𝐦^k}_k=1,…,p. Having found θ^k_1 and θ^k_2 using θ̃ the result in (<ref>) is obvious considering current boundaries l^k and r^k. In exclusion case, we remove from R the biggest possible rectangle included into S ∩{l^j≤θ^j ≤ r^j , j k}, which correspond to minimizing the right hand side of (<ref>), that is maximizing ∑_j≠ k s^j(θ^j) under constraint l^j≤θ^j ≤ r^j (j k). In that case, the values for θ̃ correspond to the greatest value returned by ∑_j≠ k s^j(θ^j) on interval boundaries. With convex functions s^j, it corresponds to the farthest point 𝐌 = {𝐌^k}_k=1,…, p. § OPTIMIZATION STRATEGIES FOR GEOMFPOP(R-TYPE) In GeomFPOP(R-type) at each iteration, we need to consider all past and future spheres of change i. As it was said in Section <ref>, in practice it is often sufficient to consider just a few of them to get an empty set. Thus, we propose to limit the number of operations no more than two: * 𝚕𝚊𝚜𝚝. At time t we update hyperrectangle by only one operation, this is an intersection with the last S-type set S^i_t from ℱ^i(t). * 𝚛𝚊𝚗𝚍𝚘𝚖. At time t we update the hyperrectangle by only two operations. First, this is an intersection with the last S-type set S^i_t from ℱ^i(t), and second, this is an intersection with other random S-type set from ℱ^i(t). The number of operations we limit no more than one: * 𝚎𝚖𝚙𝚝𝚢. At time t we do not perform operations. * 𝚛𝚊𝚗𝚍𝚘𝚖. At time t we update hyperrectangle by only one operation: exclusion with a random S-type set from 𝒫^i. According to these notations, the approach presented in the original GeomFPOP (R-type) has the form (𝚊𝚕𝚕 / 𝚊𝚕𝚕). We show the impact of introduced limits on the number of change point candidates retained over time and evaluate their run times. The results are presented in Figures <ref> and <ref>. Even though the (𝚛𝚊𝚗𝚍𝚘𝚖/𝚛𝚊𝚗𝚍𝚘𝚖) approach reduces the quality of pruning in dimensions p=2,3 and 4, it gives a significant gain in the run time compared to the original GeomFPOP (R-type) and is at least comparable to the (𝚕𝚊𝚜𝚝/𝚛𝚊𝚗𝚍𝚘𝚖) approach.
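As a concrete, hedged illustration of these strategies, the sketch below performs one (random/random) update of a candidate's hyperrectangle; box_inter and box_excl are hypothetical stand-ins for the ∩_R̃ and ∖_R̃ operators of the R-type approximation, each returning None when the resulting box is empty:

```python
import random

def random_random_update(box, past_sets, future_sets, box_inter, box_excl):
    """One (random/random) update of the testing hyperrectangle of a candidate change.

    Returns the updated box, or None if the candidate can be pruned.
    """
    # Two intersections: the last future S-type set, then one drawn at random.
    for s in (future_sets[-1], random.choice(future_sets)):
        box = box_inter(box, s)
        if box is None:
            return None
    # One exclusion with a random past S-type set (if any).
    if past_sets:
        box = box_excl(box, random.choice(past_sets))
    return box
```

The other variants discussed above are obtained in the same way, e.g. (last/random) drops the random intersection, while an empty exclusion strategy simply skips the exclusion step.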
http://arxiv.org/abs/2306.12455v2
20230621052232
Gravitational lensing in a space-time with cosmic strings within the Eddington-inspired Born-Infeld gravity
[ "Faizuddin Ahmed" ]
gr-qc
[ "gr-qc", "hep-th" ]
Gravitational lensing in a space-time with cosmic strings within the Eddington-inspired Born-Infeld gravity Faizuddin Ahmed =========================================== § 1. INTRODUCTION Gravitational lensing refers to the phenomenon observed when a massive object or a concentration of matter, like a galaxy cluster, creates a gravitational field that produces significant curvature in space-time. Consequently, the path of light originating from a distant source deviates from a straight line and becomes curved as it travels towards the observer. This bending of light is a consequence of the gravitational field's effect on the trajectory of photons, altering the apparent position and characteristics of the source as observed from Earth. Following Eddington's groundbreaking observation of light bending, numerous mathematical investigations have been conducted on gravitational lensing, encompassing both the weak and strong limits. These studies have delved into the lensing phenomena occurring in various spacetimes, including those associated with black holes, wormholes, and naked singularities. Through rigorous mathematical analyses, researchers have sought to unravel the intricate mechanisms and effects of gravitational lensing in these exotic space-time configurations. Eddington-inspired Born-Infeld (EiBI) gravity <cit.> is a gravity model that provides an alternative description of gravity in the high-energy regime, particularly in strong gravitational fields. Unlike traditional theories, EiBI avoids geodesic singularities even in classical treatments, while maintaining asymptotic compatibility with General Relativity. The initial proposal of modified gravity theories, including EiBI, was developed within the framework of (pseudo)-Riemannian geometry <cit.>. The theoretical structure of EiBI draws inspiration from the nonlinear electrodynamics formulated by Born and Infeld <cit.>. Gravitational lensing is a crucial and significant area of study in cosmology and gravitation. It occurs both in weak field scenarios, where light rays pass far from the source, and in strong field situations, where light rays pass very close to massive objects. Extensive investigations have been conducted on gravitational lensing, exploring a diverse range of space-time configurations. These studies include gravitational lensing by charged black holes in the context of string theory <cit.>, brane-world black holes <cit.>, Reissner-Nordstrom black holes <cit.>, naked singularities in space-times <cit.>, Schwarzschild black holes <cit.>, Kerr-Randers optical geometry <cit.>, rotating global monopole space-times <cit.>, and general asymptotically flat, static, and spherically symmetric space-times <cit.>. Further investigations have explored gravitational lensing in cosmic string space-time under Lorentz symmetry breaking effects <cit.>, with the inclusion of cosmological constant effects in rotating cosmic string space-time <cit.>, in Kerr-Newman-Kasuya space-time <cit.>, and for global monopoles within the framework of Eddington-inspired Born-Infeld theory <cit.>. Additional studies have focused on gravitational lensing in Kerr-MOG black holes <cit.>, Simpson-Visser black-bounce space-time <cit.>, rotating regular black holes <cit.>, black-bounce space-time <cit.>, stationary axisymmetric space-time <cit.>, and black holes <cit.>. 
Moreover, gravitational lensing has been examined in topological charged space-time within the framework of Eddington-inspired Born-Infeld gravity <cit.>, massive gravity <cit.>, spherically symmetric and static space-time <cit.>, Rindler modified Schwarzschild black holes <cit.>, phantom black holes <cit.>, and topologically charged Eddington-inspired Born-Infeld space-time <cit.>. Furthermore, gravitational lensing has been the subject of extensive investigation in various other geometric backgrounds, expanding our understanding of this phenomenon. These studies encompass gravitational lensing in wormholes <cit.>, Damour-Solodukhin wormholes <cit.>, rotating wormholes <cit.>, charged wormholes within the framework of Einstein-Maxwell-dilaton theory <cit.>, Morris-Thorne wormholes <cit.>, traversable Lorentzian wormholes <cit.>, Ellis wormholes <cit.>, inclusion of wave effect on gravitational lenses by the Ellis wormhole <cit.>, microlensing by the Ellis wormhole <cit.>, asymptotically conical Morris-Thorne wormholes <cit.>, and topologically charged Ellis-Bronnikov-type wormholes <cit.>. The lensing properties of symmetric and asymmetric wormholes <cit.>, phantom wormholes using the Gauss-Bonnet theorem <cit.>, exponential wormhole spacetimes <cit.>, and massless wormholes in massive gravity <cit.> have also been studied. These extensive investigations have deepened our understanding of gravitational lensing phenomena in diverse and exotic space-time scenarios. The line-element describing cosmic string or wormhole space-time in EiBI gravity background in the spherical coordinates (t, r, θ, ϕ) is given by (c=1=ħ=G) ds^2=-dt^2+dr^2/(1+ϵ/r^2)+r^2 (dθ^2+α^2 sin^2 θ dϕ^2), where ϵ, the parameter associated with the nonlinearity of the EiBI gravity, and α <1 is the cosmic string parameter related with linear mass density of the string. The coordinates are in the ranges 0 ≤ r < ∞, 0 < θ <π, and 0 ≤ϕ < 2 π. One can see from the above line-element that for ϵ=0, the metric becomes a cosmic string space-time in the spherical system given by ds^2=-dt^2+dr^2+r^2 (dθ^2+α^2 sin^2 θ dϕ^2) Refs. <cit.>. It is worth mentioning here that the gravitational lensing effects of vacuum strings was investigated in Ref. <cit.>. Similarly, for ϵ=-b^2<0 and α→ 1, we have an Ellis-Bronnikov-type wormhole space-time Refs. <cit.> in which gravitational lensing has widely been investigated Refs.<cit.>. Finally, for ϵ=-b^2<0, one will have Morris-Thorne-type wormhole with cosmic strings which has recently been discussed in Ref. <cit.>. For ϵ <0, one can do comparison of this line-element (<ref>) with the general form of Morris-Thorne wormhole metric <cit.>, that is, ds^2=-e^2 Φ(r) dt^2+dr^2/(1-A(r)/r)+r^2 dΩ^2 and show that the redshift function is null, Φ(r)=0, and the shape function is given by A(r)=-ϵ/r. This EiBI gravity background space-time with cosmic strings is asymptotically flat since A(r)/r→ 0 at r →∞. The introduction of a cosmic string into a space-time brings about significant alterations to its geometry and curvature characteristics. In this study, our objective is to explore the influence of a cosmic string on the deflection angle of photon light and the corresponding effective potential within the framework of the space-time described by equation (<ref>). We derive the deflection angle for photon light in this particular space-time and carefully analyze the obtained results. 
Furthermore, we compare our findings with existing results in the literature, allowing for a comprehensive evaluation and a deeper understanding of the impact of cosmic strings on light deflection. § 2. GRAVITATIONAL LENSING IN EIBI GRAVITY BACKGROUND WITH COSMIC STRINGS We begin this section with the Lagrangian of a system defined by <cit.> ℒ=1/2 g_μν (dx^μ/dτ) (dx^ν/dτ), where τ is the affine parameter of the curve, and g_μν is the metric tensor. Using the line-element (<ref>) in θ=π/2 hyper-surface, we find ℒ=1/2 [-ṫ^2+ṙ^2/(1+ϵ/r^2)+α^2 r^2 ϕ̇^2]. There are two constant of motion given by ṫ=E, L=α^2 r^2 ϕ̇⇒ϕ̇=L/α^2 r^2, where E is the conserved energy parameter, and L the conserved angular momentum. With these, the Lagrangian (<ref>) for light-like or time-like geodesics becomes (dr/dτ)^2=(1+ϵ/r^2) (ε+E^2-L^2/α^2 r^2), where ε=0 for null geodesics and -1 for time-like geodesics. One can see from above that the turning point r=r_0 in the geometry under consideration occurs for dr/dτ=0, i. e., r_0=L/α E for null geodesics that depends on the cosmic string parameter. It is better to mention here that for the wormhole case, ϵ<0, the solution has a minimum radius given by r=r_0=√(|ϵ|), thus, photons with sufficient energy pass to the other side of the wormhole region. The effective potential expression of the system given by V_eff=(-ε+L^2/α^2 r^2) depends on the cosmic string parameter. In figs. 1, we plot the effective potential for null geodesics with α=1, 0.5 keeping fixed L, and in fig. 2, for different values of L with fixed α=0.5. Similarly, we plot the effective potential for time-like geodesics with α=1, 0.5 keeping fixed L in fig. 3, and for different values of L with fixed α=0.5 in fig. 4. Now, using Eqs. (<ref>) and (<ref>), we obtain dϕ/dr=ϕ̇/ṙ=β/α^2 √((r^2+ϵ)(r^2-β^2/α^2)), where β=L/E is the impact parameter characterizing a particular null geodesic with the conserved energy parameter E and the angular momentum L and therefore, r_0=β/α. By symmetry, the contributions to Δϕ before and after the turning point are equal Hence, we can obtain the deflection angle <cit.> δϕ=Δϕ-π, where Δϕ = 2 β/α^2 ∫^∞_r_0 dr/√((r^2+ϵ)(r^2-β^2/α^2)) = 2 β/α^2 ∫^∞_β/α dr/√((r^2+ϵ)(r^2-β^2/α^2)) Defining z=β/α r and g=-α^2 ϵ/β^2 into the above integral results Δϕ=2/α K(g), where we have defined K(g)=∫^1_0 dz/√((1-z^2)(1-(√(g))^2 z^2)). The above expression is valid both for the WH case ϵ <0 and with cosmic string case ϵ>0. In case of cosmic string in EiBI gravity background with positive parameter ϵ>0, we can write g=-γ, where γ=α^2 ϵ/β^2>0. One can show that K(g)=K(-γ)=1/√(1+γ) K(γ/1+γ). The deflection angle of photon light therefore is given by δϕ=2/α K(g)-π, where δϕ={[ 2/α K(-α^2 ϵ/β^2)-π ϵ <0; 2/α √(1+α^2 ϵ/β^2) K(α^2 ϵ/β^2/1+α^2 ϵ/β^2)-π ϵ>0. ]. The deflection angle of photon light is significantly influenced by the presence of the cosmic string parameter α. Figure 5 displays the deflection angle without cosmic strings, where α is set to 1. In Figure 6, we observe the deflection angle for a fixed value of the cosmic string parameter, specifically α = 0.75. It is noteworthy that in the case of a wormhole (g > 0), the deflection angle of light diverges as the turning point approaches the throat radius. Conversely, in the case of a wormhole with a cosmic string (g < 0), the deflection angle remains finite throughout. Figure 7 presents the deflection angle for various values of the cosmic string parameter, specifically α = 0.4, 0.6, 0.8. 
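The exact deflection angle is straightforward to evaluate numerically; the sketch below (an illustration only) uses scipy.special.ellipk, whose argument is the parameter m (the squared modulus), which matches the definition of K(g) above, together with the imaginary-modulus identity quoted earlier for the ϵ > 0 case:

```python
import numpy as np
from scipy.special import ellipk

def deflection_angle(alpha, eps, beta):
    """delta_phi = (2/alpha) K(g) - pi, with g = -alpha^2 * eps / beta^2.

    For eps > 0 (cosmic string in the EiBI background) g is negative and we use
    K(-gamma) = K(gamma / (1 + gamma)) / sqrt(1 + gamma); for eps < 0 (wormhole case)
    g is positive and ellipk can be called directly (valid while g < 1).
    """
    g = -alpha**2 * eps / beta**2
    if g >= 0:
        K = ellipk(g)
    else:
        gamma = -g
        K = ellipk(gamma / (1.0 + gamma)) / np.sqrt(1.0 + gamma)
    return 2.0 * K / alpha - np.pi

# Deflection decreases as alpha grows (fixed eps > 0 and impact parameter beta):
for a in (0.4, 0.6, 0.8):
    print(a, deflection_angle(a, eps=0.1, beta=5.0))
```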
We observe that as the cosmic string parameter increases, the deflection angle decreases. This trend highlights the influence of the cosmic string parameter on the deflection behavior of photon light. Since the cosmic string parameter lies in the interval 0< α <1, in the weak field limit β^2 ≫ϵ we have |g| ≪ 1, i.e. g → 0. In this limit, one can express the complete elliptic integral K(g) as K(g) = π/2 ∑^∞_n=0 ((2n)!/(2^2n (n!)^2))^2 g^n = π/2 [1-1/4 α^2 ϵ/β^2+9/64 α^4 ϵ^2/β^4-𝒪(ϵ/β^2)^3]. Thus, the deflection angle of photon light in this limit becomes δ ϕ = π (1/α-1)-π α/4 (ϵ/β^2)+9 π α^3/64 (ϵ/β^2)^2-𝒪(ϵ/β^2)^3 = π (1/α-1)-π α/4 (ϵ/β^2)+𝒪(ϵ/β^2)^2. For ϵ=-b^2<0, the space-time (<ref>) under consideration becomes a Morris-Thorne-type wormhole with cosmic strings, and the deflection of null geodesics has been studied recently in Ref. <cit.>. For α→ 1, one obtains from (<ref>) the following expression: δ ϕ=-π/4 (ϵ/β^2)+9 π/64 (ϵ/β^2)^2-𝒪(ϵ/β^2)^3. This expression represents the deflection angle for null or light-like geodesics in a wormhole space-time within the Eddington-inspired Born-Infeld (EiBI) gravity background, in the absence of cosmic strings (α→ 1). If we substitute the parameter ϵ=-b^2 into the above deflection angle (<ref>), we recover the same results obtained in an Ellis wormhole background in the weak field limit in Refs. <cit.>. Furthermore, upon examining Equation (<ref>), we can observe that the first term in the expression corresponds to the deflection caused solely by the cosmic string effect. On the other hand, the second term in the expression represents the combined contribution arising from both the EiBI gravity background and the presence of cosmic strings. In Ref. <cit.>, the authors investigated the phenomenon of gravitational lensing induced by a topologically charged EiBI monopole or wormhole space-time. They examined this effect in both the weak and strong field limits. In the case of the weak field limit, the deflection angle can be expressed as follows: δϕ=π (1/α-1)-π/(4α) (ϵ/β^2)+𝒪(ϵ/β^2)^2. Upon examining the expressions for the deflection angle, equations (<ref>) and (<ref>), it becomes evident that the angle of deflection experienced by photon rays in the presence of cosmic string effects is greater than the corresponding result (<ref>) obtained under the influence of topologically charged space-time in the EiBI gravity background. In essence, the deflection caused by cosmic strings exhibits a more pronounced effect on photon trajectories within the specific framework of EiBI gravity. § 3. CONCLUSIONS Numerous studies have examined the phenomenon of photon deflection in diverse curved spacetime backgrounds, including those generated by black holes, wormholes, and topological defects. These investigations have involved the thorough analysis of the impact of curvature on the angle of deflection experienced by null geodesics. This research focuses on examining the deflection of photon rays within the Eddington-inspired Born-Infeld (EiBI) gravity wormhole background, which incorporates cosmic strings. By employing the Lagrangian method, we have derived a one-dimensional energy expression. Our findings demonstrate that the effective potential of the system, applicable to both null and time-like geodesics, is affected by the cosmic string parameter denoted as α. 
In our study, we have produced several graphs illustrating the effective potential for both null and time-like geodesics within the Eddington-inspired Born-Infeld (EiBI) gravity wormhole background, considering different values of α and L. These graphs are presented in figures 1 to 4. Furthermore, we have analytically derived the deflection angle for null geodesics and obtained the expression (<ref>) under the weak field limit. This expression provides insights into the relationship between the deflection angle and relevant parameters in the EiBI gravity wormhole scenario. By analyzing the expression (<ref>), it becomes evident that the angle of deflection for photon rays is significantly influenced by the presence of the cosmic string parameter, denoted as α. This parameter introduces a shift in the resulting deflection angle, altering its magnitude. To illustrate this influence, we have generated graphs (figs. 5-7) showcasing the variation of the deflection angle for different values of α. These graphs provide visual evidence of how the cosmic string parameter impacts the deflection of light in our study. Moreover, it is noteworthy that the angle of deflection observed in the null geodesics is greater when compared to the previously established result for a topologically charged EiBI gravity background. This indicates that the deflection of null geodesics under the influence of the specific conditions (cosmic strings) considered in this study exhibits a more substantial deviation from the known outcomes in the context of topologically charged EiBI gravity. § DATA AVAILABILITY STATEMENT No data generated or analyzed in this study. § CONFLICT OF INTEREST No conflict of interests in this manuscript. § FUNDING STATEMENT No fund has received for this manuscript. ff1 J. B. Jimenez, L. Heisenberg, G. J. Olmo, and D. Rubiera-Garcia, Phys. Rep. 727, 1 (2018). ff2 S. Deser and G. Gibbons, Class. Quant. Grav. 15, L35 (1998). ff3 M. Born and L. Infeld, Proc. R. Soc. A 144, 425 (1934). aa1 A. Bhadra, Phys. Rev. D 67, 103009 (2003). aa2 R. Whisker, Phys. Rev. D 71, 064004 (2005). aa4 E. F. Eiroa, G. E. Romero and D. F. Torres, Phys. Rev. D 66, 024010 (2002). aa6 F. Ahmed, Adv. High Energy Phys. 2017, Article ID 3587018 (2017). aa7 F. Ahmed, Prog. Theor. Exp. Phys. 2017, 083E03 (2017). aa8 F. Ahmed and F. Rahaman, Adv. High Energy Phys. 2018, Article ID 7839619 (2018). aa9 F. Ahmed and F. Rahaman, Eur. Phys. J A 54, 52 (2018). aa10 K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 62, 084003 (2000). aa13 M. C. Werner, Gen Rel. Grav. 44, 3047 (2012). aa14 K. Jusufi, M. C. Werner, A. Banerjee and A. Övgün, Phys. Rev. D 95, 104012 (2017). aa24 T. Ono, A. Ishihara and H. Asada, Phys. Rev. D 99, 124030 (2019). aa15 N. Tsukamoto, Phys. Rev. D 95, 064035 (2017). aa19 K. Jusufi, I. Sakalli and A. Övgün, Phys. Rev. D 96, 024040 (2017). aa22 K. Jusufi and A. Övgün,, Phys. Rev. D 97, 064030 (2018). aa23 A. Övgün, I. Sakalli and J. Saavedra, JCAP 10 (2018) 041. aa32 I. N. Huda and H. S. Ramadhan, J. Phys.: Conf. Ser. 1816, 012021 (2021). aa34 A. Övgün, I. Sakalli and J. Saavedra, Ann. Phys. 411 (2019) 167978. aa35 N. Tsukamoto, Phys. Rev. D 103, 024033 (2021). aa37 K. Jusufi, A. Övgün, J. Saavedra, Y. Vasquez and P. A. Gonzalez, Phys. Rev. D 97, 124024 (2018). JRN J. R. Nascimento, A. Yu. Petrov, P. J. Porfirio, A. R. Soares, Phys. Rev. D 102, 044021 (2020). aa38 T. Ono, A. Ishihara and H. Asada, Phys. Rev. D 96, 104037 (2017). aa39 V. Bozza, Gen. Relativ. Gravit. 42, 2269 (2010). ARS C. Furtado, J. R. 
Nascimento, A. Yu. Petrov, P. J. Porfírio, A. R. Soares, Phys. Rev. D 103, 044047 (2021). aa44 K. Jusufi, N. Sarkar, F. Rahaman, A. Banerjee, and S. Hansraj, Eur. Phys. J. C 78, 349 (2018). aa26 V. Perlick, Phys. Rev. D 69, 064017 (2004). bb1 I. Sakalli and A. Övgün, EPL 118, 60006 (2017). bb2 A. Övgün, G. Gyulchev and K. Jusufi, Ann. Phys. (N. Y.) 406, 152 (2019). aa33 A. R. Soares, R. L. L. Vitoria and C. F. S. Pereira, arXiv: 2305.11105 [gr-qc]. aa3 K. K. Nandi, Y. Z. Zhang and A. V. Zakharov, Phys. Rev. D 74, 024020 (2006). RS R. Shaikh, P. Banerjee, S. Paul and T. Sarkar, JCAP 07 (2019) 028. RS2 R. Shaikh, P. Banerjee, S. Paul and T. Sarkar, Phys. Lett. B 789, 270 (2019). aa36 A. Övgün, Phys. Rev. D 98, 044033 (2018). aa16 K. Jusufi and A. Övgün, Phys. Rev. D 97, 024042 (2018). aa21 T. Ono, A. Ishihara and H. Asada, Phys. Rev. D 98, 044047 (2018). aa17 K. Jusufi, A. Övgün and A. Banerjee, Phys. Rev. D 96, 084036 (2017). aa25 T. Müller, Phys. Rev. D 77, 044043 (2008). aa27 K. K. Nandi, Y.-Z. Zhang and A. V. Zakharov, Phys. Rev. D 74, 024020 (2006). aa18 N. Tsukamoto, Phys. Rev. D 94, 124001 (2016). aa28 K. Nakajima and H. Asada, Phys. Rev. D 85, 107501 (2012). aa20 N. Tsukamoto and Y. Gong, Phys. Rev. D 97, 084051 (2018). aa40 G. W. Gibbons and M. Vyska, Class. Quantum Grav. 29, 065016 (2012). aa411 N. Tsukamoto, T. Harada, and K. Yajima, Phys. Rev. D 86, 104062 (2012). aa43 N. Tsukamoto and T. Harada, Phys. Rev. D 95, 024030 (2017). aa42 C.-M. Yoo, T. Harada and N. Tsukamoto, Phys. Rev. D 87, 084045 (2013). FB F. Abe, Astrophys. J. 725, 787 (2010). FB2 Y. Toki, T. Kitamura, H. Asada and F. Abe, Astrophys. J. 740, 121 (2011). aa29 K. Jusufi, Phys. Rev. D 98, 044016 (2018). aa31 F. Ahmed, EPL 142, 39002 (2023). aa30 H. Aounallah, A. R. Soares and R. L. L. Vitoria , Eur. Phys. J. C 80, 447 (2020). aa41 K. A. Bronnikov and K. A. Baleevskikh, Gravit. Cosmol. 25, 44 (2019). bb4 T. Manna, F. Rahaman and T. Chowdhury, New Astronomy 102, 102040 (2023). AV A. Vilenkin and E. P. S. Shellard, Strings and Other Topological Defects, Cambridge University Press, Cambridge (1994). GAM G. de A Marques and V. B. Bezerra, Class. Quantum Grav. 19, 985 (2002). GAM2 G. de A Marques and V. B. Bezerra, Phys. Rev. D 66, 105011 (2002). GAM3 G. de A. Marques, V. B. Bezerra and S. G. Fernandes, Phys. Lett. A 341, 39 (2005). JRG J. R. Gott III, Astrophy. J. 288, 422 (1985). HGE H. G. Ellis, J. Math. Phys. 14, 104 (1973). KAB K. A. Bronnikov, Acta Phys. Pol. B 4, 251 (1973). MT M. Morris, K. Thorne and U. Yurtsever, Phys. Rev. Lett. 61, 1446 (1988). MT2 M. S. Morris and K. S. Thorne, Amer. J. Phys. 56, 395 (1988).
http://arxiv.org/abs/2306.02637v1
20230605071237
Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments
[ "Taeho Kang", "Christian Wallraven" ]
cs.HC
[ "cs.HC" ]
Arxiv version, July 2023 Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments Taeho Kang, Christian Wallraven July 31, 2023 Virtual Reality (VR) is seeing increased adoption across many fields. Experimental cognitive science is also testing the technology in combination with physiological measures such as electroencephalography (EEG) and eye tracking. Quantitative measures of human behavior and cognitive processing, however, are sensitive to timing at scales that are often overlooked in consumer-level VR hardware and software stacks. In this preliminary study, we implement VR testing environments in two prominent 3D engines (Unity and Unreal Engine) to measure the latency from stimulus onset code execution to Head-Mounted Display (HMD) pixel change, as well as the latency from a human behavioral response input to its registration in the engine environment, under a typical cognitive experiment hardware setup. We find that, although the specific latencies are influenced by the particular hardware and software setup, considerable variation is apparent in consumer hardware regardless, and we report detailed statistics on these latencies. These considerations should be taken into account when designing VR-based cognitive experiments that measure human behavior. Virtual reality, VR, EEG, cognitive experiments, human behavior, behavioral measurements, eye-tracking, latency, response time § INTRODUCTION The idea of utilizing naturalistic stimuli in cognitive experiments is increasingly gaining traction, and its importance has been recognized in a growing number of studies <cit.>. 3D environments such as Virtual and Mixed Reality (VR/MR) provide an excellent platform for implementing experimental paradigms where immersive, interactive, and naturalistic stimulus presentation is desired <cit.>. Behavioral and cognitive experiments performed in VR, in particular, have the advantage of being able to control and manipulate environmental variables that would be nearly impossible to control in real-life settings <cit.>. Possibly due to this, virtual reality has seen increased utilization in behavioral and cognitive investigations, from simple behavioral experiments <cit.>, to neuroimaging studies <cit.>, to timing-sensitive studies involving physiological signal measurements such as EEG <cit.>. Given the nature of the cognitive processes of interest, studies investigating timing-critical brain processes have historically been sensitive to latency in experimental hardware and software <cit.>. It has been suggested, however, that specialized behavioral input devices may not be crucial even for time-critical experiments, as the variability of human behavior itself is generally larger in scale than the input lag introduced by individual hardware devices <cit.>. For ease of equipment acquisition, which in turn facilitates the replication of studies, adequately performant consumer hardware may be preferable to costly and less widely available specialized equipment for behavioral input.
Nonetheless, especially in studies measuring time-sensitive behaviors, it is important to measure the expected latency of the hardware and software setups used for experimental paradigms. Wimmer et al. <cit.> used opto-couplers to measure the latency of 36 different serial input devices connected to a Raspberry Pi, formed probability distribution models for each device, and reported distinct input latency distributions per device, suggesting the need to measure input latency in interactive experimental setups that rely on serial user input devices. Furthermore, because of graphical computation requirements that exceed those of conventional displays (arising not only from generally higher refresh rates but also from factors such as rendering twice for stereo vision), current VR hardware suffers from greater latency than conventional user interface devices <cit.>. A final point of consideration in this context concerns the use of higher-level APIs for generating three-dimensional, interactive environments that afford high levels of sensory realism and interactivity. While it is possible to create well-controlled low-level stimuli with relative ease in computer graphics languages, the amount of work necessary to create, from scratch, environments in which, for example, objects interact with each other in a physically realistic fashion is beyond the capabilities of standard cognitive and behavioral research labs. For this reason, many researchers have increasingly turned to 3D game engines for creating such environments. One drawback of this development is that these engines offer only a reduced degree of control over their timing internals, given that much of the behind-the-scenes calculation remains hidden from the API user (examples include the calculation of graphic primitives and the determination of collisions in physics-aware simulations). This raises the question of how much timing accuracy and precision is possible in game-engine programming environments. In this context, Wiesing et al. <cit.> measured stimulus duration and onset in Unreal Engine with a dedicated response pad and reported increased average latency compared to dedicated cognitive experiment software such as PsychoPy and Psychtoolbox. While Unreal Engine has been used for VR-based behavioral experiments <cit.>, Unity has seen increasing application in cognitive experiments <cit.>, possibly due to its relative ease of implementation. Although suited for similar purposes, Unity and Unreal Engine often exhibit different behavior even when the same effect is intended, owing to differences in implementation details, especially with respect to frame- and I/O-related latency <cit.>. In light of these considerations, and with the ultimate aim of implementing experiments that investigate brain processes in a naturalistic VR environment, we deem it worth investigating the expected latency of hardware and software setups that would commonly be used in VR-based behavioral experiments. In this study, we do so using a measuring apparatus built around an oscilloscope, together with a bare-bones experimental paradigm implemented in two widely used VR-capable 3D engines, Unity and Unreal Engine.
In the bare-bones paradigm, we create stimulus onsets that send trigger codes to the measuring apparatus before the actual stimulus is displayed on the Head-Mounted Display (HMD), and we measure the latency between the onset code and the actual pixel change in the HMD. Furthermore, we measure the latency between a physical input action on consumer-level user interface hardware (a keyboard) that can be used for behavioral responses and the registration of that input in the 3D engines. Lastly, we measure the latency between the physical input action and the pixel change caused by the resultant feedback code execution. § MATERIALS AND METHODS §.§ Experiment Design We were interested in measuring the following events: 1) the latency between stimulus onset code execution in the 3D engine and the actual HMD pixel change as the stimulus was presented (Stim2Disp), 2) the latency between a participant's behavioral response by keypress and the code executed immediately upon registration of the key event in the 3D engine (Key2Led), and 3) the latency between a participant's behavioral response by keypress and the HMD pixel change caused by the 3D engine code that presents a visual feedback stimulus upon key event registration (Key2Disp). To measure these events, we implemented experimental paradigms in both Unity and Unreal Engine, as shown in Figure <ref>. We measured both 3D engines because their implementations differ, as does the scripting language for user code: Unity uses C#, whereas Unreal uses C++. Both engines have been used in cognitive experimental designs in 3D environments (see Introduction). The 3D VR experiment environment consists of a basic spherical object that can move around based on the user's input. Upon stimulus onset code execution or recognition of a participant's behavioral response on the keyboard, a black-and-white chessboard grid covers the entire screen for one frame, followed by the color-inverted grid for another frame. The experiment code embodies a bare-bones form of a cognitive experimental paradigm using 3D engines, and we send programmatic triggers upon stimulus onset and behavioral input registration, as one would for experiments requiring high temporal resolution, such as EEG or other cognitive experiments involving time-series physiological measurements. We measure the latency of the three scenarios above (Stim2Disp, Key2Disp, Key2Led) by sending programmatic markers to the Arduino upon stimulus onset code execution and upon behavioral response registration in the 3D engine, which triggers an LED. Furthermore, we use pressure sensors and photodiodes connected to the Arduino board to obtain quantifiable measures of when participant behavior and stimulus display happen in the real world. §.§.§ Unity-specific setup For Unity, we implemented the paradigm in Unity Engine version 2019.4.20f. Following Unity's manual on the order of execution for event functions (see <https://docs.unity3d.com/Manual/ExecutionOrder.html>), the FixedUpdate() function handles ticks for computations focused on the physics engine and may run more than once per rendered frame depending on the computation load and settings, while the Update() function ticks once per rendered frame (after the FixedUpdate() calls are complete).
The events are called serially, and between the Update() call and the actual rendering of the scene on the display there are several other calls; as such, in the interest of logging the stimulus onset trigger as close as possible to the onset of the actual stimulus on the display, it is preferable to send the marker code sometime after the Update() call but before the frame is presented. We achieve this by calling a coroutine that defers sending the stimulus onset trigger until the rendering computation is complete but before the frame is displayed. Furthermore, as the FixedUpdate() call executes at the beginning of the game tick and runs at a higher rate, it is preferable to process keyboard input events (i.e., behavioral responses) there. For the Key2Led and Key2Disp events, the function handling keyboard input events then invokes the same coroutines to send the markers. §.§.§ Unreal-specific setup For the Unreal Engine, we implemented the paradigm in version 4.26. Unreal Engine logic ticks are separated into tick groups (PrePhysics, DuringPhysics, PostPhysics, and PostUpdateWork) that are run serially, as per the documentation (<https://docs.unrealengine.com/5.1/en-US/actor-ticking-in-unreal-engine/>). Processing of user input is handled in the PrePhysics segment of the tick, so binding the relevant keys to a method that sends the trigger is sufficient. By calling the stimulus onset trigger in code executed in the PostPhysics or PostUpdateWork segment, we can likewise keep it as close as possible to the timing of the actual display. To tune Unreal Engine for optimal performance, several project settings were changed in addition: First, in the Rendering->VR settings, Instanced Stereo was enabled while Mobile HDR was disabled, as per recommendations by <cit.>. Second, the following console variables were changed: R.GTSyncType to 1, R.Vsync to 0, and R.OneFrameThreadLag to 0. R.GTSyncType determines which thread the game thread syncs to: with 0 it syncs to the rendering thread, and with 1 it syncs to the RHI (render hardware interface, e.g., Direct3D or OpenGL) thread. As syncing to the RHI thread helps with input latency according to the Unreal documentation, we set it to 1. VSync renders frames at the pace the display device is capable of, but it often leads to more dropped frames when enabled than otherwise <cit.>. When OneFrameThreadLag is enabled, the graphics drivers keep the game thread from processing more than one frame's worth of computations ahead of what is currently being displayed. We deemed this undesirable, as our purpose was to minimize lags stemming from computations not being far enough ahead, along with minimizing input latency. §.§ Measuring apparatus To measure the timings of 1) behavioral response onset, 2) stimulus onset code execution, 3) feedback code execution in response to behavior, and 4) pixel changes on the HMD as precisely as possible, the circuit apparatus shown in Figure <ref> was implemented. The circuit was inspired by a schematic from class material in Aachen University's system design course <cit.>. Specific components of the circuitry included an Arduino Uno Rev. 3, a BPW-34 photosensitive diode developed by Vishay Semiconductors, and an FSR402 pressure sensor developed by Interlink Electronics. For registering the behavioral response, a Wooting One keyboard developed by Wooting was used.
For running the 3D-engine-based experimental paradigms, a Windows 10 computer with an AMD 5900X CPU and an Nvidia RTX 3090 Ti was used. As the actual display of the VR environment, an Oculus DK2 headset from Meta Inc. was used, to which the photodiode was attached next to the display. A USB-based oscilloscope developed by Pico Technology (PicoScope series 2205A) with two probes was used to measure the changes in voltage. The oscilloscope's sampling frequency was set to 240 kHz. The ground clamps of both probes were connected to the ground pin cable of the Arduino board. As the Oculus DK2's refresh rate is 75 Hz, we band-pass filtered the probe connected to the photodiode to 60-80 Hz. The sample collection length per trial was set to 200 ms, with 20 ms pre-trigger and 180 ms post-trigger. For the Stim2Disp measurements, the first probe was clamped to the LED that was toggled on and off by the stimulus control code in the 3D engine, while the second probe was clamped to the cable connected to the HMD-attached photodiode. The scope data collection trigger was set to a rising threshold of 1.5 V for Unity and 115 mV for Unreal, with a hysteresis of 5.87%, on the first probe. All scope trigger thresholds were set manually after trial and error to catch the events of interest; the difference in thresholds stems from the probe attenuator settings being changed between the two sets of measurements, but it did not interfere with the trigger adequately capturing the point where the event occurred. This was verified after the measurements by visually inspecting the probe waveforms for peaks from LED and keypress actions. In the Key2Disp and Key2Led measurements, the first probe was clamped to the cable of the pressure sensor attached to the keyboard. Here again, owing to different probe attenuator settings, the probe trigger threshold was set to a 450 mV rise with 2.44% hysteresis for Unreal, and a 4.7 V rise for Unity. The second probe was connected to the LED in the Key2Led measurements and to the photodiode in the Key2Disp measurements. The Arduino was connected to the experiment PC via USB, through which LED trigger commands were sent from the 3D engines via serial communication. For each latency event of interest, we made at least 300 repetitions of the measurement in order to collect a sufficient sample size. §.§ Data processing Data preprocessing and analysis were performed with Matlab 2021b by MathWorks Inc. As the scope sampling rate was rather high considering our time epochs of interest, the data were first downsampled to 20 kHz. As the data epochs were temporally zero-centered on the triggering event of the first probe, the timing of the events of interest on the second probe (photodiode voltage change, LED power-on) had to be found by peak detection, as the onset of the events of interest results in significant changes in the probe voltage. In photodiode measurements this meant voltage troughs far greater than those of the baseline pixels (as the black-and-white grids trigger a greater change in display luminosity, leading to greater voltage changes). For all measurements on the second probe, as we were looking for the latency of the onset of the event of interest, only the timing of the first significant peak was needed. Finding the position of the peaks was performed with the findpeaks() function provided in the Signal Processing Toolbox of Matlab.
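For illustration, an equivalent peak-based latency extraction can be sketched in Python; the actual analysis was performed in Matlab, and the epoch layout, filter order, and baseline-derived peak threshold below are assumptions made for this sketch.

# Illustrative Python analogue of the Matlab latency extraction
import numpy as np
from scipy.signal import decimate, butter, sosfiltfilt, find_peaks

FS_RAW = 240_000      # oscilloscope sampling rate (Hz)
FS_TARGET = 20_000    # analysis sampling rate (Hz)

def preprocess(trace, fs_raw=FS_RAW, fs_target=FS_TARGET, band=(60.0, 80.0)):
    """Downsample one single-trial probe trace and band-pass it around the
    75 Hz HMD refresh rate, as done for the photodiode probe."""
    x = decimate(trace, fs_raw // fs_target, zero_phase=True)
    sos = butter(2, band, btype="bandpass", fs=fs_target, output="sos")
    return sosfiltfilt(sos, x)

def first_event_latency(trace, fs=FS_TARGET, pre_trigger_s=0.020, min_height=None):
    """Latency (s) of the first significant peak relative to the trigger,
    which sits pre_trigger_s after the start of the epoch."""
    if min_height is None:
        # threshold derived from the pre-trigger baseline segment
        min_height = 5 * np.std(trace[: int(pre_trigger_s * fs)])
    peaks, _ = find_peaks(np.abs(trace), height=min_height)  # abs() also catches troughs
    if len(peaks) == 0:
        return np.nan
    return peaks[0] / fs - pre_trigger_s

# usage with a (trials x samples) array of photodiode epochs (hypothetical file name):
# epochs = np.load("photodiode_epochs.npy")
# latencies = [first_event_latency(preprocess(e)) for e in epochs]
# print(np.nanmean(latencies), np.nanstd(latencies))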
The resultant peaks were plotted and manually inspected for a sufficient number of trials (>100) in each condition to ensure the function was performing as desired. Once the timing of the events of interest was found, we calculated the latency for the three events of interest. § RESULTS From stimulus onset marker code execution to the actual onset of the chessboard grid on the HMD pixels, Unity Engine showed an average latency of 10.777 ms (SD 0.672), while Unreal Engine showed an average latency of 21.059 ms (SD 0.671). From behavioral keypress onset detection to chessboard grid onset on the HMD, an average latency of 47.026 ms (SD 6.156) was observed in Unity and 46.682 ms (SD 4.499) in Unreal Engine. In a separate session measuring the latency between physical keypress detection and LED onset upon keypress registration in the 3D engine, we found an average latency of 36.948 ms (SD 4.911) in Unity and 25.161 ms (SD 5.087) in Unreal. Table <ref> shows the summarized results. Figures <ref> (Stim2Disp), <ref> (Key2Disp), and <ref> (Key2Led) each show the probe measurements for all individual trials superimposed in the top plot (with the detected response peaks as black scatter points) and the averaged measurements in the lower plot. Two-sample t-tests between the two 3D engines for the Stim2Disp condition showed a significant difference in latency between Unity and Unreal (t(df=641) = 60.537, p < 1e-100, SD = 0.665). Similarly, a significant difference was observed between Unity's and Unreal's latency in the Key2Led condition (t(df=735) = 31.900, p < 1e-100, SD = 4.991). No significant difference in latency was found between Unity and Unreal for the Key2Disp condition (t(df=713) = 0.833, p = 0.405, SD = 5.484). § DISCUSSION This study aimed to make precise measurements of the latencies that may occur during time-critical cognitive behavioral experiments in 3D-engine-based virtual reality environments. To achieve this, we implemented a bare-bones 3D environment in both Unity and Unreal Engine, two prominent 3D engines used to develop virtual reality and 3D scenarios for general purposes. We implemented latency measurements in three different scenarios: one in which a marker event for stimulus presentation was sent and rendered on the display, one in which a physical key press by a participant occurred and was registered in the 3D engine, and one in which a key press occurred and the resultant feedback appeared on the display. We used oscilloscope probes combined with photodiodes for display events, serial-communication-triggered LEDs for software events, and pressure sensors for physical keypress events in order to make precise timing measurements of when each of these events occurred. We first discuss the difference in average latency between the Key2Disp and Key2Led conditions: although measurements for the two conditions were made separately (as Key2Led involved sending an LED trigger to the Arduino upon 3D engine recognition of the key event), and considering that our experiments were designed to be as basic as possible in terms of code implementation (as well as the sufficient sample size for each condition), at a glance one would expect the sum of the Stim2Disp and Key2Led latencies to match, or be somewhat less than, the average Key2Disp values. In Unity, the sum of the averages of the two conditions slightly exceeds the average Key2Disp latency. We believe this is understandable considering the communication time between the PC software and the Arduino interface itself.
In a previous study by Schubert et al. <cit.>, downstream communication from an experimental computer to an Arduino was measured at an average of 1.251 ms with a low standard deviation. Considering the difference between the sum of the mean Key2Led and Stim2Disp latencies and the mean Key2Disp latency itself, this communication time appears sufficient to explain the somewhat larger combined latency. §.§ Registering behavioral response without Unity or Unreal with LSL From our current set of results, it appears that the largest obstacle to maintaining reliable latency for cognitive experiments lies in registering behavioral responses from the user I/O device. As can be observed from the results of the Key2Led condition, in both 3D engines this latency, from the physical response event to its registration in the software stack, is the largest and the most variable. It is possible that the nature of serial port devices contributes a large part of this variance: parallel-port-connected devices have been known to be favored over serial port connections for participant I/O in timing-critical experimental designs <cit.>. However, serial port device technology has come a long way, and it has been suggested that the imprecision arising from user input devices may not be as critical as previously believed <cit.>. In older studies comparing serial and PS/2 devices, serial input devices were reported to have much higher input latency with high variance <cit.>. In modern devices, however, the latency gap between serial-port-based and parallel-port devices may have become less considerable: response pad hardware specifically used for cognitive experiments, such as Cedrus pads, uses serial USB connections. Furthermore, the keyboard used in our experiment was a mechanical keyboard with optical switches for faster input recognition on the hardware side, along with polling rates above 100 Hz. The software stack also plays as large a part in the mean and variation of the latency as the hardware stack does. It has been reported that the experimental software framework as well as the operating system can contribute to differently distributed latencies and missed frame counts <cit.>. In this section, we look into lowering the input variance and latency further by utilizing a software stack independent of the 3D engines. In light of these considerations, we performed another set of measurements, this time using software outside the 3D engines for key event recognition. Lab Streaming Layer (LSL) <cit.> is a C++-based system for synchronizing experimental data from multiple sources through a unified clock (see <https://github.com/sccn/labstreaminglayer> for more information). LSL has been used for cognitive studies involving physiological signal measurements in which timing was critical <cit.>. It supports bindings in multiple programming languages, as well as writing functions for adding custom data sources to the data streaming system. We modified C++ callback code available in the LSL GitHub repository to catch specific key events and send Arduino LED events similar to the Key2Led condition, but bypassing the 3D engines for key event recognition and capturing the events directly at the OS level. The results from the set of measurements using LSL and a C++ callback function for key events can be seen in Figure <ref>.
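An analogous marker pipeline can also be sketched in Python for illustration; the measurements reported here used the C++ callback described above, and the pylsl, pynput, and pyserial libraries, as well as the serial port name, are assumptions made for this sketch.

# Illustrative Python analogue of the OS-level key logging approach
import serial                      # pyserial, for the Arduino LED trigger
from pylsl import StreamInfo, StreamOutlet, local_clock, cf_int32
from pynput import keyboard

arduino = serial.Serial("COM3", baudrate=115200, timeout=0)   # hypothetical port

# One-channel, irregular-rate marker stream that any LSL recorder can consume
info = StreamInfo(name="KeyMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format=cf_int32,
                  source_id="vr-latency-demo")
outlet = StreamOutlet(info)

def on_press(key):
    # Forward the key event immediately: LED trigger for the scope,
    # time-stamped marker for the LSL recording.
    arduino.write(b"\x01")
    outlet.push_sample([1], local_clock())

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()                # block until the script is interrupted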
With a mean latency of 9.950 ms (SD 1.700) from the physical key press event to the Arduino sensor trigger, we see much lower average latency levels, comparable to older PS/2 devices, as well as more stable variation in the latency. By logging keypress events or participant behavioral input through separately run programs such as LSL, we believe some of the issues regarding input lag variation in experiments using graphics- and compute-heavy 3D engines can be alleviated. §.§ Stimulus presentation code to auditory stimulus onset delay in Unity In interactive experiments utilizing VR technology, especially those that aim to create a naturalistic, immersive experimental environment, it is often worth considering multisensory stimulus presentation. The addition of auditory components to the visual stimulus presentation would create a much more immersive VR simulation. Like visual stimuli, auditory stimuli need to be presented with considerable timing precision for event-related designs that measure physiological and behavioral responses to stimuli. We therefore deemed it worth investigating the latency between the auditory stimulus onset code and the physical propagation of the sound. In the case of Unity, one can use the base sound functionality provided by the engine or third-party sound engines compatible with it, such as FMOD (<https://www.fmod.com/>). While the default sound library in Unity does not provide many tweaking options to optimize for performance, FMOD allows manually setting sound playback buffer sizes. For this study, we used a sound file from <cit.> as the playback stimulus, either using Unity's default sound library or using FMOD with a buffer size of either 512 or 1024. A line-out cable (3.5 mm M/M) was plugged into the speaker jack of the experimental computer, with the other end connected to the oscilloscope probe, as shown in Figure <ref>. We performed measurements similar to the Stim2Disp condition, measuring the latency between the stimulus onset code and the actual propagation of the sound in the audio cable. We report the results in Figure <ref>. While using FMOD with a buffer size of 512 yielded the best results, we observe that the latency for auditory stimulus presentation is much worse than for visual stimuli, both in mean and in variation. Based on this observation, we believe caution is warranted when using auditory stimuli, especially in the absence of a concurrent visual stimulus. §.§ Future works As Wimmer et al. <cit.> reported, specific latency measures for an experimental setup depend strongly on the particular hardware and software acquired for the experiment. Considering the continuously developing landscape of VR and its related hardware/software stack, simply measuring the latency of each setup is not only insufficient but a fruitless endeavor in the long term. Instead, it would be more prudent to develop a framework capable of measuring delays for a configurable setup on the go; this is our most immediate next step. Furthermore, the purpose of establishing latency distributions is ultimately to utilize them in the development of VR-based behavioral experiments with event-related designs that collect synchronized, time-dependent behavioral and physiological data; for our purposes of investigating underlying brain processes, we are especially interested in utilizing these findings to create latency-optimized VR EEG experiments in immersive, naturalistic 3D.
§ ACKNOWLEDGMENTS This study was supported by the National Research Foundation of Korea under project BK21 FOUR and grants NRF-2022R1A2C2092118, NRF-2022R1H1A2092007, NRF-2019R1A2C2007612, as well as by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning; No. 2019-0-00079, Department of Artificial Intelligence, Korea University; No. 2021-0-02068, Artificial Intelligence Innovation Hub).
http://arxiv.org/abs/2306.11503v1
20230609155510
The Age of Synthetic Realities: Challenges and Opportunities
[ "João Phillipe Cardenuto", "Jing Yang", "Rafael Padilha", "Renjie Wan", "Daniel Moreira", "Haoliang Li", "Shiqi Wang", "Fernanda Andaló", "Sébastien Marcel", "Anderson Rocha" ]
cs.CY
[ "cs.CY", "cs.AI", "cs.LG" ]
The Age of Synthetic Realities: Challenges and Opportunities João Phillipe Cardenuto^1, Jing Yang^1, Rafael Padilha^1, Renjie Wan^2, Daniel Moreira^3, Haoliang Li^4, Shiqi Wang^5, Fernanda Andaló^1, Sébastien Marcel^6,7 and Anderson Rocha^1 1Artificial Intelligence Lab., <Recod.ai>, Institute of Computing, University of Campinas, Campinas, SP, Brazil. 2Department of Computer Science, Hong Kong Baptist University, Hong Kong. 3Department of Computer Science, Loyola University Chicago, Chicago, IL, USA. 4Department of Electrical Engineering, City University of Hong Kong, Hong Kong. 5Department of Computer Science, City University of Hong Kong, Hong Kong. 6Idiap Research Institute, Martigny, Switzerland. 7University of Lausanne, Lausanne, Switzerland. João Phillipe Cardenuto [email protected] Synthetic realities are digital creations or augmentations that are contextually generated through the use of Artificial Intelligence (AI) methods, leveraging extensive amounts of data to construct new narratives or realities, regardless of the intent to deceive. In this paper, we delve into the concept of synthetic realities and their implications for Digital Forensics and society at large within the rapidly advancing field of AI. We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality. This is especially important in scenarios involving the creation and dissemination of fake news, disinformation, and misinformation. Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations. Additionally, we shed light on the key research challenges that lie ahead in this area. This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science. § INTRODUCTION In the last decade, there has been a growing expectation that Artificial Intelligence (AI) companies and researchers would dedicate their efforts to integrating humans into the digital realm or the so-called Metaverse. However, while technologies like Augmented Reality (AR) and Virtual Reality (VR) have been topics of discussion for a long time, it is only recently that significant technological advancements have made it possible to materialize such systems. This version of a synthetic reality has been technically discussed at least since the 90s <cit.>, and this was the main vision of where AI and related technologies would take us. However, what came as a less expected outcome is that these very technologies would inundate our physical world with content and creations, profoundly transforming our interactions with the virtual realm and reshaping how we engage with one another. This more complex notion of synthetic reality has been a topic of discussion by the greatest minds of our time <cit.>. On the one hand, some hold a positive perspective, recognizing the immense advantages it can bring in domains such as automation, healthcare, and innovation. On the other, a group expresses concerns about the potential perils posed by AI, such as the generation of propaganda and untruth. They even advocate for temporary halts in AI experimentation within laboratories <cit.> to allow time for legal and ethical considerations to align with the pace of progress.
We can adopt a more pragmatic perspective and carefully embrace this emerging paradigm. This entails rapidly adapting ourselves and our societies and understanding and revitalizing our scientific endeavors. Hence, we redefine the term “synthetic realities” herein as any contextual digital creation or augmentation enabled by artificial intelligence methods. These techniques/models draw upon massive amounts of data leading to a new “reality” or narrative regardless of its intention to deceive the individual interacting with it. When the synthetic creation harms individuals, minorities, human rights, or the rule of law, it is paramount to devise forensic techniques to pinpoint such creations and separate what is real from what is synthetic. As an example, consider the creation of a fake news piece. Someone could fabricate a story from scratch using a chatbot, illustrate it with a synthetic image and a video, and then broadcast it to the world as if it were real via social media. Consequently, Forensic Science has been continually adapting to these evolving circumstances. Rooted in the foundational principle that “every contact leaves a trace”, coined by researcher Edmond Locard <cit.>, Forensic Science asserts that every interaction between individuals, objects, and places leaves behind a trail of evidence. While this concept was initially centered around physical traces like fingerprints, footprints, and blood, it has recently expanded to encompass digital counterparts such as photos, audio, video, and social media posts <cit.>. The revolving question around digital evidence is: “Are they fake or not?”, as manipulating these multimedia assets can be quickly done with simple and inexpensive tools. Moreover, credible manipulations can be used to fabricate more believable multimedia stories. Research in Multimedia Forensics has yielded important approaches to detecting altered media <cit.>. More specifically, progress has been made in analyzing digital media (image, video, and audio), identifying manipulations <cit.>, tracing provenance <cit.>, and establishing links with other digital evidence <cit.>. However, the emerging concept of synthetic reality paints an even more unsettling scenario: around 90% of the digital content will be synthetic in the upcoming years, meaning that almost all content will be generated synthetically by definition <cit.>. The distinction between what is genuine and what is fake takes on a new meaning. This phenomenon becomes evident in various domains, including movies <cit.>, social media <cit.>, marketing <cit.>, and education <cit.>. Notably, companies are now exploring the adoption of AI-generated models to promote their products <cit.> or employing AI to simulate eye contact in video conferencing software, enhancing the sense of connection during remote interactions <cit.>. Schools worldwide are banning chatbots and the use of generative AI on their networks in response to concerns about students submitting unauthentic and potentially plagiarized work <cit.>. The examples are many when thinking about how AI is shaping our reality. Therefore, Forensic Science has to adapt yet again to this new reality. To expose synthetic content and tell apart malicious from harmless manipulations or even positive creations, there is one new key element: context. Contextual information can be leveraged to understand the semantics behind media objects. 
This can empower fact-checking solutions that mitigate the effect of falsified news, misinformation, and false political propaganda. To that end, and following a cognitive science interpretation, we can view digital objects through three perspectives: technological artifacts, sources of information, and platforms to convey ideas. Traditional forensic techniques thus far have primarily focused on the first perspective. Analysts examined an asset as a digital signal and aimed to detect any possible artifact related to pixel-level or physical-level inconsistencies (e.g., concerning compression, sensor noise, illumination, shadows) to establish its authenticity. The second perspective pertains to standard fact-checking procedures going beyond multimedia forensics. When treating an object as a source of information, it is essential to identify (or know) the acquisition device and to identify the time and location where it was produced as basic steps towards a fact-checking effort. The third perspective considers a digital object as a platform for conveying ideas. Determining the intentional goal of the asset leads to answering the question of why something happened. The answer to this question and the result of forensic analyses from the other two perspectives can reveal the ultimate goal of the falsified information. For example, it can help identify if there is an ongoing campaign to bias public opinion, influence the mood of a social group, or even incite a group to articulate plans for violent acts. The latest advancements in AI have compelled forensic techniques to navigate the intricacies between these perspectives. Taking this into consideration, we focus on studying synthetic realities in different forms of media: images, videos, audio, and text. In the remainder of this paper, we discuss how synthetic media is created, considering each of these modalities, and the implications for Digital Forensics when such creations intend to harm third parties in different ways. We explore how to detect such malicious creations and pinpoint key research challenges that lie ahead. This is particularly significant due to the remarkable progress of AI generative techniques in generating realistic content and effectively concealing the typical artifacts left behind during the creation process. Each new generative method aims at creating ever-more-believable realities, thus directly colliding with Locard's principle, the cornerstone of forensics. § SYNTHETIC IMAGES The proliferation of sophisticated synthetic and manipulated media has captured people's attention worldwide. For forensic researchers, this surge in synthetic reality evokes the daunting scenario reminiscent of the early days of Digital Forensics, where image editing was recognized as a powerful tool capable of altering reality <cit.>. While traditional manual image-editing software like Photoshop and GIMP continue to improve, they are being overshadowed (or enhanced, in the particular case of Photoshop <cit.>) by the emergence of powerful AI-based generative techniques. Today, it has become remarkably effortless to transform a simple concept or idea into a realistic image, with no requirement for drawing or painting skills to produce stunning, high-quality results. Amid this rapidly evolving landscape lies generative models. 
Generative images have outstanding widespread applications in entertainment, as reviving legendary artists <cit.>; healthcare, as aiding surgeons in developing new abilities <cit.>; and accessible tools, as serving people with disabilities <cit.>. However, many other harmful uses have been reported, such as nonconsensual DeepFake porn <cit.>, misinformation generation <cit.>, and sophisticated types of scams <cit.>. Scientific integrity researchers are also concerned that such technology would create fraudulent synthetic images in science <cit.> and the medical area <cit.>, in particular. Given the alarming potential for misuse of generative models in creating synthetic realities, this section delves into state-of-the-art generative models and the detection of AI-generated images, providing perspectives on the future of synthetic images. §.§ Image Synthesis The accelerated research on generative approaches in recent years gave birth to a plethora of AI models and techniques that are developed and open-sourced to the community. They are coupled with easy-to-use environments and applications <cit.>, allowing users to freely explore and share their creations. The increased accessibility further expanded the hype in image synthesis, fostering novel use cases and commercial applications that range from outpainting famous art pieces <cit.> (i.e., synthetically extending the borders of an image) to designing political campaign ads <cit.>. With the increased interest in the topic from research and industry communities and the rapid development of techniques, one can safely assume that not all synthetic images are born equally. We can categorize existing approaches by how the generation task is conditioned and what family of AI models they rely on. The generation task defines the goal of the method and, consequently, how it learns to map the expected input to a synthetic output image. Additionally, the input data modality conditions the generation process into expressing particular visual concepts and characteristics desired by the user <cit.>. The most common tasks fall into text-to-image or image-to-image generation. Popularized by recent applications such as MidJourney <cit.> and DreamStudio <cit.>, text-to-image generation involves a natural language prompt describing the desired image. This often includes the object or concepts that should be created, the desired artistic style, and the feeling the composition should convey. Whereas, in image-to-image generation, guidance may come as visual information, such as a picture, semantic map, or body pose keypoints. These may aid the generation process with information often difficult to express by natural language prompts, such as the relative positioning of the elements. We show examples of conditioning tasks and modalities in Figure <ref>. The families of AI models utilized in image generation techniques comprise several types of architectures, with the most common being Generative Adversarial Networks (GANs), Diffusion models, and Variational Autoencoders (VAE). GANs <cit.> are built on two components: a generator and a discriminator. The generator learns the underlying distribution of real examples to generate new data, while the discriminator decides whether the input is from the real data space. Through an adversarial training process, the discriminator learns to identify synthetic images, while the generator progressively improves its ability to produce high-quality images that can deceive its counterpart. 
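To make the adversarial game just described concrete, a minimal PyTorch-style sketch is given below; the fully connected architectures, losses, and hyperparameters are illustrative placeholders rather than those of any specific model discussed in this section.

# Minimal sketch of one GAN training step (placeholders, not a specific published model)
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, image_dim) tensor in [-1, 1]
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator update: real images -> label 1, generated images -> label 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output 1 for generated images
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# usage: losses = train_step(torch.rand(32, image_dim) * 2 - 1)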
Numerous GAN variants have been developed in recent years to enhance the performance and stability of image generation <cit.>. Among them, StyleGAN <cit.> allowed for intuitive control over the generated image attributes by modulating the convolutional kernels at different levels of the generator, instead of directly controlling the network input. Its successor, StyleGAN-T <cit.>, builds upon its architecture for text-guided image synthesis. It leverages Contrastive Language-Image Pre-Training <cit.> (CLIP), a powerful text encoder that aligns textual descriptions with corresponding images. Other approaches <cit.> follow a similar path, relying on CLIP to integrate natural language understanding into the image synthesis process. As an alternative to the min-max optimization game of GANs, Diffusion models <cit.> are trained to revert a stochastic diffusion process that progressively adds noise to a target image. To generate new images, the model iteratively denoises the perturbed image at each step, until a high-quality picture is reconstructed. By relying on a deterministic denoising function instead of adversarial learning, their training is more stable and easier to control than GANs. On the other hand, diffusion models rely on multiple network passes to reconstruct samples, constituting a considerably more computationally expensive method than adversarial networks. To improve efficiency, Stable Diffusion <cit.> operates on compressed latent representations instead of pixel space, mapping the denoising function to smaller manifolds. When considering textual prompts, Imagen <cit.> leverages text encoders, such as CLIP, to combine them with multiple cascaded diffusion models to generate high-resolution outputs from text. Similarly, Ramesh et al. <cit.> train a diffusion decoder that produces images from CLIP embeddings extracted from textual prompts. Another prominent family of models is Variational Autoencoders (VAE) <cit.>. Autoencoders follow an encoder-decoder architecture that projects the input data into a low-dimensional latent space and learns to reconstruct the original input from it. VAE, in turn, extends upon autoencoders by adding a probabilistic component to the latent representation. Instead of learning a deterministic encoding for each input, the network learns the parameters of a probabilistic distribution that models the latent space. By doing so, it can sample from the latent space distribution to generate new samples. Constraining the low-dimensional latent space further, VQ-VAE <cit.> uses vector quantization to learn discrete latent variables, which improves the interpretability of the learned concepts and allows for easily manipulating them when generating new compositions. Building on top of the previous technique, DALL-E <cit.> and CogView <cit.> address text-to-image generation by combining the rich representations learned by variations of VQ-VAE with the predictive capabilities of Transformers <cit.>. Both approaches use Transformer modules to predict the best image tokens from the VQ-VAE codebook, given a textual token and previously selected visual tokens. This results in coherent and semantically meaningful synthesized creations. Each of these AI model families has its pros and cons. GANs excel at generating sharp and visually compelling images, but they may suffer from mode collapse and training instability issues when used in large and diverse datasets <cit.>. 
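The iterative denoising described above, which also underlies the computational cost discussed next, can be sketched as a toy example; the linear noise schedule is illustrative, and eps_model stands in for the trained noise-prediction network (in practice a U-Net, possibly text-conditioned).

# Toy sketch of the reverse (denoising) loop of a DDPM-style diffusion model
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x, t):
    # Placeholder for the learned noise predictor eps_theta(x_t, t)
    return np.zeros_like(x)

def sample(shape=(28, 28), rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)                      # start from pure noise x_T
    for t in reversed(range(T)):
        eps = eps_model(x, t)
        # Remove the predicted noise component (posterior mean of x_{t-1})
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                       # add noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

image = sample()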
Diffusion models offer a powerful approach to generating high-quality visual data, but they can be computationally expensive due to their iterative nature. VAEs provide a more straightforward training process with a clear optimization objective, but they may generate less sharp images than GANs and diffusion models. Nonetheless, all of them made significant advancements in the field of image generation, enabling the synthesis of realistic and diverse pictures. As the interest in this area increases, more advances will come, and the realism gap between real and synthetic data will shorten to the point that distinguishing between them will be challenging. This poses numerous problems in assessing the reliability and authenticity of visual content in an increasingly digital world. With this in mind, we discuss existing forensic approaches that may help to identify synthetic creations in the next section. §.§ Synthetic Images Detection In contrast to old-fashioned types of image manipulation, modern synthetic images take their realism to higher standards. Figure <ref> compares classic Digital Forensics cases with those created by generative models, showcasing the remarkable advancement achieved. The level of refinement and potential harm associated with these synthetic images raise concerns about the capability of Digital Forensics to identify such content. However, we anticipate that Locard's exchange principle still holds for synthetic imagery, with forensic traces taking the form of visual inconsistencies and artifacts left by the generation process. Nevertheless, as these models continue to evolve and new forms of counter-forensics attacks emerge, the validity of this claim may be challenged. In this section, our analysis focuses on examining possible traces left by GANs and other generative models. We categorize our study based on the types of forensic evidence utilized for detection, namely Visual Artifacts and Noise Fingerprints. §.§.§ Visual Artifacts Despite the impressive realism of cutting-edge synthetic images, a closer look reveals various aberrational results and visual inconsistencies. Borji <cit.> has presented several image failures that can occur when generating synthetic content, even with recent generative models such as DALL-E 2 <cit.>, Midjourney <cit.>, and StableDiffusion <cit.>. These failures may occur in the background, reflections, lighting, shadows, text, body parts, and objects, as depicted by Figure <ref>. This figure illustrates clues that have been explored by Digital Forensics researchers to detect synthetic content. Farid, for instance, analyzed the 3D illumination <cit.> and geometric <cit.> consistency of structures and objects in a photograph generated by state-of-the-art generative models. His analysis employed classic digital forensics techniques for on-scene illumination and 3D geometric analysis, similar to those used in previous classic forensic works <cit.>. By doing so, Farid showed that while the local structures in the photo may be globally consistent, they exhibit local inconsistencies that serve as valuable clues for forensic analysts. However, as generative models continue to evolve, it is expected that the visual inconsistencies and artifacts observed in synthetic images will eventually become rarer or imperceptible, as demonstrated in the case of the AI-synthetic faces examined by Nightingale and Farid <cit.>. 
This situation asks for other strategies that rely on other types of forensic clues, such as the noise left by the generation processes. §.§.§ Noise Fingerprint As image synthesis is rapidly improving, it is crucial to employ a variety of alternative detectors that explicitly exploit different characteristics of synthetic images. Therefore, as an alternative to visual artifacts, noise-based detectors have been a promising path to expose synthetic content. In this direction, Marra et al. <cit.> investigated statistics-based techniques to detect potential noise fingerprints left by GANs on their generated content. By utilizing photo response non-uniformity (PRNU) analysis, similar to camera attribution methods, they discovered a correlation between residual noise patterns and specific GAN models. Furthermore, Marra et al. demonstrated the feasibility of differentiating between distinct GAN models used for image synthesis through residual noise analysis, enabling GAN model attribution. In a similar study, Mandelli et al. <cit.> revealed that comparable residual noise patterns could be leveraged to identify GAN-generated scientific images, indicating the potential extension of this approach to other image types beyond natural images. These findings emphasize the applicability of residual noise analysis in detecting and identifying synthetic images, contributing to the field of Digital Forensics. Noise signatures allied with visual clues were explored in tandem by Kong et al. <cit.>, showing that combining different evidence might be the way forward in dealing with the challenges of synthetic realities detection. In a more recent investigation, Corvi et al. <cit.> examined the presence of fingerprints left by state-of-the-art generative models, including GAN-based and Diffusion-based models. Their findings reveal that no generative model appears to be completely artifact-free at present. Both GAN-generated and Diffusion-generated images exhibit anomalous periodic patterns in the Fourier spatial domain. However, as highlighted by Gragnaniello et al. <cit.>, such artifacts may be challenging to detect when post-processing operations are applied, such as image resizing and compression. These operations are frequently employed on social media platforms to save storage and speed up sharing, further complicating the identification of these visual irregularities. Besides investigating how post-processing operations impact synthetic image detectors, researchers have also identified the potential for synthetic image detectors to be deceived through counter-forensics attacks. §.§.§ Counter Forensics In a recent study that challenges noise-based detection methods, Osakabe et al. <cit.> developed a GAN model that can generate images without “checkerboard artifacts”, a specific type of artifact in the Fourier domain that is common in synthetic images. They achieved this by incorporating a fixed convolutional layer into every upsampling and downsampling layer of the GAN architecture. Remarkably, their model successfully fooled a detector that previously identified fake images with 92% accuracy, reducing the accuracy to a mere 12%. Similarly, Cozzolino et al. <cit.> demonstrated that synthetic image detectors could be fooled by transferring residual noise fingerprints from real cameras onto GAN-generated images. This process produces a spoofed image that can avoid accurate GAN detectors and camera-model identifiers, causing the image to be misattributed as originating from the transferred camera model. 
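To make the residual-noise fingerprint idea discussed in this subsection (and targeted by the spoofing attacks above) concrete, a minimal sketch is given below; the Gaussian denoiser, the plain correlation score, and the decision threshold are simplifying assumptions, whereas the published approaches rely on stronger denoisers and PRNU-style normalization.

# Minimal sketch of residual-noise fingerprint estimation and matching
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=1.0):
    """High-frequency residual: image minus a smoothed version of itself."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(images):
    """Average residual over many images known to come from the same generator."""
    return np.mean([residual(im) for im in images], axis=0)

def correlation(a, b):
    a = (a - a.mean()).ravel()
    b = (b - b.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# usage with grayscale arrays of equal size:
# fp = estimate_fingerprint(images_from_one_gan)
# score = correlation(residual(test_image), fp)
# attribute test_image to that GAN if score exceeds a calibrated threshold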
As counter-forensic attacks indicate, synthetic image detection presents multiple research challenges to ensure media integrity and prevent images from being used by a malicious actor. §.§ Challenges and Directions As with classic edited images, creating synthetic realities using cutting-edge generative models has sparked ethical debates and raised concerns about their use. Once again, Digital Forensics plays a crucial role in this debate by investigating ways to detect the traces of artificial intelligence techniques left in these images. While most synthetic images can be identified through a close look into scene inconsistencies and object aberrations, as depicted in Figure <ref>, the advancement of image synthesis will inevitably render these visual incongruences invisible to the naked eye. Consequently, researchers also rely on new types of fingerprints inherently left by the generation process, such as specific patterns on the Fourier spatial spectrum and residual noise analysis. Some of these artifacts can be compared to the PRNU noise left by camera sensors. They can aid forensic analysis not only in detecting fake images but also in identifying the specific generative model used to render them (source attribution). However, traces alone provide a vulnerable target for synthetic image detection, as they can easily be manipulated to deceive accurate fake image detectors. Such attacks may involve common post-processing operations or sophisticated techniques like camera noise transference. Therefore, it is imperative for forensic researchers to develop robust techniques capable of detecting and distinguishing these artifacts, even in the presence of common post-processing operations or more sophisticated attacks. A more challenging and socially responsible aspect of forensics involves preventing the harmful applications of synthetic images. Given the ease with which such content can be created and shared, it is essential for forensics researchers to design traceable techniques whose synthetic images can be readily distinguished. Traceable evidence would assist analysts in swiftly identifying the source and author of such content, thereby preventing its widespread dissemination. In this vein, researchers have developed deep learning-based watermarking approaches <cit.> to identify synthesized content. These methods use encoder layers to imbue watermark information in the image pixels without perceptually altering its content. For successfully marking creations, researchers aim to be robust to most online alterations, such as compression, cropping, and intensity changes. Unfortunately, most approaches act from the generator's perspective, either adapting existing models or adding external components in the generation process to enable watermarking. This might be viable for well-established and commercial applications (e.g., Midjourney <cit.>) but will be hardly used with open-sourced models that are trained and distributed by the community. § SYNTHETIC VIDEOS The ability to generate realistic and useful videos holds immense value across various application domains such as entertainment, virtual reality, and education <cit.>. Undoubtedly, video generation techniques have made significant positive contributions in these domains. The advancements have opened up new avenues for creativity, synthetic realities, and immersive experiences. However, it is essential to acknowledge that along with their benefits, these techniques also raise potential security concerns. 
Synthesized realistic videos can be exploited for malicious purposes, such as financial fraud and the dissemination of fake news. Consequently, ensuring the integrity and authenticity of digital content becomes increasingly critical. Generally speaking, video synthesis can be divided into video generation and text-to-video synthesis. Previous methods for video generation mainly employ GANs <cit.> and VAEs <cit.> to generate videos. With the advent of diffusion models, however, recent methods explore them to generate more realistic videos <cit.>. On the other hand, text-to-video synthesis incorporates text information to guide the model in generating video content that is responsive to specific demands. Analogously, previous text-to-video synthesis methods have resorted to GANs <cit.> and diffusion models <cit.>, achieving exceptional generation quality in terms of fidelity, resolution, and temporal consistency. In this section, we review methods for video generation, text-to-video synthesis, and synthetic video detection, and discuss their challenges. We further outline possible future research directions for synthetic video generation and detection techniques. §.§ Video Generation In the pursuit of advancing video synthesis, previous research has extensively explored diverse generative models, including Generative Adversarial Networks (GANs) <cit.>, autoregressive models <cit.>, and implicit neural representations <cit.>. However, recent attention has been drawn to the exceptional achievements of diffusion models in visual data synthesis. Several notable works propose outstanding video generation methods and investigate their practical applications <cit.>. For instance, a pioneering work on diffusion video generation <cit.> primarily focuses on network architecture modifications to extend image synthesis to video. A 3D U-Net is adopted <cit.> and achieves outstanding generation results in two settings, unconditional and text-conditional video generation. For longer video generation, they apply an autoregressive approach, where subsequent video segments are conditioned on the preceding ones. Another example of a diffusion video generation work <cit.> adopts frame-by-frame video generation models. To evaluate different prediction strategies, the authors conduct an ablation study to determine whether predicting the residual of the next frame yields superior results compared to predicting the actual frame. Furthermore, Hoppe et al. <cit.> introduced the Random Mask Video Diffusion (RaMViD) technique, which can be utilized for both video generation and infilling tasks. The unmasked frames are used to enforce conditions on the diffusion process, while the masked frames undergo diffusion through the forward process. By employing this training strategy, RaMViD demonstrates outstanding video generation quality. These recent advancements in diffusion-based video generation highlight the potential of this family of models to push the boundaries of video synthesis, addressing the challenges of generating realistic and diverse video content in synthetic realities. §.§ Text-to-video Synthesis Text-to-video models are highly data-hungry, requiring massive amounts of data to learn caption relatedness, frame photorealism, and temporal dynamics <cit.>. However, video data resources are comparatively more limited in terms of style, volume, and quality. This scarcity of video data poses significant challenges for training text-to-video generation models.
To overcome these challenges, additional controls are often incorporated to enhance the responsiveness of generated videos to user demands <cit.>. Early text-to-video generation models heavily relied on convolutional GAN models combined with Recurrent Neural Networks (RNNs) to capture temporal dynamics <cit.>. Despite the introduction of complex architectures and auxiliary losses, GAN-based models exhibit limitations in generating videos beyond simplistic scenes involving digit movements or close-up actions. To move beyond these limitations, recent work has aimed to extend text-to-video generation to more diverse domains using large-scale transformers <cit.> or diffusion models <cit.>. These approaches provide promising directions for generating more complex and realistic video content by leveraging the expressive power of these advanced network architectures. However, modeling high-dimensional videos and addressing the scarcity of text-video datasets present considerable challenges in training text-to-video generation models from scratch. To tackle this issue, most approaches adopt a transfer learning paradigm, leveraging pre-trained text-to-image models to acquire knowledge and improve performance. For instance, CogVideo <cit.> builds upon the pre-trained text-to-image model CogView2 <cit.>, while Imagen Video <cit.> and Phenaki <cit.> employ joint image-video training techniques to leverage pre-existing visual representations. In contrast, Make-A-Video <cit.> focuses on learning motion solely from video data, reducing the reliance on text-video pairs for training. Another key consideration in video synthesis is the high computational cost associated with generating high-quality videos. To mitigate this issue, latent diffusion has emerged as a popular technique for video generation, as it offers a computationally efficient alternative <cit.>. Various powerful yet computationally efficient methods, such as MagicVideo <cit.>, which introduces a simple adaptor after the 2D convolution layer, and Latent-Shift <cit.>, which incorporates a parameter-free temporal shift module, have successfully utilized latent diffusion for video synthesis. Additionally, PDVM <cit.> adopts a novel approach of projecting the 3D video latent space into three 2D image-like latent spaces, further reducing the computational cost of the video generation process. Despite the active research in text-to-video generation, existing studies have predominantly overlooked the interplay and intrinsic correlation between spatial and temporal modules. These modules play crucial roles in understanding the complex dynamics of videos and ensuring coherent and realistic video generation. §.§ Synthetic Video Detection As with images, detecting synthetic videos relies on the fingerprints left by generative models, which have been explored extensively in the context of synthetic image detection. Existing research in synthetic image detection has shown promise by identifying inconsistencies in illumination and geometric structure <cit.>, as well as specific noise patterns in the Fourier domain <cit.>. However, the extension of image-based detection techniques to videos is still in its early stages. One straightforward approach for extending image-based detection to videos is through frame-level voting, where each frame is individually analyzed and classified as real or synthetic. However, exploiting temporal information, such as temporal coherence, presents a significant challenge.
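As a rough illustration of the frame-level voting baseline just described (a generic sketch rather than a method from the surveyed literature), the snippet below assumes a hypothetical pretrained per-frame detector `frame_detector` that returns the probability of a frame being synthetic; the video-level decision then reduces to aggregating per-frame scores, with the aggregation rule and threshold being design choices.

```python
import numpy as np

def score_video(frames, frame_detector, threshold=0.5):
    """Frame-level voting baseline: score each frame independently with an
    image-level detector, then aggregate the scores over time."""
    scores = np.array([frame_detector(frame) for frame in frames])  # P(synthetic) per frame
    video_score = float(scores.mean())  # mean aggregation; median or top-k max are common variants
    return video_score, video_score > threshold
```

Such a baseline ignores the ordering of frames entirely, which is precisely why temporal cues are a natural next step.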
The temporal domain contains valuable cues that can aid in distinguishing synthetic videos from real ones. For instance, temporal coherence refers to the consistent motion and flow of objects across frames in a real video. Detecting such temporal fingerprints could provide valuable insights into the authenticity of a video. Temporal coherence, which is a significant challenge in the field of video synthesis <cit.>, is therefore also likely to be vital for synthetic video detection. Consequently, temporal inconsistencies can be exploited to identify generated videos. Currently, there is no method specifically designed for detecting synthesized videos. However, there has been relevant work on detecting deepfake videos using generic architectures. For instance, 3DCNNs <cit.>, RNNs <cit.>, LSTMs <cit.>, and temporal transformers <cit.> have been widely employed for video-level deepfake detection. Additionally, some studies focus on detecting deepfake videos by examining specific temporal artifacts like lip movement <cit.>, rPPG artifacts <cit.>, and head pose inconsistency <cit.>. Thus, how to exploit temporal artifacts (e.g., unnatural motion) in generated videos to detect video synthesis remains an intriguing open task. This temporal aspect of video analysis introduces additional complexities compared to image analysis. Generated video content can exhibit various artifacts due to the processing techniques employed, including both handcrafted designs and deep neural networks. These artifacts, including blur, compression artifacts, and noise, can be intentionally or unintentionally injected during the video generation process. Consequently, these artifacts may pose significant obstacles to the detection of synthetic videos, particularly when the detection model is trained solely on high-quality video data. To address this challenge, detection models need to generalize well across data from different domains. The models must be capable of recognizing and adapting to various levels of quality, distortion types, and content sources. This requirement calls for the application of domain generalization techniques, which enable the model to generalize well beyond the training data distribution. By training the model on a diverse range of video data, encompassing different quality levels, distortion types, and content sources, the detection system can become more robust and effective in identifying synthetic videos across a wide range of scenarios. Overall, the extension of image-based detection techniques to videos presents a meaningful yet challenging direction for research. Leveraging temporal fingerprints and addressing the presence of artifacts in generated videos require novel approaches and further exploration. Developing detection models that can effectively analyze and distinguish synthetic videos while being adaptable to various domains will play a crucial role in combating the increasing threat of synthetic videos in today's digital landscape. §.§ Challenges and Directions Recent advancements in diffusion models have revolutionized text-to-video synthesis, achieving remarkable capabilities that surpass previous state-of-the-art approaches and deliver unprecedented generative performance. This breakthrough has significantly enhanced the quality and fidelity of generated videos. However, as we delve deeper into this domain, it becomes evident that there is a need for further research and development.
Despite the demonstrated success of image generation in the past few years, video generation is still in its infancy. As we have discussed, the challenges of video generation mainly lie in the following three aspects: (1) the lack of large-scale, diverse, in-the-wild video datasets; (2) the demanding computational costs; and (3) the difficulty of synthesizing content that is coherent from both spatial and temporal perspectives. In the realm of synthetic video detection, existing methods predominantly focus on identifying face forgery, where the manipulation targets primarily involve facial features and expressions. However, to address the evolving landscape of synthetic videos and their potential threats, it is imperative to explore detection techniques that encompass a broader range of scenes and contexts. Detecting synthetic videos with diverse scenes, objects, and backgrounds constitutes an interesting avenue for future advancements in forensic research. By expanding the scope of detection techniques, we can develop robust and comprehensive methods that effectively identify and mitigate the risks associated with the increasing sophistication of synthetic videos. § SYNTHETIC AUDIO Synthetic realities in audio are an emerging technology transforming how we experience sound. From augmented and virtual reality to interactive audio installations, synthetic realities offer a new dimension to our auditory senses. These immersive audio experiences create a simulated environment that can transport listeners to different worlds, trigger emotions, and enhance storytelling. With the advancements in audio technology and the increasing demand for immersive experiences, synthetic realities are poised to revolutionize the entertainment, gaming, and education industries. While the rise of synthetic audio technology has brought about significant benefits to various fields, it also presents a considerable threat to the integrity of our society. One of the most concerning implications is the potential misuse of this technology by malicious actors, who can exploit it for nefarious purposes such as telecommunication fraud. Cai et al. <cit.> have highlighted the dangers of using generative models to create fake audio that can deceive individuals and organizations, leading to financial losses and reputational damage. The use of synthetic audio in such fraudulent activities underscores the urgent need for developing robust and reliable methods for detecting and mitigating the harms of this technology. In this section, we survey synthetic realities in audio and dive into the possibilities and challenges of this rapidly evolving field. §.§ Synthetic Audio Generation Audio synthesis is a vital and rapidly evolving research area with a wide range of applications, including text-to-speech (TTS), speech enhancement, voice conversion, and binaural audio synthesis. In the field of TTS, previous works have extensively utilized deep learning-based architectures such as WaveNet <cit.> and Clarinet <cit.>, as well as transformer models like FastSpeech <cit.> and Neural TTS <cit.>, and variational autoencoder (VAE) approaches such as MultiSpeech <cit.> and Hierarchical VAE <cit.>. Recently, diffusion models have gained prominence in addressing TTS problems, with notable contributions from WaveGrad <cit.>, DiffWave <cit.>, Gradient Flow <cit.>, and Diffusion TTS <cit.>. Speech enhancement techniques aim to improve speech recognition system performance by mitigating the impacts of ambient noise.
The advancement of generative models has led to the development of various approaches for speech enhancement. These include GAN-based methods like MetricGAN <cit.> and Speech Enhancement GAN <cit.>, as well as diffusion-based models such as Storm <cit.>, Conditional Diffusion <cit.>, and Cold Filter <cit.>. These models have exhibited general and robust speech enhancement performance. Voice conversion, another critical task in speech synthesis, aims to transform the voice of one speaker into that of another. Different approaches have been explored for voice conversion, including transformer-based models (VoiceFilter <cit.>), GAN-based methods (CycleGAN-VC <cit.>, VoiceGAN <cit.>), and VAE-based techniques (ACVAE <cit.> and Neural Voice Cloning <cit.>). These methods facilitate the manipulation of speaker characteristics while maintaining the linguistic content of the speech. Lastly, binaural audio synthesis <cit.> focuses on transforming mono audio signals into binaural audio, which enables accurate sound localization and immersive auditory experiences. By simulating the perception of sound through two ears, binaural audio synthesis contributes to creating a more realistic and interactive auditory environment. Overall, the continuous advancements in deep learning, generative models, and various synthesis techniques have significantly expanded the possibilities and applications of audio synthesis, enhancing the quality, naturalness, and versatility of synthesized speech and audio. §.§ Synthetic Audio Detection Existing methods for synthetic audio detection can be categorized into two different streams: feature-based and image-based methods <cit.>. Feature-based approaches describe the audio through signal features such as Mel frequency cepstral coefficients (MFCC) and constant Q cepstral coefficients (CQCC) <cit.>. These features are then fed into typical classifiers (e.g., support vector machines) <cit.> and deep neural networks <cit.>, which are trained to detect synthetic audio. Image-based methods, on the other hand, compute spectrogram images <cit.> from the audio signal and use them as the input to deep neural networks, which extract discriminative information for synthetic audio detection. The aforementioned techniques have also been widely applied to deepfake detection related to synthetic audio <cit.> (more detail can be found in Section VII). §.§ Challenges and Directions Despite the progress made in synthetic audio detection, there are still challenges to overcome. One of the critical challenges is the limited availability of large datasets of synthetic audio that can be used to train detection models effectively <cit.>. Generating a large dataset of synthetic audio can be time-consuming and resource-intensive. Additionally, the increasing complexity and sophistication of synthetic audio algorithms may require more advanced detection methods that can keep up with these advancements. While there are challenges to overcome, such as the scarcity of large datasets and the need for more advanced detection methods, recent research has shown promising results in developing more effective detection techniques. As synthetic audio technology advances, it is essential to continue developing and improving detection methods to prevent the misuse of deepfake audio and ensure that this technology is used safely and responsibly. § SYNTHETIC TEXT Large Language Models (LLMs) have revolutionized artificial intelligence, marking a significant milestone in the field.
Since the rise of GPT models <cit.>, competitors such as Google and Microsoft have also developed their own LLMs, including Gopher <cit.>, GLaM <cit.>, LaMDA <cit.>, Megatron-Turing NLG <cit.>, Chinchilla <cit.>, PaLM <cit.>, and BloombergGPT <cit.>. Some open-source models have also been released, including Jurassic-1 <cit.>, GPT-J <cit.>, GPT-NeoX <cit.>, BLOOM <cit.>, OPT <cit.>, LLaMA <cit.>, and OpenAssistant <cit.>. Although there are many LLM variants, most of them share the architecture of the GPT-2 model <cit.>, an auto-regressive (decoder-only transformer) model that generates the next word based on previous inputs. Figure <ref> illustrates how these models have reached a quality comparable to that of humans, even in highly complex tasks such as poem writing: both poems seem to be written by an excellent poet. The widespread adoption of LLMs has been observed across diverse domains, including Medicine <cit.>, Journalism <cit.>, and Science <cit.>. These powerful tools possess immense potential to enhance human capabilities in various areas, ranging from code development (e.g., GitHub Copilot[<https://github.com/features/copilot>]) to combating online hate speech and harassment (e.g., Cohere Classify[<https://txt.cohere.com/content-moderation-classify/>]). While they offer numerous benefits, there is also a risk of them being used to produce harmful content, either deliberately by malicious actors or inadvertently due to their inherent flaws. In this section, we will investigate potential flaws and damaging applications of LLMs through a forensic lens. Our exploration will encompass the emergence of threats, machine-generated text detectors, and the underlying research challenges. §.§ Large Language Models Threats Text generation, like any form of machine-generated content, possesses inherent scalability, granting it the power to be employed in both beneficial and detrimental ways. While AI-generated text may still exhibit semantic flaws or hallucinations <cit.>, it has become increasingly difficult to differentiate between human and machine-generated text. This convergence of quality between human and AI-generated text poses a significant concern, particularly in the hands of malicious actors. In a comprehensive study of computer-generated text threat modeling, Crothers et al. <cit.> categorize various types of attacks facilitated by large language models (LLMs). They group these attacks into four primary threats: (1) Facilitating Malware and Social Engineering; (2) Spam and Harassment; (3) Online Influence Campaigns; and (4) Exploiting AI authorship. While we will enumerate some of these threats within this section, it is worth noting that LLMs have opened up a wide range of possibilities for malicious actors, extending beyond the scope of our enumerated list. §.§.§ Facilitating Malware and Social Engineering This threat involves the use of LLMs to facilitate scalable and customizable scams, making them a significant risk in the realm of malware and social engineering <cit.>. By leveraging techniques like fine-tuning and prompt engineering, malicious actors can generate tailored scams that are highly convincing and appealing to specific targets or communities. For instance, by incorporating social media data from a target's profile, such as their interests, lifestyle, and social connections, LLMs can create more sophisticated and personalized scams that manipulate individuals into taking harmful actions or providing sensitive information.
Another notable threat within this category is Data Poisoning. It involves the injection of exploitable data into the training process of LLMs or fine-tuning them with malicious intent. Schuster et al. <cit.> demonstrated a possible attack that poisons code-completion models (e.g., GitHub Copilot), making them include code vulnerabilities in their output, which attackers can later exploit. Such attacks open an important discussion about training datasets, as they are often gathered from the web without any rigorous curation. §.§.§ Spam and Harassment In this threat, trolls and hateful communities weaponize LLMs to propagate toxic content, disseminate misinformation, and target specific communities for harassment. An example of such an attack is the creation of GPT-4chan, as highlighted by Yannic Kilcher in his video “This is the worst AI ever” <cit.>. Kilcher fine-tuned GPT-J using data collected from the /pol/ channel on 4chan, a controversial online forum. The resulting model was used to interact with users on the same channel, reproducing the offensive, nihilistic, and trolling nature that characterizes many /pol/ posts <cit.>. Kilcher's experiment raised the alarm in the scientific community about the ease with which LLMs can be misused and the potential consequences of such actions <cit.>. It emphasized the need for careful consideration and ethical responsibility when deploying and sharing LLMs, as they can be harnessed to amplify harm and propagate hateful ideologies. §.§.§ Online Influence Campaigns The utilization of LLMs for spreading fake news and manipulating public opinion has emerged as a significant concern. Political campaigns, in particular, represent a perilous use case for LLMs. Bai et al. <cit.> have demonstrated the susceptibility of individuals to persuasion on political matters when exposed to tailored messages generated by LLMs. Malicious actors could exploit this phenomenon to influence elections and other democratic processes, with devastating consequences. §.§.§ Exploiting AI authorship An intriguing threat posed by LLMs is the generation of fraudulent academic articles. One may recall the case of SCIgen (2005), in which MIT graduate students developed a system for automatically generating computer science papers, demonstrating the vulnerability of academic conferences to such submissions <cit.>. There is a growing concern that LLMs could be exploited to generate much more sophisticated fake articles than SCIgen, compromising scientific integrity. Research integrity experts fear that paper mills[Potentially illegal organizations that offer ghostwritten fraudulent or fabricated manuscripts <cit.>.] will improve the quality and quantity of their production by using LLMs <cit.>. §.§ Detection Methods A few detection methods have been proposed for LLM-generated content. One of the pioneering approaches is GROVER, proposed by Zellers et al. <cit.>. GROVER was capable of generating fluent and highly realistic fake news articles, which motivated the authors to explore detection techniques for such content. Zellers et al. found that employing the same model used for generating the text achieved higher detection accuracy compared to using a different one. Their results demonstrated an impressive accuracy rate of 92% in detecting LLM-generated fake articles. However, over the past few years, models' ability to generate text has significantly advanced, raising a bigger challenge. One recent approach, DetectGPT, introduced by Mitchell et al.
<cit.>, aims to address this challenge by detecting whether a given passage is generated by a specific model. The method is based on the hypothesis that AI-generated text exhibits a more negative log probability curvature compared to human-written text. To validate this hypothesis, Mitchell et al. proposed an approximation method for estimating the Hessian trace of the log probability function for both model-generated and human-written text, yielding promising results. However, a limitation of their approach is the requirement of knowing the specific generator model, which may not always be feasible in practice. In recent research efforts, there has been a specific focus on ChatGPT-generated text due to the global attention it has received. In <cit.>, Mitrović et al. focused on detecting short texts such as online reviews. They employed a transformer-based model and applied an explanation method (SHAP <cit.>) to gain insights into distinguishing between human-written and machine-generated text. They found that detecting machine-generated text becomes more challenging when it is paraphrased from human text, where a human provides the initial text and asks the model to improve it. Additionally, the authors noted that ChatGPT tends to use uncommon words, exhibits politeness and impersonality, and lacks human-like emotional expressions. Another related study <cit.> explored the ability of AI to paraphrase text using GPT-3 and T5 models. The authors generated machine-paraphrased text and evaluated human performance in detecting these generated texts. The study showed that humans could not accurately detect GPT-3 paraphrased text, with accuracy only slightly above random (53%). In response to concerns about AI-generated text, several proprietary tools have emerged to address the detection of AI-authored content. One such tool is GPTZero, which has gained attention in the media as a promising method for identifying AI-generated text <cit.>. However, we were unable to locate the source code or a scientific article detailing their approach. Similarly, numerous applications have been developed claiming to detect AI-generated text, such as GPTkit[<https://gptkit.ai/>], Illuminarty[<https://illuminarty.ai/en/text/ai-generated-text-detection.html>], OpenAI's AI Text Classifier[<https://platform.openai.com/ai-text-classifier>], and AICheatCheck[<https://www.aicheatcheck.com/>]. However, many of these tools lack comprehensive studies on the reliability of their detection methods. In an effort to facilitate the detection of ChatGPT-generated content, Yu et al. <cit.> released a large dataset specifically designed for the identification of ChatGPT-written abstracts. This dataset includes over 35,000 synthetic abstracts generated by ChatGPT, comprising fully generated texts, polished outputs, and mixtures of human-written and machine-generated abstracts. Additionally, the dataset contains more than 15,000 human-written abstracts for comparison. The results of their detection experiments demonstrated the ability to identify content that was entirely generated by ChatGPT. However, the task becomes more challenging when the generated text is mixed with human-written content. This work provides a valuable dataset and insights into the complexities of detecting machine-generated text, particularly in scenarios involving a combination of human- and AI-authored content. A potential solution to address the misuse of LLMs is the use of text watermarks <cit.>. Grinbaum et al.
<cit.> argue that long machine-generated texts should include a watermark to indicate their source and ensure transparency. In <cit.>, Kirchenbauer et al. propose embedding watermarks by modifying the sampling rules of next-word prediction. They use a hash function and pseudo-random generator to assign random colors (green and red) to words in the vocabulary. During next-word prediction, words from the red list are prohibited from appearing. However, they acknowledge the difficulty of watermarking low-entropy text, as substituting a prohibited red word in such cases could result in poor-quality output with high perplexity. To address this, they suggest a soft rule that encourages substituting red words in high-entropy text. A third party familiar with the hash function and random generator can easily determine the colors of words by computing them. This detection method does not require knowledge of the specific generation model, making it a cheaper and more straightforward approach. Although watermarking shows promise as a solution, it is important to consider that it modifies the output text. The method proposed in <cit.> evaluates quality based on perplexity, but there is a possibility that the meaning and semantics of the output may be altered due to the watermarking process. §.§ Challenges and Directions Detecting machine-generated text versus human-written text poses increasing challenges as LLMs continue to improve their ability to mimic human language <cit.>. Several challenges in this regard are highlighted below: * Generalization: Detection methods often lack generalizability, meaning that a method developed to detect text generated by one specific model may not easily transfer to detecting text generated by another model. However, in real-world scenarios, prior knowledge about the specific model generating the text is typically unavailable. * Mixed reality: Existing detection methods struggle when it comes to identifying machine-generated text that is mixed with human-written text. The combination of both types makes it more difficult to differentiate between them. * Adaptability: LLMs demonstrate high adaptability to given prompts, making it challenging for methods that rely on finding patterns in the generated text. Models can exhibit different personalities[<https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions#system-role>] and respond differently based on prompts, such as ChatGPT's ability to adopt various tones depending on the prompt (e.g., DAN[<https://www.mlyearning.org/dan-chatgpt-prompt/>]), even faking emotions and swear words[<https://metaroids.com/learn/jailbreaking-chatgpt-everything-you-need-to-know/>]. This adaptability further complicates detection efforts. Despite these challenges, the remarkable capabilities of LLMs also present great research opportunities for detecting synthetic text. Some potential directions include: * Differentiating machine-generated from human-written text: Achieving this requires collaborative efforts between humans and machines. Humans can contribute their technical knowledge of how models operate and how humans typically express themselves, facilitating the development of detection methods.
* Attributing the source model of generated text: Just as humans exhibit distinct writing styles, different language models may possess unique traits when generating text. By differentiating text generated by different models, researchers can gain insights into the behaviors of each model and identify each model's fingerprint. * Fact-checking machine-generated text: LLMs often struggle with generating factual content due to limited training data and the prevalence of fictional stories in their training corpus. In addition, after models are trained, the knowledge stored in these models can quickly become outdated. Conducting fact-checking on machine-generated text is crucial to ensure the reliability of AI-generated information. § NERFS AND METAVERSE Neural Radiance Fields (NeRF) have emerged as an effective method for implicit volumetric scene representation, enabling learning from multiple viewing angles <cit.>. NeRF has been successfully applied in various domains, including transparent object grasping <cit.>, scene understanding and reasoning <cit.>, and clear representations in challenging scenarios <cit.>. Recent NeRF variants, such as NeRF-W <cit.> and Ha-NeRF <cit.>, have demonstrated their ability to reconstruct scenes from input with various perturbations. We can also see these developments as an example of synthetic realities, especially when we consider the possibility of totally synthesizing new “worlds”. §.§ NeRF for Metaverse Applications In the context of metaverse applications, NeRF has been utilized in virtual concerts <cit.> and metaverse platforms for architecture and urban planning <cit.>. However, challenges remain, such as the need for real-time rendering of complex scenes with multiple dynamic objects and efficient methods to handle large and challenging scenes. Further research is necessary to address these challenges and to explore new use cases for NeRF in metaverse development. One significant challenge to NeRF's applicability in metaverse development is its reliance on pre-computed camera parameters for scene representation. Several methods, such as NeRF <cit.> and BARF <cit.>, have been proposed to optimize camera parameters and scene representation. However, avoiding interference from undesired scenes during camera parameter optimization remains an unsolved problem. A potential research area for the future is building an occlusion-free scene reconstruction based on inaccurate or even unknown camera parameters, enabling greater flexibility in the use of NeRF for scene representation, leading to more effective applications in computer vision and graphics <cit.>. In a metaverse, where scenes are typically composed of multiple dynamic objects, avatars, and user interactions, occlusion-free scene reconstruction based on NeRF would enable a more thorough scene representation, resulting in more immersive and realistic virtual environments. Moreover, an occlusion-free scene reconstruction based on NeRF that does not rely on accurate camera parameters would enable greater flexibility in metaverse development, allowing designers to create and share virtual spaces more efficiently <cit.>. Finally, optimizing NeRF-based methods for real-time rendering of complex scenes with multiple dynamic objects would enable seamless user interactions in a metaverse, leading to a more responsive and interactive virtual environment <cit.>. 
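To make the representation behind this discussion concrete, NeRF encodes a scene as a learned function mapping 3D positions and viewing directions to colors and volume densities, and renders each pixel by numerically integrating along a camera ray. The sketch below is purely illustrative and not tied to any specific system cited in this section; the trained radiance field `field`, the near/far bounds, and the sample count are assumed placeholders.

```python
import numpy as np

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Schematic NeRF volume rendering along a single camera ray."""
    t = np.linspace(near, far, n_samples)                     # sample depths along the ray
    points = origin + t[:, None] * direction                  # (n_samples, 3) sample positions
    rgb, sigma = field(points, np.broadcast_to(direction, points.shape))  # assumed trained field
    delta = np.append(np.diff(t), 1e10)                       # distance between adjacent samples
    alpha = 1.0 - np.exp(-sigma * delta)                      # opacity of each ray segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + 1e-10)))  # accumulated transmittance
    weights = alpha * trans                                    # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)                # composited pixel color
```

Rendering a full frame repeats this quadrature for every pixel, which is precisely why real-time performance for complex, dynamic metaverse scenes remains demanding.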
In summary, an occlusion-free scene reconstruction based on NeRF has the potential to significantly benefit metaverse applications by enabling more thorough scene representation, flexibility in scene creation and sharing, and real-time performance and scalability. Further research in this area could lead to even more effective applications of NeRF in the context of metaverse development. §.§ Challenges and Limitations Despite the progress achieved thus far, there are also some challenges and limitations to consider when using NeRF in metaverse development. One significant challenge is the reliance on pre-computed camera parameters for scene representation <cit.>, which makes NeRF infeasible when such pre-computation is not possible. This can limit the flexibility of scene creation and sharing. Although some solutions <cit.> have been proposed to optimize camera parameters along with scene representation, avoiding interference from undesired scenes during camera parameter optimization remains an unsolved problem <cit.>. Another limitation of NeRF is its computational cost. NeRF-based methods require significant computational resources and can suffer from slow rendering times <cit.>, limiting their real-time performance for complex scenes. This can be a considerable challenge for metaverse applications, where real-time performance is critical for a seamless user experience. Moreover, NeRF-based methods may not be suitable for all types of scenes. Scenes with complex geometry, occlusions <cit.>, and dynamic objects <cit.> can pose challenges for NeRF-based methods, leading to incomplete scene representation and rendering. Though several image restoration methods <cit.> have been proposed, they are far from being practical solutions for NeRF and its variants. Therefore, it is essential to carefully evaluate the suitability of NeRF-based methods for a given scene and application. Beyond these technical issues, NeRF-based metaverse applications also carry potential adverse broader societal implications. One concern is the potential for NeRF-based metaverse applications to become addictive and negatively impact mental health. The immersive and interactive nature of metaverse environments, combined with the potential for NeRF to create highly realistic and detailed scenes, could create a compelling and addictive experience for users. This could negatively impact mental health <cit.>, leading to addiction, social isolation, and other adverse effects associated with prolonged use of virtual environments. Another potential concern is the impact of NeRF-based metaverse applications on social dynamics and inequality <cit.>. NeRF-based metaverse applications could potentially exacerbate existing social inequalities and create new ones. For example, access to high-quality hardware and internet connectivity could become a barrier to participation in these environments, further marginalizing disadvantaged communities. Lastly, the use of NeRF-based metaverse applications has raised significant concerns from both privacy and forensic perspectives <cit.>. These applications have the potential to collect and store vast amounts of personal data, which could be exploited for targeted advertising, surveillance, and other forms of data mining, leading to further erosion of individual autonomy and privacy. This could result in new forms of digital inequality and harm. Furthermore, the difficulty of collecting and preserving evidence in the NeRF and Metaverse contexts has been discussed in recent literature <cit.>.
Traditional forensic techniques may not be applicable in these virtual environments, where data are decentralized and ownership is often unclear, making the determination of the chain of custody for digital assets within the Metaverse a complex and challenging task. Additionally, the potential for manipulating digital evidence within these environments raises concerns about the reliability and authenticity of such evidence, particularly with the use of deepfakes and synthetic media <cit.>. Therefore, there is a pressing need to develop new forensic techniques and tools to address these challenges and ensure the integrity and reliability of digital evidence in the NeRF and Metaverse contexts. § DEEPFAKES In the context of synthetic realities, one particular example that has attracted utmost attention is deepfakes. Deepfakes are synthetic media that are digitally manipulated to replace one person's identity or personal traits convincingly with those of another. Therefore, when synthetic media involve the replacement of someone's biometric traits, we are referring to a deepfake. Deepfakes are typically present in images, audio samples, and videos. §.§ Deepfake Images §.§.§ Deepfake image generation The issue of falsified image content has been a long-standing problem in the image forensics area. With the emergence of deep learning, numerous powerful learning-based models are able to generate so-called deepfake images with a high level of realism. In recent years, various deepfake techniques have been proposed, including image inpainting/removal, image composition, entire image synthesis, image translation, and text-to-image. Image inpainting/removal is used to fill in image regions with convincing content. Meanwhile, image composition, which encompasses object placement, image blending, image harmonization, and shadow generation, involves cutting out the foreground from one image and pasting it onto another image. Entire image synthesis involves the generation of images entirely by generative models such as GAN <cit.>, VAE <cit.>, and diffusion models <cit.>. Image translation, on the other hand, enables the transfer of an image's style, such as converting a sketch image to a colored image. With the rapid development of diffusion models, the images generated based on text prompts are becoming increasingly realistic. Despite their remarkable quality, deepfake images can be misused for malicious purposes, leading to various security issues such as fake news and fraud. §.§.§ Deepfake image detection Deepfake image detection methods can be broadly classified into two categories: image-level and pixel-level detection. While image-level methods aim to identify the authenticity of the entire input image, pixel-level methods localize the manipulated regions. Traditional methods for detecting image manipulation in image inpainting/removal and image composition rely on capturing artifacts based on prior knowledge, such as lens distortions <cit.>, CFA artifacts <cit.>, noise patterns <cit.>, compression artifacts <cit.>, etc. Learning-based methods have improved the detection performance by capturing noise prints <cit.>, JPEG features <cit.>, high-frequency (HF) artifacts <cit.>, and forgery boundaries <cit.>. Additionally, detecting manipulated images generated through entire image synthesis, image translation, and text-to-image is another challenging problem.
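One family of cues used for fully generated images are spectral fingerprints: generated images often deviate from natural images in how their energy is distributed across spatial frequencies. The snippet below is a minimal, generic sketch of an azimuthally averaged Fourier magnitude profile (assuming only NumPy and a grayscale image array); the resulting 1-D profile, or its high-frequency tail, can then be fed to a lightweight classifier. It is not the implementation of any specific method cited here.

```python
import numpy as np

def radial_spectrum(gray):
    """Azimuthally averaged Fourier magnitude spectrum of a grayscale image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))          # centered magnitude spectrum
    h, w = spec.shape
    y, x = np.indices(spec.shape)
    r = np.hypot(y - h / 2.0, x - w / 2.0).astype(int)         # integer radius per pixel
    energy = np.bincount(r.ravel(), weights=spec.ravel())       # summed magnitude per radius
    count = np.bincount(r.ravel())
    return energy / np.maximum(count, 1)                        # 1-D profile, low to high frequency
```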
Various methods propose to extract visual artifacts <cit.>, color artifacts <cit.>, specific GAN fingerprints <cit.>, and spectral features <cit.> for generated image detection. Nevertheless, these methods have limitations in generalizing across different GANs. To address this issue, more general approaches, such as CNN-based detectors and generalization techniques, have been proposed <cit.>. As image manipulation technology continues to advance, deepfake image detection is an essential field of research to prevent the spread of misinformation and protect the integrity of visual media. §.§.§ Challenges and future work Despite significant progress in deepfake image detection, there are still several challenges that need to be addressed. One of the main challenges is the generalization of deepfake detection models to unseen datasets and scenarios, which is crucial for practical applications. Another challenge is the robustness of these models against anti-forensics techniques such as recapturing and adversarial attacks. Moreover, the industry is now somewhat ahead of academia in terms of deploying deepfake detection technologies in real-world settings (e.g., ChatGPT). This gap can be narrowed by updating existing deepfake databases and creating new, more up-to-date ones, as most databases available to the research community are somewhat outdated. Additionally, many deepfake detection models are not explainable, making it difficult to understand how they make decisions. Future work should aim to develop explainable models that can provide clear and interpretable justifications for their decisions. Overall, addressing these challenges can lead to more reliable and effective deepfake image detection systems in the future. §.§ Deepfake Video §.§.§ Deepfake video generation Nowadays, deepfake videos typically refer to manipulated face videos. Face information plays a vital role in human communication <cit.>. However, the spread of deepfake videos on social media platforms poses tangible and pressing security concerns due to the potential dissemination of disinformation and misinformation. Generally speaking, there are four primary categories of deepfake videos: identity swap, face reenactment, attribute manipulation, and entire synthesis <cit.>. These videos are generated using powerful generative models such as GAN <cit.>, VAE <cit.>, and diffusion models <cit.>, which are capable of producing highly sophisticated videos. Identity swap replaces the original face regions with target faces, while face reenactment transfers the source facial expression to the target one. Attribute manipulation can alter specific facial features like hair, eyeglasses, nose, etc. With the advent of foundation models, entirely synthesized videos can be generated. Powerful deep learning tools have been used to create sophisticated deepfake video datasets like UADFV <cit.>, DF-TIMIT <cit.>, FaceForensics++ <cit.>, DFD <cit.>, DFDC <cit.>, Celeb-DF <cit.>, DF-Forensics-1.0 <cit.>, ForgeryNet <cit.>, FFIW <cit.>, KoDF <cit.>, and FakeAVCeleb <cit.>. As deepfake techniques continue to evolve, it is crucial to develop effective methods for detecting deepfake videos and preventing their malicious use. §.§.§ Deepfake video detection To counteract malicious deepfake attacks, many detection methods have been proposed. Traditional methods mainly focus on hand-crafted features, such as lack of eye-blinking <cit.> and warping artifacts <cit.>. However, these methods are not accurate enough.
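For intuition, the eye-blinking cue mentioned above amounts to tracking how often the eyes close over a clip, since early face-swap pipelines rarely reproduced natural blinking. The sketch below is a simplified illustration, not the cited method itself: it assumes six-point eye landmarks supplied per frame by an external face-landmark detector (e.g., a dlib-style 68-point model), and the eye-aspect-ratio threshold is an illustrative value.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """`eye` holds six (x, y) landmarks in the common p1..p6 ordering."""
    v1 = np.linalg.norm(eye[1] - eye[5])          # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])           # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(eye_landmarks_per_frame, fps, closed_threshold=0.2):
    """Hand-crafted blink cue: an implausibly low blink rate over a clip is suspicious."""
    ears = np.array([eye_aspect_ratio(np.asarray(eye)) for eye in eye_landmarks_per_frame])
    closed = ears < closed_threshold
    blinks = int(np.sum(np.diff(closed.astype(int)) == 1))   # open -> closed transitions
    minutes = len(ears) / float(fps) / 60.0
    return blinks / max(minutes, 1e-6)
```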
Learning-based methods such as convolutional neural networks (CNNs) <cit.>, recurrent neural networks (RNNs) <cit.>, and vision transformers (ViTs) <cit.> have been proposed to achieve more promising detection performance. Afchar et al. <cit.> designed MesoNet and MesoInception4 to detect Deepfake and Face2Face videos automatically. In addition, generic networks such as XceptionNet <cit.>, EfficientNet <cit.>, and CapsuleNet <cit.> have proven effective on deepfake detection tasks. Subsequent works have employed RNNs <cit.> and ViTs <cit.> to further improve forgery detection accuracy. Other methods capture spatial artifacts <cit.>, frequency artifacts <cit.>, and biological signals <cit.> to perform deepfake detection. Follow-up works <cit.> focus on improving the generalization capability and robustness of the models. Temporal information has also been exploited in many deepfake video detection methods based on typical generic networks such as 3DCNN <cit.>, LSTM <cit.>, RNN <cit.>, and ViT <cit.>. Combining spatial and temporal information can achieve more reliable detection and improve the model's generalization capability. §.§.§ Challenges and future work Deepfake video creation and detection have seen significant success in recent years, but many issues remain unresolved. While accurate and secure, deepfake detectors lack interpretability, which limits their applicability in practical scenarios. Localizing forgery regions and forgery frames is also a crucial yet understudied task. Additionally, the two-player nature of face forgery and forgery detection means that attack techniques will continue to become more powerful, thereby calling for more general detection methods. Furthermore, deepfake videos often involve audio manipulation, which is largely overlooked in existing methods. Therefore, more visual-audio joint datasets and multi-modal detectors are expected in future works. §.§ Deepfake Audio §.§.§ Deepfake audio generation Deepfake audio refers to manipulated or synthetic audio created using deep learning techniques. The aim of deepfake audio is to impersonate the speaker's speech characteristics, such as accent, timbre, and intonation, by learning from target voice resources. Traditional methods for audio manipulation involve removing, duplicating, or copying fragments within an audio sample, or pasting and inserting fragments into other audio samples. Deep learning-based speech synthesis makes the generated audio more realistic and difficult to distinguish from real audio. Subsequent models based on likelihood algorithms, such as WaveNet <cit.> and WaveGlow <cit.>, have been developed to perform audio generation. However, these methods often require conditional information and may fail to generate long signal sequences. Recent waveform generative models, such as GAN <cit.> and VAE <cit.>, take advantage of various auxiliary losses, thereby achieving superior generation performance. On the other hand, recent diffusion models (e.g., DiffWave <cit.>) have exhibited remarkable generation performance even in challenging unconditional and class-conditional waveform generation scenarios. In the context of text-to-speech tasks, diffusion models can be classified into acoustic models (e.g., Diff-TTS <cit.>), vocoders (e.g., DiffWave <cit.>), and end-to-end frameworks (e.g., FastDiff <cit.>). The promising results of diffusion models indicate their potential to revolutionize the field of audio generation and synthesis.
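To give a sense of how such diffusion vocoders produce audio, the schematic below sketches standard ancestral DDPM sampling conditioned on a mel-spectrogram. It is a generic illustration under stated assumptions rather than the implementation of any cited system: the noise predictor `eps_model`, its conditioning interface, the noise schedule `betas`, and the waveform length are all placeholders.

```python
import torch

@torch.no_grad()
def sample_waveform(eps_model, mel, betas, length):
    """Schematic ancestral DDPM sampling for a diffusion-based vocoder."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, length)                                  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = eps_model(x, torch.tensor([t]), mel)              # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])            # estimate of the posterior mean
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject noise except at t = 0
    return x
```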
§.§.§ Deepfake audio detection Automatic Speaker Verification (ASV) systems <cit.> currently detect manipulated audio through three tasks: logical access (LA), physical access (PA), and speech deepfake (DF) <cit.>. The LA task focuses on detecting synthetic speech injected into a communication system, while the PA task includes acoustic propagation and real physical factors. The DF task aims to detect deepfake speech circulating on social media platforms. Traditional methods for detecting deepfake audio involve analyzing the spectrogram of the audio and exposing audio inconsistencies, such as abrupt changes in frequency or amplitude. These methods are based upon the assumption that synthetic audio has unique frequency and amplitude patterns. However, recent deep learning-based deepfake techniques make authenticity much harder to assess and call for the design of deep learning countermeasures. Generally speaking, deep learning methods can be categorized into feature-based, image-based, and waveform-based <cit.>. Feature-based methods utilize critical digital signal features, such as Mel-frequency cepstral coefficients (MFCCs) <cit.>, constant Q cepstral coefficients (CQCC) <cit.>, and energy, to detect deepfake audio. Image-based methods use the spectrogram image of the signal to detect inconsistencies. Waveform-based methods aim to analyze the raw waveform of the audio signal instead. To facilitate the development of deepfake audio detection models, numerous databases, such as M-AILABS Speech <cit.>, a GAN-based synthesized audio dataset <cit.>, Half-Truth <cit.>, and H-Voice <cit.>, have been created. Overall, deepfake audio detection is a challenging but important task that requires the constant development and refinement of detection methods. §.§.§ Challenges and future work Despite recent advancements in synthetic audio generation, there are still several challenges that need to be addressed. As generation techniques continue to evolve, deepfake audio will become increasingly difficult for both humans and AI-based detectors to distinguish. Even worse, existing detection methods have shown poor robustness to compression, encoding, and noise. Additionally, current deepfake audio detection methods suffer from insufficient training data and overfitting issues <cit.>, resulting in limited generalization capability. Moreover, most detection methods extract specific features (such as MFCC, CQCC, and energy) to conduct deepfake detection. However, it is challenging to extract appropriate features for specific detection tasks. How to effectively combine various features for more robust detection opens an important research path forward. Last but not least, it is crucial to conduct further research on the ethical implications of deepfake audio, such as its potential for misuse and its impacts on audio professionals. § CONCLUSION AND FINAL THOUGHTS Trust plays a fundamental role in our society. Citizens place their trust in infrastructures, services (including education and health), media (including, nowadays, social networks), the judiciary system (including law enforcement), and political decision-making in general. Democracies are endangered when citizens no longer trust the system and their elected representatives. Unfortunately, trust can be tampered with through influence or disinformation. Although disinformation has probably always existed in human history (e.g., spreading rumors to influence elections), the message's quality and scale were low, restricting its impact.
However, in our contemporary digital world, disinformation (also known as fake news) with far more realistic content spreads at an unprecedented scale on the Internet through alternative media, without any filtering by the traditional mainstream channels. With the advance of Artificial Intelligence (AI) technologies, all sorts of media (text, images, and audio) can be synthetically generated. More specifically, recent generative AI models trained on very large datasets can produce more plausible and realistic content. Distinguishing truth from falsity is becoming even more difficult, and the difference between reality and fiction is getting thinner daily. We are now facing the Era of Synthetic Realities. Disinformation exploits cognitive biases (e.g., anchoring bias, third-person effect, authority bias, and bandwagon effect, to mention a few) <cit.>, which are systematic errors in judgment that humans can make; because of this, synthetic realities represent a threat to our society. Synthetic realities are now generated by criminals and hostile agents for various malicious operations, including political disinformation and state espionage (e.g., fake social network profiles), attacks on national security (e.g., facilitating a military coup), financial fraud (e.g., CEO scam impersonation), blackmail (e.g., ransomfake), defamation (e.g., revenge porn), and plausible deniability of forensic evidence. As human beings, because of cognitive biases, citizens will never stop falling for disinformation. A way to fight this head-on is to create tools that analyze digital content prior to authentication by human experts. We anticipate some factors of utmost importance when developing new solutions. The first one involves exploring the context of a digital asset as much as possible, even with limited training data. The second one involves efforts on robustness and interpretability, as decisions must be intelligible to human beings. The final one consists of being conscious of the incompleteness of individual methods and of orchestrating decision-making fusion methods to combine different telltales for final detection. * Limited training data. Data-driven approaches often rely upon large amounts of training data. This problem may become critical in the rapidly evolving scenario of fake information. New forms of falsification, unknown to the forensic analyst, are proposed daily, preventing the timely collection of all relevant training data. We discuss this issue by posing the problem as an open-set recognition problem <cit.>; that is, we need to define a suitable model for pristine data and analyze false information by looking for inconsistencies with respect to this model. If possible, researchers also need to consider few-shot learning approaches that only require a tiny amount of labeled data to adapt to new threats. * Robustness and Interpretability. Being robust to wide-spectrum and unforeseen conditions is a basic system requirement, but it becomes central in a forensic environment featuring two active players. Robustness to adversarial attacks is paramount nowadays, especially when data-driven methods are applied, in light of the many literature findings that emphasize their vulnerability. Beyond basic solutions, we posit that reliability, supported by interpretable machine learning, is necessary. We advocate for strategies that help us understand why a learning-based system behaves in a certain way and provides the observed answers. Methods should also include semantics and context to support the entire decision-making process.
* Fusion. Combining different methods toward a unified detection framework is very promising. As discussed in prior art for image forgery detection <cit.>, fusion in different learning stages (early, middle, or even late-stage) plays a fundamental role within dynamic and adversarial setups. We envision learning strategies that combine different telltales as a natural way forward. Therefore, some driving research questions involve challenges in: * Detection: is it possible to detect plausible and realistic digital content (e.g., text generated by large-scale language models, synthetic images, and voices generated by generative models)? * Attribution: is it possible to accomplish source attribution by assigning manipulated digital content to a known type of attack vector? * Explainability: is it possible to automatically uncover cues or inconsistencies in digital content to corroborate falsity, as discussed above? * Context and fusion: how to incorporate context? How to combine different telltales? Many other challenges will play out in the coming years as synthetic realities become ever more realistic, directly affecting fundamental pillars of our society, such as democratic values, individual freedom, and social tolerance. In this paper, we strove to discuss some of these challenges and what lies ahead, but it was just the tip of the iceberg. Only an orchestrated effort of government representatives, society at large, and researchers will be able to curb such threats. We believe possible explorations might lie in regulatory acts, education investments, and scientific research for more powerful detection methods. § ACKNOWLEDGEMENT The Swiss, Brazilian and U.S. Governments are authorized to reproduce and distribute reprints for Governmental purposes, notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Swiss and Brazilian governments or of DARPA, AFRL or the U.S. Government. § FINANCIAL SUPPORT This work was supported by the São Paulo Research Foundation (FAPESP) (J.P.C., grant numbers 2020/02211-2, 2017/12646-3), (J.Y., grant numbers 2019/04053-8, 2022/05002-0), (A.R., grant number 2017/12646-3); Blue Sky Research Fund of HKBU (R.W., grant number BSRF/21-22/16); Guangdong Basic and Applied Basic Research Foundation (R.W., grant number 2022A1515110692); Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) (D.M., grant number FA8750-20-2-1004); and the Research Grant Council (RGC) of Hong Kong through Early Career Scheme (ECS) (H.L., grant number 21200522). Also, part of this work within the Idiap Research Institute received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI), the Swiss State of Valais and the City of Martigny. Biographies João Phillipe Cardenuto received a Bachelor's degree in Computer Engineering and Computer Science from the University of Campinas (2019). Currently, he is pursuing a Ph.D. degree at the University of Campinas, Brazil. For his research work, he was honored with the Google Latin America Research Award (LARA) in 2021. He also had the opportunity to collaborate with Google Research as an intern in 2022. His latest works have focused on image provenance analysis and the development of forensic algorithms aimed at detecting doctored scientific images.
His research interests include media forensics, computer vision, machine learning, and scientific integrity. Jing Yang is a Ph.D. student at the Artificial Intelligence Lab. (Recod.ai) and is currently doing a research internship at the Ubiquitous Knowledge Processing (UKP) Lab. She received her Master’s degree in Computer Science from Hunan University, China, and her Bachelor’s in Information and Computing Science from Hubei University of Technology, China. Jing’s research interests include natural language understanding, fact-checking, and forensics. Rafael Padilha is a researcher associated with the Artificial Intelligence Lab. (Recod.ai) at the Institute of Computing, University of Campinas, Brazil. He received his Ph.D. in 2022 from the same university, with a joint research internship at the University of Kentucky, USA. He works for Microsoft Research, innovating on agriculture and sustainability under the Research for Industry team. His research interests lie in computer vision, machine learning, and digital forensics. Renjie Wan received his BEng degree from the University of Electronic Science and Technology of China in 2012 and the Ph.D. degree from Nanyang Technological University, Singapore, in 2019. He is currently an Assistant Professor at Hong Kong Baptist University, Hong Kong. He was an outstanding reviewer for ICCV 2019 and is the recipient of the Microsoft CRSF Award, the VCIP 2020 Best Paper Award, and the Wallenberg-NTU Presidential Postdoctoral Fellowship. Daniel Moreira received a Ph.D. degree in computer science from the University of Campinas, Brazil, in 2016. After working four years as a systems analyst with the Brazilian Federal Data Processing Service (SERPRO), he joined the University of Notre Dame for six years, first as a post-doctoral fellow and later as an assistant research professor. He is currently an assistant professor in the Department of Computer Science at Loyola University Chicago. He is also a member of the IEEE Information Forensics and Security Technical Committee (IFS-TC), 2021-2023 term, and of the IEEE Signal Processing Society Education Center Editorial Board, 2022-2023 term, and an associate editor of the IEEE Transactions on Information Forensics and Security (T-IFS) and Elsevier Pattern Recognition journals. His research interests include media forensics, machine learning, computer vision, and biometrics. Haoliang Li received his Ph.D. degree from Nanyang Technological University (NTU), Singapore, in 2018. He is currently an assistant professor in the Department of Electrical Engineering, City University of Hong Kong. His research mainly focuses on AI security, multimedia forensics, and transfer learning. He received the Wallenberg-NTU Presidential Postdoctoral Fellowship in 2019, a doctoral innovation award in 2019, and the VCIP Best Paper Award in 2020, and was named among the Top 50 Chinese Young Scholars in AI+X in 2022 and Stanford's top 2% most highly cited scientists in 2022. Shiqi Wang received the B.S. degree in computer science from the Harbin Institute of Technology in 2008 and the Ph.D. degree in computer application technology from Peking University in 2014. From 2014 to 2016, he was a Post-Doctoral Fellow with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada. From 2016 to 2017, he was a Research Fellow with the Rapid-Rich Object Search Laboratory, Nanyang Technological University, Singapore. He is currently an Assistant Professor with the Department of Computer Science, City University of Hong Kong.
He has authored or coauthored more than 200 refereed journal articles/conference papers. His research interests include video compression, image/video quality assessment, and image/video search and analysis. He received the Best Paper Award from IEEE VCIP 2019, ICME 2019, IEEE Multimedia 2018, and PCM 2017. His coauthored article received the Best Student Paper Award in the IEEE ICIP 2018. 2pc Fernanda Andaló is a researcher associated with the Artificial Intelligence Lab. (Recod.ai) at the Institute of Computing, University of Campinas, Brazil. Andaló received a Ph.D. in Computer Science from the same university in 2012, during which she was a research fellow at Brown University. She worked for Samsung as a researcher and was a postdoctoral researcher in collaboration with Motorola, from 2014 to 2018. Currently, she works at The LEGO Group devising machine learning solutions for digital products. She was the 2016-2017 Chair of the IEEE Women in Engineering South Brazil Section, and is an elected member of the IEEE Information Forensics and Security Technical Committee. Her research interests include machine learning and computer vision. 2pc Sébastien Marcel (IEEE Senior member) is a senior researcher at the Idiap Research Institute (Switzerland), he heads the Biometrics Security and Privacy group and conducts research on face recognition, speaker recognition, vein recognition, attack detection (presentation attacks, morphing attacks, deepfakes) and template protection. He is also Professor at the University de Lausanne (UNIL) at the School of Criminal Justice and lecturer at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He received his Ph.D. degree in signal processing from Université de Rennes I in France (2000) at CNET, the research center of France Telecom (now Orange Labs). He is also the Director of the Swiss Center for Biometrics Research and Testing, which conducts certifications of biometric products. He is Associate Editor of IEEE Transactions on Biometrics and Identity Science. He was Associate Editor of IEEE Signal Processing Letters, Associate Editor of IEEE Transactions on Information Forensics and Security, a Guest Editor of the IEEE Transactions on Information Forensics and Security Special Issue on “Biometric Spoofing and Countermeasures”, and Co-editor of the IEEE Signal Processing Magazine Special Issue on “Biometric Security and Privacy”. He is also the lead Editor of the Springer Handbook of Biometrics Anti-Spoofing (Editions 1, 2 and 3). 2pc Anderson Rocha is a full-professor for Artificial Intelligence and Digital Forensics at the Institute of Computing, University of Campinas (Unicamp), Brazil. He is the Director of the Artificial Intelligence Lab., Recod.ai, and was the Director of the Institute of Computing for the 2019-2023 term. He is an elected affiliate member of the Brazilian Academy of Sciences (ABC) and the Brazilian Academy of Forensic Sciences (ABC). He is a two-term elected member of the IEEE Information Forensics and Security Technical Committee (IFS-TC) and its chair for the 2019-2020 term. He is a Microsoft Research and a Google Research Faculty Fellow. In addition, in 2016, he has been awarded the Tan Chin Tuan (TCT) Fellowship, a recognition promoted by the Tan Chin Tuan Foundation in Singapore. Finally, he is ranked Top-2% among the most influential scientists worldwide, according to recent studies from Research.com and Standford/PlosOne.
http://arxiv.org/abs/2306.01952v1
20230602232641
Online Control with Adversarial Disturbance for Continuous-time Linear Systems
[ "Jingwei Li", "Jing Dong", "Baoxiang Wang", "Jingzhao Zhang" ]
math.OC
[ "math.OC", "cs.SY", "eess.SY" ]
Online Control with Adversarial Disturbance for Continuous-time Linear Systems Jingwei Li, Jing Dong, Baoxiang Wang, and Jingzhao Zhang =============================================================================== We study online control for continuous-time linear systems with finite sampling rates, where the objective is to design an online procedure that learns under non-stochastic noise and performs comparably to a fixed optimal linear controller. We present a novel two-level online algorithm that integrates a higher-level learning strategy and a lower-level feedback control strategy. This method offers a practical and robust solution for online control, which achieves sublinear regret. Our work provides one of the first nonasymptotic results for controlling continuous-time linear systems with a finite number of interactions with the system. § INTRODUCTION A major challenge in robotics is to deploy simulated controllers in the real world. This process, known as sim-to-real transfer, can be difficult due to misspecified dynamics, unanticipated real-world perturbations, and non-stationary environments. Various strategies have been proposed to address these issues, including domain randomization, meta-learning, and domain adaptation <cit.>. In this work, we provide an analysis of the sim-to-real transfer problem from an online control perspective. Online control focuses on iteratively updating the controller after deployment (i.e., online) based on collected trajectories. Significant progress has been made in this field by applying insights from online learning to linear control problems <cit.>. Following this line of work, we approach the sim-to-real transfer issue for continuous-time linear systems as a non-stochastic control problem, as explored in previous works <cit.>. These studies provide regret bounds for an online controller that lacks prior knowledge of system perturbations. However, a gap remains: no previous analysis has specifically investigated continuous-time systems, even though real-world systems often evolve continuously in time. Existing literature on online continuous control is limited <cit.>. Most continuous control research emphasizes the development of model-free algorithms, such as policy iteration, under the assumption that noise is absent. Recently, <cit.> examined online continuous-time linear quadratic control in the presence of standard Brownian noise, an assumption that may not always hold in real-world applications. This leads us to the crucial question: Is it possible to design an online non-stochastic control algorithm in a continuous-time setting that achieves sublinear regret? Our work addresses this question by proposing a two-level online controller. The higher-level controller represents the policy learning process and updates the policy at a low frequency to minimize regret. Conversely, the lower-level controller delivers high-frequency feedback control input to reduce discretization error. Our proposed algorithm yields regret bounds for continuous-time linear control in the face of non-stochastic disturbances. More importantly, our analyses suggest that online learning algorithms, with potentially nontrivial adaptations, could also benefit continuous-time control problems. We believe this direction holds promising potential for further exploration. 
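The interaction protocol studied in this paper can be made concrete with a short simulation sketch. The plant below is a generic continuous-time linear system integrated on a fine Euler grid between the controller's sampling instants; the specific matrices, the sinusoidal disturbance, and the quadratic stage cost are illustrative assumptions made for this sketch rather than choices taken from the paper.

import numpy as np

def rollout(A, B, controller, disturbance, T=10.0, h=1e-2, n_sub=20):
    """Roll out a continuous-time linear plant  x'(t) = A x + B u + w(t)
    under a controller that is sampled every h time units.  Between two
    sampling instants the control is held constant and the plant is
    integrated on a finer Euler grid of width h / n_sub."""
    d = A.shape[0]
    x = np.zeros(d)
    total_cost, t = 0.0, 0.0
    while t < T:
        u = controller(t, x)                  # sampled feedback, held on [t, t+h)
        for _ in range(n_sub):                # fine-grained plant integration
            dt = h / n_sub
            x = x + dt * (A @ x + B @ u + disturbance(t))
            t += dt
        total_cost += h * (x @ x + u @ u)     # illustrative quadratic stage cost
    return total_cost

# Example: a fixed linear feedback u = -K x against a bounded,
# non-stochastic (here deterministic, sinusoidal) disturbance.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.0]])
w = lambda t: np.array([0.0, 0.1 * np.sin(3.0 * t)])
print(rollout(A, B, lambda t, x: -K @ x, w))

The algorithm developed in the remainder of the paper replaces the fixed feedback in this loop with a learned disturbance-action policy that is updated online.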
§ RELATED WORKS The control theory of linear dynamical systems under the disturbance has been thoroughly examined in various contexts, such as the linear quadratic stochastic control <cit.>, robust control <cit.>, system identification <cit.>. However, most of these problems are investigated in non-robust settings, with robust control being the sole exception where adversarial perturbations in the dynamic are permitted. In this scenario, the controller solves for the optimal linear controller in the presence of worst-case noise. Nonetheless, the algorithms designed in this context can be overly conservative as they optimize over the worst-case noise, a scenario that is rare in real-world applications. We will elaborate on the difference between robust control and online non-stochastic control in Section 3. Online control There has been a recent surge of interest in online control, as demonstrated by studies such as <cit.>. In online control, the player interacts with the environment and updates the policy in each round aiming to achieve sublinear regret. In scenarios with stochastic Gaussian noise, <cit.> has provided the first efficient algorithm with an O(√(T)) regret bound. However, in real-world applications, the assumption of Gaussian distribution is often unfulfilled. <cit.> pioneered research on non-stochastic online control, where the noises can be adversarial. Under general convex cost, they introduced the Disturbance-Action Policy Class. Using an online convex optimization (OCO) algorithm with memory, they achieved an O(√(T)) regret bound. Subsequent studies extended this approach to other scenarios, such as quadratic cost <cit.>, partial observations <cit.> or unknown dynamical systems <cit.>, yielding varying theoretical guarantees like online competitive ratio <cit.>. Online Continuous Control Compared to online control, there has been relatively little research on model-based continuous-time control. Most continuous control literature has focused on developing model-free algorithms such as policy iteration (e.g. <cit.>), typically assume zero-noise. This is because analyzing the system when transition dynamics are represented by differential equations, rather than recurrence formulas, poses a significant challenge. Recently, <cit.> studied online continuous-time linear quadratic control with standard Brownian noise and unknown system dynamics. They proposed an algorithm based on the least-square method, which estimates the system's coefficients and solves the corresponding Riccati equation. However, it should be noted that standard Brownian noise can be quite stringent and may fail in real-world applications. § PROBLEM SETTING In this paper, we consider the online non-stochastic control for continuous-time linear systems. Therefore, we provide a brief overview below and define our notations. §.§ Continuous-time Linear Systems The Linear Dynamical System can be considered a specific case of a continuous Markov decision process with linear transition dynamics. The state transitions are governed by the following equation: ẋ_t=A x_t+B u_t+w_t , where x_t is the state at time t, u_t is the action taken by the controller at time t, and w_t represents the disturbance at time t. We assume access to ẋ_t at each time step, which is dependent on the state, action, and disturbance at time t. We do not make any strong assumptions about the distribution of w_t, and we also assume that the distribution of w_t is unknown to the learner beforehand. 
This implies that the disturbance sequence w_t can be selected adversarially. When the action u_t is applied to the state x_t, a cost c_t(x_t, u_t) is incurred. Here, we assume that the cost function c_t is convex. However, this cost is not known in advance and is only revealed after the action u_t is implemented at time t. In the system described above, an online policy π is defined as a function that maps known states to actions, i.e., u_t = π({x_ξ | ξ∈ [0,t]}). Our goal, then, is to design an algorithm that determines such an online policy to minimize the cumulative cost incurred. Specifically, for any algorithm 𝒜, the cost incurred over a time horizon T is: J_T(𝒜)=∫_0^T c_t(x_t, u_t) dt . In scenarios where the policy is linear (i.e., a linear controller) π(K), such that u_t=-K x_t, we use J(K) to denote the cost of a policy from a certain class K ∈𝒦. §.§ Difference between Robust and Online Non-stochastic Control While both robust and online non-stochastic control models incorporate adversarial noise, it's crucial to understand that their objectives differ significantly. The objective function for robust control, as seen in <cit.>, is defined as: min_u_1max_w_1: Tmin_u_2…min_u_tmax_w_T J_T(𝒜) , Meanwhile, the objective function for online non-stochastic control, as discussed in <cit.>, is: min_𝒜max_w_1: T (J_T(𝒜)-min_K ∈𝒦 J_T(K)) . Note that the robust control approach seeks to directly minimize the cost function, while online non-stochastic control targets the minimization of regret, which is the discrepancy between the actual cost and the cost associated with a baseline policy. Additionally, in robust control, the noise at each step can depend on the preceding policy, whereas in online non-stochastic control, all the noise is predetermined (though unknown to the player). §.§ Assumptions Throughout this paper, we operate under the following assumptions, starting with the initial condition x_0=0. We denote · as the L_2 norm of the vector and matrix. Firstly, we make assumptions concerning the system dynamics and noise: The matrices that govern the dynamics are bounded, meaning A≤κ_A and B≤κ_B, where κ_A and κ_B are constants. Moreover, the perturbation and its derivative are both continuous and bounded: w_t , ẇ_t≤ W, with W being a constant. These assumptions ensure that we can bound the states and actions, as well as their first and second-order derivatives. Next, we make assumptions regarding the cost function: The costs c_t(x, u) are convex. Additionally, if there exists a constant D such that x,u≤ D, then |c_t(x, u)| ≤β D^2,∇_x c_t(x, u),∇_u c_t(x, u)≤ G D. This assumption implies that if the differences between states and actions are small, then the error in their cost will also be relatively small. Finally, we describe our baseline policy class: A linear policy K is (κ, γ)-strongly stable if, for any h > 0 that is sufficiently small, there exist matrices L_h, P such that I + h(A-B K)= P L_h P^-1, with the following two conditions: * The norm of L_h is strictly smaller than unity and dependent on h, i.e., L_h≤ 1-hγ. * The controller and transforming matrices are bounded, i.e., K≤κ and P,P^-1≤κ. This definition ensures the system can be stabilized by a linear controller K. §.§ Regret Formulation To evaluate the designed algorithm, we use regret, which is defined as the cumulative difference between the cost incurred by the policy of our algorithm and the cost incurred by the best policy in hindsight. Let 𝒦 denotes the class of strongly stable linear policy, i.e. 
𝒦={K: K is (κ, γ)-strongly stable}. Then, for an algorithm 𝒜, the regret is defined as follows. Regret(𝒜) = J_T(𝒜)-min _K ∈𝒦 J_T(K) . § ALGORITHM DESIGN In this section, we outline the design of our algorithm and discuss the technical challenges encountered in deriving our main theorem. * First, we discretize the total time period T into smaller intervals of length h. We use the information at each point x_h, x_2h, … and u_h, u_2h, … to approximate the actual cost of each time interval, leveraging the continuity assumption. This process does introduce some discretization errors. * Next, we employ the Disturbance-Action policy (DAC) <cit.>. This policy selects the action based on the current time step and the estimations of disturbances from several past steps. This policy can approximate the optimal linear policy in hindsight when we choose suitable parameters. However, the optimal policy K^∗ is unknown, so we cannot directly acquire the optimal choice. To overcome this, we employ the OCO with memory framework <cit.> to iteratively adjust the DAC policy parameter M_t to approximate the optimal solution M^*. * After that, we introduce the concept of the ideal state y_t and ideal action v_t that approximate the actual state x_t and action u_t. Note that both the state and policy depend on all DAC policy parameters M_1, M_2, …, M_t. Yet, the OCO with memory framework only considers the previous H steps. Therefore, we need to consider ideal state and action. y_t and v_t represent the state the system would reached if it had followed the DAC policy {M_t-H, …, M_t} at all time steps from t-H to t, under the assumption that the state x_t-H was 0. From all the analysis above, we can decompose the regret as four parts: the discretization error, the approximation error of the DAC policy compared to the optimal policy, the regret of the OCO with memory, and the approximation error between the ideal cost and the actual cost. §.§ New Challenges in Online Continuous Control In transitioning online control to continuous systems, we cannot directly apply the methods from <cit.> to our work. We must overcome several challenges: Challenge 1. Unbounded States In a discrete-time system, it is straightforward to demonstrate that the state sequence x_t is bounded using the DAC policy. This can be easily shown by applying the dynamics inequality x_t+1≤ a x_t + b (where a < 1) and the induction method presented in <cit.>. However, for a continuous-time system, a different approach is necessary because we only have the differential equation instead of the state recurrence formula. One naive approach is to use the Taylor expansion of each state to derive the recurrence formula of the state. However, this argument requires the prerequisite knowledge that the states within this neighborhood are bounded by the dynamics, leading to circular reasoning. To overcome this challenge, we employ Gronwall's inequality to bound the first and second-order derivatives in the neighborhood of the current state. We then use these bounded properties, in conjunction with an estimation of previous noise, to bound the distance to the next state. Through an iterative application of this method, we can argue that all states and actions are bounded. Challenge 2. The Ideal Cost Approximation and Discretization Error Trade-off We propose a new definition of a strongly stable policy for continuous-time systems, as the definition used for discrete systems is not immediately applicable. 
In Definition <ref>, we describe a strongly stable policy K that is dependent on a discretization parameter h. It is crucial to understand that the selection of h influences the convergence rate of the ideal cost approximation error, as measured by |c_t(x_t, u_t) - c_t(y_t, v_t)|, resulting in a rate of O(T(1-hγ)^H). On the one hand, choosing an overly small value for h may lead to a slow convergence rate. On the other hand, an excessively large h may cause the discretization error to become unmanageable, being of the order O(hT). Hence, the design of an appropriate discretization parameter presents a significant challenge. Challenge 3. The Curse of Dimensionality Caused by Discretization In discrete-time systems, where the number of states is predetermined, the parameters for the DAC policy and the OCO memory buffer can be selected with relative ease <cit.>. However, in continuous-time systems, the number of states can be inversely proportional to the discretization parameter h, which also determines the size of the OCO memory buffer. Thus, if we set the OCO memory buffer size as H = O(log(T) / h) to attain a sublinear ideal cost approximation error |c_t(x_t, u_t) - c_t(y_t, v_t)|, the associated regret of OCO with memory will be O(√(T) / h^2.5). This regret could become excessively large if h is small enough to allow for minimal discretization error. §.§ Main Algorithm In the subsequent discussion, we use shorthand notation to denote the cost, state, control, and disturbance variables c_ih, x_ih, u_ih, and w_ih as c_i, x_i, u_i, and w_i, respectively. We now introduce our algorithm, which is built upon a two-level controller update approach. Our algorithm employs two controllers, working in concert to enhance the performance of the policy. The higher-level controller implements the OCO with memory to sporadically update the policy, while the lower-level controller offers high-frequency control input to minimize discretization error. Our lower-level controller utilizes the DAC policy. To formally define the DAC policy for continuous systems, we start by dividing the interval into multiple sub-intervals. This division ensures that the current state is influenced only by all disturbance that occur within each time interval, rather than exclusively at the moment of disturbance. Following this property, we introduce our definition of the DAC policy in the continuous system: The Disturbance-Action Policy Class(DAC) is defined as: u_t=-K x_t+∑_i=1^l M^i_t ŵ_t-i , where K is a fixed strongly stable policy, l is a parameter that signifies the dimension of the policy class, M_t = {M^1_t, … , M^l_t} is the weighting parameter of the disturbance at step t, and ŵ_t is the estimated disturbance: ŵ_t = x_t+1 - x_t - h(Ax_t + Bu_t)/h . We note that this estimation of disturbance is readily implementable as it only requires information from the previous state. Furthermore, it counteracts the second-order residue term of the Taylor expansion of x_t, which greatly simplifies the analysis of state evolution. Our higher-level controller adopts the OCO with memory framework. A technical challenge lies in balancing the approximation error and OCO regret. To achieve a low approximation error, we desire the policy update interval H to be inversely proportional to the sampling distance h. However, this relationship lead to large OCO regret. To mitigate this issue, we introduce a new parameter m = Θ(1/h), representing the lookahead window. 
We update the parameter M_t only once every m iteration, further reducing the OCO regret without negatively impacting the approximation error: M_t+1 = { Π_ℳ(M_t-η∇ g_t(M)) if t mod m == 0 , M_t otherwise . . For notational convenience and to avoid redundancy, we denote M_[t/m] = M_t. We can then define the ideal state and action. Due to the properties of the OCO with memory structure, we need to consider only the previous Hm states and actions, rather than all states. As a result, we introduce the definition of the ideal state and action. During the interval t ∈ [im, (i+1)m - 1], the learning policy remains unchanged, so we could define the ideal state and action in the following: The ideal state y_t and action v_t at time t∈ [im, (i+1)m - 1] are defined as y_t = x_t(M_i-H, ..., M_i), v_t=-K y_t+∑_j=1^l M_i^j w_t-i . where the notation indicates that we assume the state x_t-H is 0 and that we apply the DAC policy (M_i-H, …, M_i) at all time steps from t-Hm to t. We can also define the ideal cost in this interval: The ideal cost function during the interval t ∈ [im, (i+1)m - 1] is defined as follows: f_i(M_i-H, …, M_i)=∑_t = im^(i+1)m - 1c_t(y_t(M_i-H, …, M_i), v_t(M_i-H, …, M_i)) . With all the concepts presented above, we are now prepared to introduce our algorithm: § MAIN RESULT In this section, we present the primary theorem of online continuous control regret analysis: theoremmain Under Assumption <ref>, <ref>, a step size of η = Θ(√(m/Th)), and a DAC policy update frequency m = Θ(1/h), Algorithm <ref> attains a regret bound of J_T(𝒜)-min _K ∈𝒦 J_T(K) ≤ O(n(1-hγ)^H/h) + O(√(nh)) + O(Th) . With the sampling distance h = Θ(1/√(T)), and the OCO policy update parameter H = Θ(log(T)), Algorithm <ref> achieves a regret bound of J_T(𝒜)-min _K ∈𝒦 J_T(K) ≤ O(√(T)log(T)) . Theorem <ref> demonstrates a regret that matches the regret of a discrete system <cit.>. Despite the analysis of a continuous system differing from that of a discrete system, we can balance discretization error, approximation error, and OCO with memory regret by selecting an appropriate update frequency for the policy. Here, O(·) and Θ(·) are abbreviations for the polynomial factors of universal constants in the assumption. While we defer the detailed proof to the appendix, we outline the key ideas and highlight them below. Proof Sketch We denote x^∗_t, u^∗_t = K^∗ x^∗_t as the optimal state and action following the policy specified by K^∗ respectively, where K^∗ = max_K ∈𝒦 J_T(K). We use the shorthand c_ih, x_ih, u_ih, and w_ih for c_i, x_i, u_i, and w_i, respectively. Initially, we need to prove Lemma <ref>, confirm that the state x_t and action u_t are bounded by some constant D when using either the DAC policy or the optimal policy. We then discretize and decompose the regret as follows: J_T(𝒜)-min_K ∈𝒦 J_T(K) =∫_0^T c_t(x_t, u_t) dt - ∫_0^T c_t(x^*_t, u^*_t) dt = ∑_i=0^n-1∫_ih^(i+1)h c_t(x_t, u_t) dt - ∑_i=0^n-1∫_ih^(i+1)h c_t(x^∗_t, u^∗_t) dt = h (∑_i=0^n-1 c_i(x_i, u_i) - ∑_i=0^n-1 c_i(x^*_i, u^*_i)) + R_0 , where R_0 represents the discretization error: R_0 = ∑_i=0^n-1∫_ih^(i+1)h (c_t(x_t, u_t) - c_t(x^∗_t, u^∗_t)) dt - h ∑_i=0^n-1(c_i(x_i, u_i) - c_i(x^*_i, u^*_i)) . 
By the discussion in section 4, the first term can be further decomposed as ∑_i=0^n-1 c_i(x_i, u_i) - ∑_i=0^n-1 c_i(x^*_i, u^*_i) = R_1 + R_2 + R_3 , where R_1 = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - f_i(M_i-H, … , M_i)) , R_2 = ∑_i=0^p - 1 f_i(M_i-H, … , M_i) - min_M∈ℳ∑_i=0^p - 1 f_i(M, … , M) , R_3 = min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M)- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) . Thus, we have the regret decomposition as Regret(T) = h(R_1 + R_2 + R_3) + R_0. We then separately upper bound each of the four terms. The term R_0 represents the error caused by discretization, which decreases as the number of sampling points increases and the sampling distance h decreases. This is because more sampling points make our approximation of the continuous system more accurate. Using Lemma <ref>, we get the following upper bound: R_0 ≤ O(hT). The term R_1 represents the difference between the actual cost and the approximate cost. For a fixed h, this error decreases as the number of sample points looked ahead m increases, while it increases as the sampling distance h decreases. This is because the closer adjacent points are, the slower the convergence after approximation. By Lemma <ref> we can bound it as R_1 ≤ O(n(1-hγ)^Hm). The term R_2 is incurred due to the regret of the OCO with memory algorithm. Note that this term is determined by learning rate η and the policy update frequency m. Choosing suitable parameters and using Lemma <ref>, we can obtain the following upper bound: R_2 ≤ O(√(n/h)). The term R_3 represents the difference between the ideal optimal cost and the actual optimal cost. Since the accuracy of the DAC policy approximation of the optimal policy depends on its degree of freedom l, a higher degree of freedom leads to a more accurate approximation of the optimal policy. We use Lemma <ref> and choose l = Hm to bound this error: R_3 ≤ O(n(1-hγ)^Hm). By summing up these four terms and taking m = Θ(1/h), we get: Regret(T) ≤ O(nh(1-hγ)^H/h) + O(√(nh)) + O(hT) . Finally, we choose h = Θ(1/√(T)), m = Θ(1/h), H = Θ(log(T)), the regret is bounded by Regret(T) ≤ O(√(T)log(T)) . § KEY LEMMAS In this section, we will primarily discuss the rationale behind the proof of our key lemmas. Due to space limitations, detailed proofs of these lemmas are provided in the appendix. Bounding the States and Actions First, we need to prove all the states and actions are bounded. lemmadif Under Assumption <ref> and <ref>, choosing arbitrary h in the interval [0, h_0] where h_0 is a constant only depends on the parameters in the assumption, we have for any t and policy M_i, x_t, y_t, u_t, v_t≤ D. x_t - y_t, u_t - v_t≤κ^2(1+κ)(1-hγ)^Hm+1 D. In particular, taking all the M_t = 0 and K = K^*, we obtain the actual optimal solution x^*_t, u^*_t≤ D. The proof of this Lemma mainly use the Gronwall inequality and the induction method. Then we analyze the discretization error of the system. Bounding the Discretization Error Analyzing a continuous system can be arduous; hence, we employ discretization with distance h to facilitate the analysis. lemmadis Under Assumption <ref>, Algorithm <ref> attains the following bound of R_0: R_0 = ∑_i=0^n-1∫_ih^(i+1)h (c_t(x_t, u_t) - c_t(x^∗_t, u^∗_t)) dt - h ∑_i=0^n-1(c_i(x_i, u_i) - c_i(x^*_i, u^*_i)) ≤ 2GDhT . This lemma indicates that the discretization error is directly proportional to the sample distance h. In other words, increasing the number of sampling points leads to more accurate estimation of system. Based on Lemma <ref>, we know that x_t,u_t≤ D. 
By utilizing assumption <ref>, we can deduce that: |c_t(x_t, u_t) - c_ih(x_ih, u_ih)| ≤max_x,u(∇_x c_t(x, u)+∇_u c_t(x, u))(t-ih) ≤ G D (t-ih) . Summing up all these terms, we obtain the bound for the discretization error. Bounding the Difference between Ideal Cost and Actual Cost The following lemma describes the upper bound of the error by approximating the ideal state and action: lemmaapp Under Assumption <ref> and <ref>, Algorithm <ref> attains the following bound of R_1: R_1 = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - f_i(M_i-H, … , M_i)) ≤ nG D^2 κ^2(1+κ)(1-hγ)^Hm+1 . From this lemma, it is evident that for a fixed sample distance h, the error diminishes as the number of sample points looked ahead m increases. However, as the sampling distance h decreases, the convergence rate of this term becomes slower. Therefore, it is not possible to select an arbitrarily small value for h in order to minimize the discretization error R_0. We need to demonstrate that the discrepancy between x_t and y_t, as well as u_t and v_t, is sufficiently small, given assumption <ref>. This can be proven by analyzing the state evolution under the DAC policy. By utilizing Assumption <ref> and Lemma <ref>, we can deduce the following inequality: |c_t(x_t, u_t)-c_t(y_t, v_t)| ≤ |c_t(x_t, u_t)-c_t(y_t, u_t)| + |c_t(y_t, u_t)-c_t(y_t, v_t)| ≤ GD x_t - y_t + GD u_t - v_t . Summing over all the terms and use Lemma <ref>, we can derive an upper bound for R_1. Next, we analyze the regret of Online Convex Optimization (OCO) with a memory term. Bounding the Regret of OCO with Memory To analyze OCO with a memory term, we provide an overview of the framework established by <cit.> in online convex optimization. The framework considers a scenario where, at each time step t, an online player selects a point x_t from a set 𝒦⊂ℝ^d. At each time step, a loss function f_t: 𝒦^H+1→ℝ is revealed, and the player incurs a loss of f_t(x_t-H, …, x_t). The objective is to minimize the policy regret, which is defined as PolicyRegret =∑_t=H^T f_t(x_t-H, …, x_t)-min _x ∈𝒦∑_t=H^T f_t(x, …, x) . In this setup, the first term corresponds to the DAC policy we choose, while the second term is used to approximate the optimal strongly stable linear policy. lemmaoco Under Assumption <ref> and <ref>, choosing m = C/h and η = Θ(m/Th), Algorithm <ref> attains the following bound of R_2: R_2 = ∑_i=0^p - 1 f_i(M_i-H, … , M_i) - min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M) ≤4a/γ√(GDC^2 κ^2(κ+1) W_0 κ_B/γ(GDC κ^2(κ+1) W_0 κ_B/γ+ C^2 κ^3 κ_B W_0 H^2) n/h) . To analyze this term, we can transform the problem into an online convex optimization with memory and utilize existing results presented by <cit.> for it. By applying their results, we can derive the following bound: ∑_t=H^T f_t(x_t-H, …, x_t)-min _x ∈𝒦∑_t=H^T f_t(x, …, x) ≤ O(D √(G_f(G_f+L H^2) T)) . Taking into account the bounds on the diameter, Lipschitz constant, and the gradient, we can ultimately derive an upper bound for R_2. Bounding the Approximation Error of DAC Policy Lastly, we aim to establish a bound on the approximation error between the optimal DAC policy and the unknown optimal linear policy. lemmaopt Under Assumption <ref> and <ref>, Algorithm <ref> attains the following bound of R_3: R_3 = min_M ∈ℳ∑_i=0^p - 1 f_i(M, ... , M)- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) ≤ 3n(1-hγ)^HmGDW_0κ^3a(lh κ_B + 1) . 
The intuition behind this lemma is that the evolution of states leads to an approximation of the optimal linear policy in hindsight, where u_t^* = -K^* x_t if we choose M^* = {M^i}, where M^i = (K- K^*)(I + h(A-B K^*))^i. Although the optimal policy K^* is unknown, such an upper bound is attainable because the left-hand side represents the minimum of M ∈ℳ. § CONCLUSIONS AND FUTURE DIRECTIONS In this paper, we propose a two-level online controller to achieve sublinear regret in online continuous-time control of linear system with adversarial disturbances. The higher-level controller updates the policy using the Online Convex Optimization (OCO) with memory framework at a low frequency to reduce regret, while the lower-level controller employs the DAC policy to approximate the actual state with an idealized setting. Through our analysis, we observe that the regret primarily depends on the time T and the sampling distance h. By selecting suitable sampling distance, we are able to achieve sublinear regret of T. There are several potential directions for future research in online non-stochastic control of continuous-time systems. Firstly, this paper focuses on solving the problem when the dynamics are known. It would be valuable to extend this work to address the case of unknown dynamics, where a trade-off between system identification and regret minimization exists. Secondly, while we assume convexity of the cost function in this paper, it would be interesting to explore whether assuming strong convexity can lead to even smaller regret. Finally, it would be intriguing to shift the focus from regret to the competitive ratio in this setup, as it presents a different perspective on performance evaluation. In the appendix we define n as the smallest integer greater than or equal to T/h, and we use the shorthand c_ih, x_ih, u_ih, and w_ih as c_i, x_i, u_i, and w_i, respectively. § PROOF OF THEOREM <REF> * We denote u^∗_t = K^∗ x^∗_t as the optimal state and action that follows the policy specified by K^∗, where K^∗ = max_K ∈𝒦 J_T(K). We then discretize and decompose the regret as follows: J_T(𝒜)-min_K ∈𝒦 J_T(K) =∫_0^T c_t(x_t, u_t) dt - ∫_0^T c_t(x^*_t, u^*_t) dt = ∑_i=0^n-1∫_ih^(i+1)h c_t(x_t, u_t) dt - ∑_i=0^n-1∫_ih^(i+1)h c_t(x^∗_t, u^∗_t) dt = h (∑_i=0^n-1 c_i(x_i, u_i) - ∑_i=0^n-1 c_i(x^*_i, u^*_i)) + R_0 , where R_0 represents the discretization error. We define p as the smallest integer greater than or equal to n/m, then the first term can be further decomposed as ∑_i=0^n-1 c_i(x_i, u_i) - ∑_i=0^n-1 c_i(x^*_i, u^*_i) = ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - ∑_j=im^(i+1)m - 1 c_i(y_i, v_i)) + ∑_i=0^p - 1∑_j=im^(i+1)m - 1c_i(y_i, v_i) - ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - f_i(M_i-H, … , M_i)) + ∑_i=0^p - 1 f_i(M_i-H, … , M_i) - min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M) + min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M)- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) , where the last equality is by the definition of the idealized cost function (Definition <ref>). Let us denote R_1 = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - f_i(M_i-H, … , M_i)) , R_2 = ∑_i=0^p - 1 f_i(M_i-H, … , M_i) - min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M) , R_3 = min_M ∈ℳ∑_i=0^p - 1 f_i(M, … , M)- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) . Then we have the regret decomposition as Regret(T) = h(R_1 + R_2 + R_3) + O(hT) . We then separately upper bound each of the four terms. 
The term R_0 represents the error caused by discretization, which decreases as the number of sampling points increases and the sampling distance h decreases. This is because more sampling points make our approximation of the continuous system more accurate. Using Lemma <ref>, we get the following upper bound: R_0 ≤ O(hT). The term R_1 represents the difference between the actual cost and the approximate cost. For a fixed h, this error decreases as the number of sample points looked ahead m increases, while it increases as the sampling distance h decreases. This is because the closer adjacent points are, the slower the convergence after approximation. By Lemma <ref> we can bound it as R_1 ≤ O(n(1-hγ)^Hm). The term R_2 is incurred due to the regret of the OCO with memory algorithm. Note that this term is determined by learning rate η and the policy update frequency m. Choosing suitable parameters and using Lemma <ref>, we can obtain the following upper bound: R_2 ≤ O(√(n/h)). The term R_3 represents the difference between the ideal optimal cost and the actual optimal cost. Since the accuracy of the DAC policy approximation of the optimal policy depends on its degree of freedom l, a higher degree of freedom leads to a more accurate approximation of the optimal policy. We use Lemma <ref> and choose l = Hm to bound this error: R_3 ≤ O(n(1-hγ)^Hm). By summing up these four terms and taking m = Θ(1/h), we get: Regret(T) ≤ O(nh(1-hγ)^H/h) + O(√(nh)) + O(hT) . Finally, we choose h = Θ(1/√(T)), m = Θ(1/h), H = Θ(log(T)), the regret is bounded by Regret(T) ≤ O(√(T)log(T)) . § THE EVOLUTION OF THE STATE In this section we will prove that using the DAC policy, the states and actions are uniformly bounded. The difference between ideal and actual states and the difference between ideal and actual action is very small. We begin with expressions of the state evolution using DAC policy: We have the evolution of the state and action: x_t+1 = Q^l+1_h x_t-l + h∑_i=0^2lΨ_t, iŵ_t-i , y_t+1 = h∑_i=0^2HmΨ_t, iŵ_t-i , v_t = -K y_t+h∑_j=1^Hm M_t^jŵ_t-j . where Ψ_t, i represent the coefficients of ŵ_t-i: Ψ_t, i=Q^i_h 1_i ≤ l+h∑_j=0^l Q^j_h B M_t-j^i-j1_i-j ∈[1, l] . Define Q_h= I + h(A-B K). Using the Taylor expansion of x_t and denoting r_t as the second-order residue term, we have x_t+1 = x_t + h ẋ_t + h^2 r_t = x_t + h(Ax_t + Bu_t + w_t) + h^2 r_t . Then we calculate the difference between w_i and ŵ_i: ŵ_t - w_t = x_t+1 - x_t - h(Ax_t + Bu_t + w_t)/h = h r_t . Using the definition of DAC policy and the difference between disturbance, we have x_t+1 = x_t + h(Ax_t + B(-K x_t+h∑_i=1^l M_t^i ŵ_t-i) + ŵ_t - hr_t) + h^2 r_t = (I + h(A - BK)) x_t + h(B h∑_i=1^l M_t^i ŵ_t-i + ŵ_t) = Q_h x_t + h(Bh∑_i=1^l M_t^i ŵ_t-i + ŵ_̂t̂) = Q^2_h x_t-1 + h(Q_h(Bh∑_i=1^l M_t-1^i ŵ_t-1-i + ŵ_t-1)) +h(Bh∑_i=1^l M_t^i ŵ_t-i + ŵ_̂t̂) = Q^l+1_h x_t-l + h∑_i=0^2lΨ_t, iŵ_t-i , where the last equality is by recursion and Ψ_t, i represent the coefficients of ŵ_t-i. Then we calculate the coefficients of w_t-i and get the following result: Ψ_t, i=Q^i_h 1_i ≤ l+h∑_j=0^l Q^j_h B M_t-j^i-j1_i-j ∈[1, l] . By the ideal definition of y_t+1 and v_t(only consider the effect of the past Hm steps while planning, assume x_t-Hm = 0), taking l = Hm we have y_t+1 = h∑_i=0^2HmΨ_t, iŵ_t-i, v_t = -K y_t+h∑_j=1^Hm M_t^jŵ_t-j . * Then we prove the norm of the transition matrix is bounded. We have the following bound of the transition matrix: Ψ_t, i≤ a(lh κ_B + 1) κ^2(1-hγ)^i-1 . 
By the definition of strongly stable policy, we know Q^i_h = (P L_h P^-1)^i = P (L_h)^i P^-1≤PL_h^i P^-1≤ aκ^2 (1-hγ)^i . By the definition of Ψ_t, i, we have Ψ_t, i = Q^i_h 1_i ≤ l+h∑_j=0^l Q^j_h B M_t-j^i-j1_i-j ∈[1, l] ≤κ^2(1-hγ)^i +ah∑_j=1^lκ_B κ^2 (1-hγ)^j (1-hγ)^i-j-1 ≤κ^2(1-hγ)^i +alh κ_B κ^2 (1-hγ)^i-1≤ a(lh κ_B + 1) κ^2(1-hγ)^i-1 , where the first inequality is due to equation <ref>, assumption <ref> and the condition of M_t^i≤ a(1-hγ)^i-1. After that, we can uniformly bound the state x_t and its first and second-order derivative. For any t ∈ [0,T], choosing arbitrary h in the interval [0, h_0] where h_0 is a constant only depends on the parameters in the assumption, we have x_t≤ D_1, ẋ_t≤ D_2, ẍ_t≤ D_3 and the estimatation of disturbance is bounded by ŵ_t≤ W_0. Moreover, D_1, D_2, D_3 are only depend on the parameters in the assumption. We prove this lemma by induction. When t = 0, it is clear that x_0 satisfies this condition. Suppose x_t ≤ D_1, ẋ_t ≤ D_2, ẍ_t ≤ D_3, ŵ_t ≤ W_0 for any t ≤ t_0, where t_0 = kh is the k-th discretization point. Then for t ∈ [t_0, t_0 + h], we first prove that ẋ_t ≤ D_2, ẍ_t ≤ D_3. By Assumption <ref> and our definition of u_t, we know that for any t ∈ [t_0, t_0 + h]. Thus, we have ẋ_t = Ax_t + Bu_t + w_t = Ax_t + B(-Kx_t_0 + h ∑_i=1^l M^i_kŵ_k-i ) + w_t ≤κ_A x_t + κ_B κx_t_0 + h ∑_i=1^l (1-hγ)^i-1 W_0 + W ≤κ_A x_t + κ_B κ D_1 + W_0/γ + W , where the first inequality is by the induction hypothesis ŵ_t ≤ W_0 for any t ≤ t_0 and M^i_k ≤ (1-h γ)^i-1, the second inequality is by the induction hypothesis x_t ≤ D_1 for any t ≤ t_0. For any t ∈ [t_0, t_0 + h], because we choose the fixed policy u_t ≡ u_t_0, so we have u̇_t = 0 and ẍ_t = A ẋ_t + B u̇_t + w_t = A ẋ_t + w_t ≤κ_A ẋ_t + W . By the Newton-Leibniz formula, we have for any ζ∈ [0,h], ẋ_t_0 + ζ - ẋ_t_0 = ∫_0^ζẍ_t_0 + ξ d_ξ . Then we have ẋ_t_0 + ζ ≤ẋ_t_0 + ∫_0^ζẍ_t_0 + ξ d_ξ ≤ẋ_t_0 + ∫_0^ζ (κ_Aẋ_t_0 + ξ + W) d_ξ = ẋ_t_0+ Wζ + κ_A ∫_0^ζẋ_t_0 + ξ d_ξ . By Gronwall inequality, we have ẋ_t_0 + ζ ≤ẋ_t_0+ Wζ + ∫_0^ζ (ẋ_t_0+W ξ)exp(κ_A(ζ - ξ)) d_ξ . Then we have ẋ_t_0 + ζ ≤ẋ_t_0+ Wζ + ∫_0^ζ (ẋ_t_0+W ζ)exp(κ_Aζ)) d_ξ = (ẋ_t_0+ Wζ)(1 + ζexp(κ_Aζ)) ≤(κ_A x_t_0 + κ_B κ D_1 + W_0/γ + W + Wh )(1 + hexp(κ_Ah)) ≤((κ_A+ κ_B κ) D_1 + W_0/γ + W + Wh )(1 + hexp(κ_Ah)) ≤((κ_A+ κ_B κ) D_1 + W_0/γ + 2W ) (1+exp(κ_A)) , where the first inequality is by the relation ξ≤ζ, the second inequality is by the relation ζ≤ h and the bounding property of first-order derivative, the third inequality is by the induction hypothesis and the last inequality is due to h ≤ 1. By the relation ẍ_t ≤κ_Aẋ_t + W, we have ẍ_t_0 + ζ ≤κ_A D_2 + W . So we choose D_3 = κ_A D_2 + W. By the equation <ref>, we have ŵ_t - w_t = x_t+1 - x_t - h(Ax_t + Bu_t + w_t)/h = x_t+1 - x_t - hẋ_t/h = ∫_0^h (ẋ_t+ξ - ẋ_t)dξ/h = ∫_0^h∫_0^ξẍ_t+ζ dζ dξ/h ≤∫_0^h∫_0^ξẍ_t+ζ dζ dξ/h ≤ hD_3 , where in the second line we use the Newton-Leibniz formula, the inequality is by the conclusion ẍ_t ≤ D_3 which we have proved before. By Assumption <ref>, we have ŵ_t ≤ W + hD_3 . Choosing D_3 = κ_A D_2 + W, W_0 = W + h D_3 = W + h(κ_A D_2 + W), we get ẋ_t_0 + ζ ≤ ((κ_A+ κ_B κ) D_1 + W_0/γ + 2W) (1+exp(κ_A)) ≤ ((κ_A+ κ_B κ) D_1 + W + h(κ_A D_2 + W)/γ + 2W) (1+exp(κ_A)) ≤ D_2 (hκ_A/γ (1+exp(κ_A))) + ((κ_A+ κ_B κ) D_1 + (1+h+2γ)W/γ)(1+exp(κ_A))) . Using the notation β_1 = hκ_A/γ (1+exp(κ_A)) , β_2 = ((κ_A+ κ_B κ) D_1 + 2(1+γ)W/γ)(1+exp(κ_A)) . When h < γ/2κ_A(1 + exp(κ_A)), we have β_1 < 1/2. Taking D_2 = 2 β_2 we get ẋ_t_0 + ζ ≤β_1 D_2 + β_2 ≤ D_2 . 
So we have proved that for any t ∈ [t_0, t_0 + h], ẋ_t≤ D_2, ẍ_t≤ D_3, ŵ_t≤ W_0. Then we choose suitable D_1 and prove that for any t ∈ [t_0, t_0 + h], x_t≤ D_1. Using Lemma <ref>, we have x_t+1 = h∑_i=0^tΨ_t, iŵ_t-i . By the induction hypothesis of bounded state and estimation noise in [0, t_0] together with Lemma <ref>, we have x_t+1 ≤ h ∑_i=0^t (lh κ_B + 1) κ^2(1-hγ)^i ( W + h D_3) ≤(lh κ_B + 1) κ^2( W + h D_3)/γ . Then, by the Taylor expansion and the inequality ẋ_t ≤ D_2 , we have for any ζ∈ [0,h], x_t+1 - x_t+ζ = ∫^h_ζẋ_t+ξ dξ≤ (h-ζ) D_2 ≤ hD_2 . Therefore we have x_t+ζ ≤x_t+1 + hD_2 ≤(lh κ_B + 1) κ^2( W + h D_3)/γ + hD_2 = (lh κ_B + 1) κ^2 W(1+h)/γ + hD_2 ((lh κ_B + 1) κ^2 κ_A/γ+1) ≤(l κ_B + 1) 2κ^2 W/γ + hD_2 ((l κ_B + 1) κ^2 κ_A/γ+1) . In the last inequality we use h ≤ 1. By the relation D_2 = β_2 / (1-β_1) and β_1 ≤1/2, we know that D_2 ≤ 2 ((κ_A+ κ_B κ) D_1 + 2(1+γ)W/γ)(1+exp(κ_A)). Using the notation γ_1 = 2 h(κ_A+ κ_B κ)(1+exp(κ_A)) , γ_2 = (l κ_B + 1) 2κ^2 W/γ + 4(1+γ)W/γ(1+exp(κ_A))((l κ_B + 1) κ^2 κ_A/γ+1) . We have x_t+ζ≤γ_1 D_1 +γ_2. From the equation of γ_1 we know that when h ≤1/4(κ_A+ κ_B κ)(1+exp(κ_A)) we have γ_1 ≤1/2. Then we choose D_1 = 2γ_2, we finally get x_t+ζ ≤γ_1 D_1 +γ_2 ≤ D_1 . Finally, set h_0 = min{1, γ/κ_A(1+exp(κ_A)), 1/4(κ_A+ κ_B κ)(1+exp(κ_A))} , By the relationship D_1 = 2γ_2, D_2 = 2β_2, D_3 = κ_A D_2 + W, W_0 = W + hD_3, we can verify the induction hypothesis. Moreover, we know that D_1, D_2, D_3 are not depend on h. Therefore we have proved the claim. The last step is then to bound the action and the approximation errors of states and actions. * By Lemma <ref>, we have Ψ_t, i≤ a(lh κ_B + 1) κ^2(1-hγ)^i-1 . By Lemma <ref> we know that for any h in [0, h_0], where h_0 = min{1, γ/κ_A(1+exp(κ_A)), 1/4(κ_A+ κ_B κ)(1+exp(κ_A))} , we have x_t≤ D_1. By Lemma <ref>, Lemma <ref> and Lemma <ref>, we have y_t+1 = h∑_i=0^2HmΨ_t, iŵ_t-i ≤ h W_0 ∑_i=0^2Hm a(lh κ_B + 1) κ^2 (1-hγ)^i-1 ≤ aW_0(lh κ_B + 1) κ^2/γ = D_1 . Via the definition of x_t, y_t, we have x_t-y_t≤κ^2(1-hγ)^Hm+1x_t-Hm≤κ^2(1-hγ)^Hm+1 D_1 . For the actions u_t= -K x_t+h∑_i=1^Hm M_t^i ŵ_t-i , v_t= -K y_t+h∑_i=1^Hm M_t^i ŵ_t-i , we can derive the bound u_t≤ K x_t+h∑_i=1^HmM_t^iŵ_t-i≤κx_t+W_0 h ∑_i=1^Hm a(1-hγ)^i-1≤κ D_1 + aW_0/γ , v_t≤ K y_t+h∑_i=1^HmM_t^iŵ_t-i≤κy_t+W_0 h ∑_i=1^Hm a(1-hγ)^i-1≤κD_1 + aW_0/γ , u_t-v_t≤ Kx_t-y_t≤κ^3(1-hγ)^Hm+1 D_1 . Taking D = max{D_1, D_1, κ D_1 + W_0/γ, κD_1 + W_0/γ}, we get x_t, y_t, u_t, v_t≤ D. We also have x_t - y_t+ u_t - v_t≤κ^2(1-hγ)^Hm+1 D_1 + κ^3(1-hγ)^Hm+1 D_1 ≤κ^2(1+κ)(1-hγ)^Hm+1 D . In particular, the optimal policy can be recognized as taking the DAC policy with all the M_t equal to 0 and the fixed strongly stable policy K = K^*. So we also have x^*_t, u^*_t≤ D. Now we have finished the analysis of evolution of the states. It will be helpful to prove the key lemmas in this paper. § PROOF OF LEMMA <REF> In this section we will prove the following lemma: * By Assumption <ref> and Lemma <ref>, we have |c_t(x_t, u_t) - c_ih(x_ih, u_ih)| ≤max_x,u(∇_x c_t(x, u)+∇_u c_t(x, u))(t-ih) ≤ GD (t-ih) . Therefore we have | ∑_i=0^n-1∫_ih^(i+1)h c_t(x_t, u_t) dt - h ∑_i=0^n-1 c_i(x_i, u_i)| = |∑_i=0^n-1∫_ih^(i+1)h (c_t(x_t, u_t)- c_ih(x_ih, u_ih))dt | ≤ GD ∑_i=0^n-1∫_ih^(i+1)h (t-ih) dt ≤ GDnh^2 = GDhT . A similar bound can easily be established by lemma <ref> about the optimal state and policy: |∑_i=0^n-1∫_ih^(i+1)h c_t(x^∗_t, u^∗_t) dt - ∑_i=0^n-1 c_i(x^*_i, u^*_i)| ≤ GDhT . Taking sum of the two terms we get R_0 ≤ 2GDhT. 
§ PROOF OF LEMMA <REF> In this section we will prove the following lemma: * Using Lemma <ref> and Assumption <ref>, have the approximation error between ideal cost and actual cost bounded as, |c_t(x_t, u_t)-c_t(y_t, v_t)| ≤|c_t(x_t, u_t)-c_t(y_t, u_t)| + |c_t(y_t, u_t)-c_t(y_t, v_t)| ≤ G D x_t - y_t + G D u_t - v_t ≤ G D^2 κ^2(1+κ)(1-hγ)^Hm+1 , where the first inequality is by triangle inequality, the second inequality is by Assumption <ref>, Lemma <ref>, and the third inequality is by Lemma <ref>. With this, we have R_1 = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - f_i(M_i-H, ... , M_i)) = ∑_i=0^p - 1(∑_j=im^(i+1)m - 1 c_i(x_i, u_i) - ∑_j=im^(i+1)m - 1 c_i(y_i, v_i)) ≤∑_i=0^p - 1∑_j=im^(i+1)m - 1G D^2 κ^2(1+κ)(1-hγ)^Hm+1≤ nG D^2 κ^2(1+κ)(1-hγ)^Hm+1 . § PROOF OF LEMMA <REF> Before we start the proof of Lemma <ref>, we first present an overview of the online convex optimization (OCO) with memory framework. Consider the setting where, for every t, an online player chooses some point x_t ∈ 𝒦⊂ℝ^d, a loss function f_t: 𝒦^H+1↦ℝ is revealed, and the learner suffers a loss of f_t(x_t-H, …, x_t). We assume a certain coordinate-wise Lipschitz regularity on f_t of the form such that, for any j ∈{1, …, H}, for any x_1, …, x_H, x_j ∈𝒦 |f_t(x_1, …, x_j, …, x_H)-f_t(x_1, …, x_j, …, x_H)| ≤ Lx_j-x_j . In addition, we define f_t(x)=f_t(x, …, x), and we let G_f=sup_t ∈{1, …, T}, x ∈𝒦∇f_t(x), D_f=sup _x, y ∈𝒦x-y . The resulting goal is to minimize the policy regret, which is defined as Regret=∑_t=H^T f_t(x_t-H, …, x_t)-min _x ∈𝒦∑_t=H^T f_t(x, …, x) . To minimize this regret, a commonly used algorithm is the Online Gradient descent. By running the Algorithm <ref>, we may bound the policy regret by the following lemma: Let {f_t}_t=1^T be Lipschitz continuous loss functions with memory such that f_t are convex. Then by runnning algorithm <ref> itgenerates a sequence {x_t}_t=1^T such that ∑_t=H^T f_t(x_t-H, …, x_t)-min _x ∈𝒦∑_t=H^T f_t(x, …, x) ≤D^2_f/η+T G_f^2 η+L H^2 η G_f T . Furthermore, setting η=D_f/√(G_f(G_f+L H^2) T) implies that PolicyRegret ≤ 2D_f √(G_f(G_f+L H^2) T) . By the standard OGD analysis <cit.>, we know that ∑_t=H^T f_t(x_t)-min _x ∈𝒦∑_t=H^T f_t(x) ≤D^2_f/η+T G^2 η . In addition, we know by the Lipschitz property, for any t ≥ H, we have |f_t(x_t-H, …, x_t)-f_t(x_t, …, x_t)| ≤ L ∑_j=1^Hx_t-x_t-j≤ L ∑_j=1^H ∑_l=1^jx_t-l+1-x_t-l ≤ L ∑_j=1^H ∑_l=1^j η∇f_t-l(x_t-l)≤ L H^2 η G, and so we have that |∑_t=H^T f_t(x_t-H, …, x_t)-∑_t=H^T f_t(x_t, …, x_t)| ≤ T L H^2 η G . It follows that ∑_t=H^T f_t(x_t-H, …, x_t)-min _x ∈𝒦∑_t=H^T f_t(x, …, x) ≤D^2_f/η+T G_f^2 η+L H^2 η G_f T . In this setup, the first term corresponds to the DAC policy we make, and the second term is used to approximate the optimal strongly stable linear policy. It is worth noting that the cost of OCO with memory depends on the update frequency H. Therefore, we propose a two-level online controller. The higher-level controller updates the policy with accumulated feedback at a low frequency to reduce the regret, whereas a lower-level controller provides high-frequency updates of the DAC policy to reduce the discretization error. In the following part, we define the update distance of the DAC policy as l = Hm, where m is the ratio of frequency between the DAC policy update and OCO memory policy update. Formally, we update the value of M_t once every m transitions, where g_t represents a loss function. M_t+1 = { Π_ℳ(M_t-η∇ g_t(M)) if t % m == 0 M_t otherwise . . 
From now on, we denote M_t = M_tm for the convenience to remove the duplicate elements. By the definition of ideal cost, we know that it is a well-defined definition. By Lemma <ref> we know that y_t+1 =h∑_i=0^2HmΨ_t, iŵ_t-i, v_t =-K y_t+h∑_j=1^Hm M_t^jŵ_t-j , where Ψ_t, i=Q^i_h 1_i ≤ l+h∑_j=0^l Q^j_h B M_t-j^i-j1_i-j ∈[1, l] . So we know that y_t and y_t are linear combination of M_t, therefore f_i(M_i-H, …, M_i)=∑_t = im^(i+1)m - 1c_t(y_t(M_i-H, …, M_i), v_t(M_i-H, …, M_i)) . is convex in M_t. So we can use the OCO with memory structure to solve this problem. By Lemma <ref> we know that y_t and v_t are bounded by D. Then we need to calculate the diameter, Lipchitz constant, and gradient bound of this function f_i. In the following, we choose the DAC policy parameter l = Hm. (Bounding the diameter) We have D_f = sup_M_i, M_j ∈ℳM_i - M_j≤2a/hγ . By the definition of ℳ, taking l = Hm we know that sup_M_i, M_j ∈ℳM_i - M_j ≤∑_k=1^HmM^k_i - M^k_j ≤∑_k=1^Hm 2 a(1-hγ)^k-1 ≤2a/hγ . (Bounding the Lipschitz Constant) Consider two policy sequences {M_i-H…M_i-k…M_i} and {M_i-H…M̂_i-k…M_i} which differ in exactly one policy played at a time step t-k for k ∈{0, …, H}. Then we have that |f_i(M_i-H…M_i-k…M_i)-f_i(M_i-H…M̂_i-k…M_i)| ≤ C^2 κ^3 κ_B W_0 ∑_j=0^HmM^j_i-k - M̂^j_i-k , where C is a constant. By the definition we have y_t-y_t = h∑_i=0^2Hm h∑_j=0^Hm Q^j_h B (M_t-j^i-j - M_t-j^i-j) 1_i-j ∈[1, Hm]ŵ_t-i ≤ h^2 κ^2 κ_B W_0 ∑_i=0^2Hm∑_j=0^HmM_t-j^i-j - M_t-j^i-j1_i-j ∈[1, Hm] ≤ h^2 κ^2 κ_B W_0 m ∑_j=0^HmM^j_i-k - M̂^j_i-k = hC κ^2 κ_B W_0 ∑_j=0^HmM^j_i-k - M̂^j_i-k . Where the first inequality is by Q^j_h≤κ^2 (1-hγ)^j-1≤κ^2 and lemma <ref> of bounded estimation disturbance, the second inequality is by the fact that M_i-k have taken m times, the last equality is by m = C/h. Furthermore, we have that v_t-v_t =-K(y_t-y_t) ≤ hC κ^3 κ_B W_0 ∑_j=0^HmM_i-k^j-M̂_i-k^j . Therefore using Assumption <ref>, Lemma <ref> and Lemma <ref> we immediately get that |f_i(M_i-H…M_i-k…M_i)-f_i(M_i-H…M̂_i-k…M_i)| ≤ C^2 κ^3 κ_B W_0 ∑_j=0^HmM^j_i-k - M̂^j_i-k . (Bounding the Gradient) We have the following bound for the gradient: ∇_M f_t(M … M)_F ≤GDC κ^2(κ+1) W_0 κ_B/γ Since M is a matrix, the ℓ_2 norm of the gradient ∇_M f_t corresponds to the Frobenius norm of the ∇_M f_t matrix. So it will be sufficient to derive an absolute value bound on ∇_M_p, q^[r] f_t(M, …, M) for all r, p, q. To this end, we consider the following calculation. Using lemma <ref> we get that y_t(M … M), v_t(M … M) ≤ D. Therefore, using Assumption <ref> we have that |∇_M_p, q^[r] c_t(M … M)| ≤ G D(∂ y_t(M)/∂ M_p, q^[r]+∂ v_t(M … M)/∂ M_p, q^[r]) . We now bound the quantities on the right-hand side: δ y_t(M … M)/δ M_p, q^[r] =h∑_i=0^2 Hm h∑_j=1^Hm[∂ Q_h^j B M^[i-j]/∂ M_p, q^[r]] ŵ_t-i1_i-j ∈[1, H] ≤ h^2∑_i=r^r+Hm[∂ Q_h^i-r B M^[r]/∂ M_p, q^[r]] w_t-i ≤ h^2 κ^2 W_0 κ_B 1/hγ = h κ^2 W_0 κ_B/γ . Similarly, ∂ v_t(M … M)/∂ M_p, q^[r]≤κδ y_t(M … M)/δ M_p, q^[r]≤κh κ^2 W_0 κ_B/γ≤h κ^3 W_0 κ_B/γ . Combining the above inequalities with f_i(M_i-H, …, M_i)=∑_t = im^(i+1)m - 1c_t(y_t(M_i-H, …, M_i), v_t(M_i-H, …, M_i)) . gives the bound that ∇_M f_t(M … M)_F ≤GDC κ^2(κ+1) W_0 κ_B/γ . Finally we prove Lemma <ref>: * By Lemma <ref> we have R_2 ≤ 2D_f √(G_f(G_f+L H^2) p) By Lemma <ref>, Lemma <ref>, and Lemma <ref> we have R_2 ≤ 2D_f √(G_f(G_f+L H^2) p) ≤ 2 2a/hγ√(GDC κ^2(κ+1) W_0 κ_B/γ(GDC κ^2(κ+1) W_0 κ_B/γ+ C^2 κ^3 κ_B W_0 H^2) n/m) ≤4a/γ√(GDC^2 κ^2(κ+1) W_0 κ_B/γ(GDC κ^2(κ+1) W_0 κ_B/γ+ C^2 κ^3 κ_B W_0 H^2) n/h) . 
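For concreteness, the OCO-with-memory learner invoked in this section can be sketched as follows. The loss, unary-gradient, and projection callbacks are placeholders to be supplied by the user, and this sketch performs a plain per-round update; the variant analyzed above additionally updates only once every m rounds.

import numpy as np

def ogd_with_memory(T, H, eta, x0, grad_unary, loss_with_memory, project):
    """Minimal sketch of online gradient descent for losses with memory.

    grad_unary(t, x)          -- gradient of the unary loss f~_t(x) = f_t(x, ..., x)
    loss_with_memory(t, hist) -- f_t evaluated on the last H + 1 iterates
    project(x)                -- Euclidean projection onto the feasible set K
    """
    x = np.array(x0, dtype=float)
    history = [x.copy() for _ in range(H + 1)]      # x_{t-H}, ..., x_t
    total_loss = 0.0
    for t in range(H, T):
        total_loss += loss_with_memory(t, history)  # suffer f_t(x_{t-H}, ..., x_t)
        x = project(x - eta * grad_unary(t, x))     # descend on the unary surrogate
        history = history[1:] + [x.copy()]          # slide the memory window
    return total_loss

With the step size eta = D_f / sqrt(G_f (G_f + L H^2) T) suggested by the lemma, the accumulated loss of this procedure stays within the stated O(D_f sqrt(G_f (G_f + L H^2) T)) bound of the best fixed point in hindsight.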
§ PROOF OF LEMMA <REF> In this section, we will prove the approximation value of DAC policy and optimal policy is sufficiently small. First, we introduce the following: For any two (κ, γ)-strongly stable matrices K^*, K, there exists M=(M^1, …, M^Hm) where M^i=(K - K^*)(I + h(A-B K^*))^i-1 , such that c_t(x_t(M), u_t(M))- c_t(x^*_t, u^*_t) ≤ GDW_0κ^3 a(lh κ_B + 1)(1-hγ)^Hm . Denote Q_h(K) = I + h(A-BK), Q_h(K^*) = I + h(A-BK^*). By Lemma <ref> we have x^*_t+1=h∑_i=0^tQ^i_h(K^*) ŵ_t-i . Consider the following calculation for i ≤ Hm and M^i=(K - K^*)(I + h(A-B K^*))^i-1: Ψ_t, i(M, …, M) =Q^i_h(K) +h∑_j=1^i Q^i-j_h(K) B M^j =Q^i_h(K) +h∑_j=1^i Q^i-j_h(K) B(K - K^*) Q^j-1_h(K^*) =Q^i_h(K) +∑_j=1^i Q^i-j_h(K) (Q_h(K^*) - Q_h(K)) Q^j-1_h(K^*) =Q^i_h(K^*) , where the final equality follows as the sum telescopes. Therefore, we have that x_t+1(M)=h∑_i=0^HmQ^i_h(K^*) ŵ_t-i + h∑_i=Hm+1^tΨ_t, iŵ_t-i . Then we obtain that x_t+1(M)-x^*_t+1≤ hW_0∑_i=Hm+1^t(Ψ_t, i(M_*) + Q^i_h(K^*)) . Using Definition <ref> and Lemma <ref> we finally get x_t+1(M)-x^*_t+1 ≤ hW_0 (∑_i=Hm+1^t ((lh κ_B + 1) aκ^2(1-hγ)^i-1) + κ^2 (1-hγ)^i) ≤ W_0 (lh κ_B + 2) aκ^2(1-hγ)^Hm . We also have u^*_t-u_t(M) =-K^* x^*_t+K x_t(M)-h∑_i=0^Hm M^i ŵ_t-i = (K-K^*)x^*_t + K(x_t(M) - x^*_t) -h∑_i=0^Hm M^i ŵ_t-i = (K-K^*)h∑_i=0^t-1Q^i_h(K^*) ŵ_t-i + K(x_t(M) - x^*_t) -h∑_i=0^Hm M^i ŵ_t-i = K(x_t(M) - x^*_t) -h∑_i=Hm+1^t-1 (K-K^*)Q^i-1_h(K^*) ŵ_t-i = Kh∑_i=Hm+1^t-1 (Ψ_t, i-Q^i-1_h(K^*))ŵ_t-i -h∑_i=Hm+1^t-1 (K-K^*)Q^i-1_h(K^*) ŵ_t-i = h∑_i=Hm+1^t-1 K^*(Q^i-1_h(K^*)+Ψ_t, i) ŵ_t-i ≤ W_0 κ ((1-hγ)^Hm + a(lh κ_B + 1) κ^2(1-hγ)^Hm) = W_0 κ (a(lh κ_B + 1) κ^2 + 1)(1-hγ)^Hm) , where the inequality is by Definition <ref> and Lemma <ref>. Finally, we have |c_t(x_t(M), u_t(M))-c_t(x^*_t, u^*_t)| ≤ |c_t(x_t(M), u_t(M))-c_t(x^*_t, u_t(M))| + |c_t(x^*_t, u_t(M))-c_t(x^*_t, u^*_t)| ≤ GD |x_t(M) - x^*_t| + GD |u_t(M) - u^*_t| ≤ GDW_0κ^3 a(lh κ_B + 1)(1-hγ)^Hm , where the second inequality is by Assumption <ref>. Then we can prove our main lemma: * By choosing M^i=(K - K^*)(I + h(A-B K^*))^i-1 . We know that M^i=(K - K^*)(I + h(A-B K^*))^i-1≤ 2κ^3 (1-γ)^i-1 . Therefore choose a = 2κ^3 we have M = {M^i} in the DAC policy update class ℳ. Then we have the analysis of the regret: R_3 = min_M∈ℳ∑_i=0^p - 1 f_i(M, ... , M)- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) ≤min_M∈ℳ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x_i(M), u_i(M))- ∑_i=0^p - 1∑_j=im^(i+1)m - 1 c_i(x^*_i, u^*_i) + nκ^2(1+κ)(1-hγ)^Hm+1 D ≤ 3n(1-hγ)^HmGDW_0κ^3 a(lh κ_B + 1) , where the first inequality is by Lemma <ref> and the second inequality is by Lemma <ref>.
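To tie the pieces of the analysis together, the following sketch combines the lower-level DAC action, the disturbance estimate w_hat_t = (x_{t+1} - x_t - h(A x_t + B u_t)) / h, and the higher-level lazy projected-gradient update of the policy parameters M. The plant-step and ideal-cost-gradient callbacks are placeholders (the paper obtains the latter from the linearity of the ideal state and action in M), so this is a structural sketch of the two-level controller rather than a faithful reimplementation.

import numpy as np

def two_level_controller(A, B, K, plant_step, ideal_cost_grad,
                         n_steps, h, l, m, eta, a, gamma):
    """Structural sketch of the two-level online controller.

    plant_step(x, u)             -- advances the real plant by one sampling
                                    interval h and returns the next state
    ideal_cost_grad(t, M, w_hat) -- (sub)gradient of the ideal cost w.r.t. M
    Both callbacks are placeholders supplied by the user.
    """
    d, k = B.shape
    x = np.zeros(d)
    M = np.zeros((l, k, d))                     # DAC weights M^1, ..., M^l
    w_hat = [np.zeros(d) for _ in range(l)]     # most recent estimate first
    for t in range(n_steps):
        # lower level: DAC action from estimated past disturbances
        u = -K @ x + h * sum(M[i] @ w_hat[i] for i in range(l))
        x_next = plant_step(x, u)
        # disturbance estimate  w_hat_t = (x_{t+1} - x_t - h(A x_t + B u_t)) / h
        w_new = (x_next - x - h * (A @ x + B @ u)) / h
        w_hat = [w_new] + w_hat[:-1]
        x = x_next
        # higher level: lazy projected-gradient step, once every m instants
        if t % m == 0:
            M = M - eta * ideal_cost_grad(t, M, w_hat)
            for i in range(l):                  # enforce ||M^i|| <= a (1 - h*gamma)^(i-1)
                radius = a * (1.0 - h * gamma) ** i
                norm = np.linalg.norm(M[i])
                if norm > radius:
                    M[i] *= radius / norm
    return M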
http://arxiv.org/abs/2306.08433v1
20230614110838
"Definition Modeling: To model definitions." Generating Definitions With Little to No Semantics
[ "Vincent Segonne", "Timothee Mickus" ]
cs.CL
[ "cs.CL" ]
"Definition Modeling: To model definitions." Generating Definitions With Little to No Semantics Vincent Segonne and Timothee Mickus ========================================================================================= Definition Modeling, the task of generating definitions, was first proposed as a means to evaluate the semantic quality of word embeddings: a coherent lexical semantic representation of a word in context should contain all the information necessary to generate its definition. The relative novelty of this task entails that we do not know which factors are actually relied upon by a Definition Modeling system. In this paper, we present evidence that the task may not involve as much semantics as one might expect: we show how an earlier model from the literature is both rather insensitive to semantic aspects such as explicit polysemy, as well as reliant on formal similarities between headwords and words occurring in its glosses, casting doubt on the validity of the task as a means to evaluate embeddings. § INTRODUCTION Definition Modeling <cit.> is a recently introduced NLP task that focuses on generating a definition gloss given a term to be defined; most implementations rely on an example of usage as auxiliary input <cit.>. In the last few years, it has been the focus of more than a few research works: datasets have been proposed for languages ranging from Japanese <cit.> to Wolastoqey <cit.>, and DefMod has even been the subject of a recent SemEval shared task <cit.>. Practical applications for DefMod abound, from the generation of lexicographic data for low-resource languages <cit.>, to computer-assisted language learning <cit.>, creating learners' dictionaries <cit.>, and from explaining slang <cit.> to clarifying scientific terminology <cit.>. Yet, it was initially conceived by <cit.> as an evaluation task for word embeddings. 
If a word embedding is a coherent lexical semantic representation, then it ought to contain all the information necessary to produce a coherent gloss. Researchers have kept this semantic aspect firmly in mind: for instance, <cit.> argue that DefMod provides a means for word-sense disambiguation (WSD) applications to dispense with fixed, rigid sense inventories. More broadly, dictionaries in NLP are often used to capture some aspect of semantics. This point bears closer inquiry. One may expect that writing definitions requires some knowledge of the meaning of the headword, but little has been done to confirm this expectation. Here, we focus on empirically verifying what impacts a model's ability to generate valid definitions. As such, our interest lies mostly in examining what factors into the performance of a successful Definition Modeling system, rather than in the engineering aspects of DefMod implementations. We therefore re-purpose the fine-tuning protocol of <cit.> to train a BART model <cit.> to generate definitions, which we subsequently evaluate on infrequent words: as the authors of that protocol have extensively demonstrated the quality of their model on English data, it is suitable for our own endeavor. Our findings suggest that it is possible to generate definitions with little semantic knowledge: our DefMod system, far from manipulating semantic information, mostly relies on identifying morphological exponents and tying them to lexicographic patterns. Semantic aspects of the headword (e.g., its polysemy or frequency) do not appear to weigh on model performances as captured through automatic metrics. In this paper, we follow the same view that DefMod systems should be constructed with potential real-world applications in mind. However, instead of narrowing down our scope to a specific application and adopting a practical, down-to-earth approach, we focus on building a theoretical understanding of the current challenges DefMod systems are faced with. Namely, we study the performances of DefMod systems on rare words, for three reasons. First, the work of lexicographers is more likely to cover frequent words, as many dictionary-makers espouse a data-driven approach to determine which words should be included in general dictionaries, and which should be consigned to specialized dictionaries <cit.>. More generally, modern lexicographic practice relies on corpus evidence <cit.>; words for which such evidence is lacking will therefore not be the primary focus of dictionary-makers.[ As <cit.> state in their introductory paragraph: “Thanks to word-frequency statistics from corpora, lexicographers can better assess what might be considered core vocabulary to be included in dictionaries for language learners, and what rarely used words would be unhelpful to employ in controlled defining vocabularies.” Dictionary websites often include a notice about relying on usage data to select entries, see for instance that of Merriam-Webster (<https://www.merriam-webster.com/help/faq-words-into-dictionary>), or the notice of Dictionary.com (<https://www.dictionary.com/e/getting-words-into-dictionaries/>). ] Second, the rarity of a word also likely entails that readers will be less familiar with it: one can therefore expect that such words would be more likely to require a definition.
Third, focusing on rare words also allows us to consider the DefMod task as an NLU evaluation framework, whereby we test models on harder examples for which human knowledge ought to be required, on top of addressing the concerns of a real-world application for DefMod. § RELATED WORKS There is a broad domain of research that focuses on NLP solutions to lexicography problems and on assessing how suitable they are <cit.>. Conversely, many NLP works have used dictionaries to address semantic tasks, such as hypernym or synonym detection <cit.>, word-sense disambiguation <cit.>, compositional semantics <cit.>, interpretability <cit.>, representation learning <cit.> or word retrieval <cit.>. We more narrowly concern ourselves with definition modeling <cit.>, formulated as a sequence-to-sequence task <cit.>. Our fine-tuning approach is borrowed from <cit.>; note that <cit.> also employed a PLM <cit.>. We refer readers to <cit.> for a more thorough introduction. § MODEL & DATASET Datasets We retrieve data from DBnary <cit.>,[ <http://kaiko.getalp.org/about-dbnary/> ] an RDF-formatted dump of Wiktionary projects.[ <http://wiktionary.org/> ] This source of data has previously been used to build DefMod datasets <cit.>, and is available in multiple languages, a desirable trait for future replication studies. More details are provided in <Ref>. For each term to be defined, we also tabulate its number of occurrences by tallying the number of string matches in a random subset of 5M documents from the deduplicated English Oscar corpus <cit.>. Headword frequency is worth focusing on, for at least two reasons. First, lexicographers are more likely to cover frequent words: dictionary-makers often espouse a data-driven approach to determine whether words should be included in general or specialized dictionaries <cit.>;[ Lack of corpus evidence may also be reason enough for lexicographers to ignore rarer words <cit.>. Dictionaries often rely on usage data to select entries (e.g., <https://www.merriam-webster.com/help/faq-words-into-dictionary>) ] Second, dictionary users should also be less familiar with rarer words, and thus more likely to require definitions. Hence, we set aside definitions where the headword has five or fewer occurrences in our Oscar subset for test purposes only, and further distinguish low-frequency headwords depending on whether they are attested in our Oscar sample. Remaining headwords are then split 80–10–10 between train, validation, and a second held-out test set, so as to also measure models on identically distributed items. As such, we have three test sets, distinguished by the frequency of the headword in our Oscar sample: we note as #=0 the test set comprised of forms unattested in the sample; #≤ 5 corresponds to headwords with five or fewer occurrences; #>5 matches the train and validation set conditions. Model The core of our methodology is borrowed from <cit.>: we fine-tune a generative pretrained language model, namely BART <cit.>, to produce an output gloss given an input example of usage, where the term to be defined is highlighted by means of a pair of special control tokens. We justify our adoption of their methodology by the fact that they report high results, through extensive NLG and WSD evaluation: as such, the approach they propose is representative of successful modern approaches to DefMod, and is suitable for a study such as ours. We refer the reader to their paper and <Ref> for details.
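To make the fine-tuning setup concrete, the sketch below shows how a single (example of usage, gloss) pair could be encoded for a sequence-to-sequence BART model. It is only an illustration: it uses the HuggingFace transformers API rather than the fairseq setup described in the appendix, the control tokens <hw> and </hw> are placeholder names rather than the tokens actually used in our experiments, and the training pair is invented.

# Minimal sketch of the DefMod input format (assumptions: HuggingFace
# transformers instead of fairseq; <hw>/</hw> are placeholder control tokens).
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Register the task-specific control tokens that mark the headword.
tokenizer.add_special_tokens({"additional_special_tokens": ["<hw>", "</hw>"]})
model.resize_token_embeddings(len(tokenizer))

# One illustrative training item (not an item from the DBnary dataset):
# an example of usage with the headword highlighted, paired with a gloss.
example = "The floor was <hw> clacky </hw> under her heels."
gloss = "Producing a clacking sound."

inputs = tokenizer(example, return_tensors="pt")
labels = tokenizer(text_target=gloss, return_tensors="pt").input_ids

# A single teacher-forced training step would minimise this loss.
loss = model(**inputs, labels=labels).loss
print(float(loss))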
We expect DefMod systems to be sensitive to the variety of examples of usage and to the number of target glosses: more examples of usage should lead to higher performances, whereas not exposing the model to polysemy should be detrimental. This can be tested by down-sampling the training set, so as to select either one gloss per headword (1G) or all of them (∀G), and either one example of usage per gloss (1E) or all of them (∀E). This leads us to define four related models: ∀G∀E, ∀G1E, 1G∀E, and 1G1E.[ Using this notation, 1G∀E means that, for a given headword, we randomly selected one gloss with all its corresponding examples; for ∀G1E, all glosses were considered but with only one randomly selected example for each. ] § IMPACT OF FREQUENCY, POLYSEMY AND CONTEXTUAL DIVERSITY Corresponding results in terms of BLEU, shown in <Ref>, are in line with similar results on unseen headwords, e.g. in <cit.>.[ We observed similar patterns with most widely-used automatic NLG metrics, and focus on BLEU in the present article for brevity. Nonetheless, see e.g. <cit.> for a discussion of the limitations of this metric. ] They also highlight a strikingly consistent behavior across all four configurations: Mann-Whitney U tests stress that we do not observe lower performances for rarer words, as one would naively expect, except in a few cases (∀G∀E, ∀G1E and 1G1E models, when comparing unattested and rare headwords) with relatively high p-values given the sample sizes (p > 0.01 always). Another way to stress the lack of effect related to explicit polysemy or contextual diversity consists in correlating BLEU scores across models: comparing the BLEU scores obtained by one model (say the ∀G∀E) to those of another model (e.g., the 1G1E model) indicates whether they behave differently or whether BLEU scores are distributed in roughly the same fashion. We systematically observe very high Pearson coefficients (0.82 < r < 0.90). In other words, definitions that are poorly handled by any single model will in all likelihood be poorly handled by all other models, and definitions that are easy for any single model will be easy for all other models. We provide a breakdown per split and per model in <Ref>, <Ref>. § DIGGING FURTHER: MANUAL EVALUATION To better understand model behavior, we sample 50 outputs of the ∀G∀E model, per BLEU quartile, for the validation split and our three test splits. We then annotate these 800 items as follows. §.§ Annotation scheme Sample items for all annotations are provided in <Ref>. Fluency (FL) measures whether the output is free of grammar or commonsense mistakes; an output riddled with such mistakes is rated with an FL of 1, and a flawless one is rated 5. Factuality (FA) consists in ensuring that generated glosses contain only and all the facts relevant to the target senses. Hence an output conveying none of the relevant facts, such as the one generated for the headword unsatined, is annotated with an FA of 1, whereas a fully accurate output, such as the one generated for the headword flaglet, is rated with an FA of 5. PoS-appropriateness (PA) A PoS-appropriate output defines its headword using a phrase that matches its part of speech, e.g., defining adjectives with adjectival phrases and nouns with noun phrases. As such, the production for the adjective headword fried is PoS-inappropriate, while the production for the verb unsubstantiate has a PA of 1. Pattern-based construction (PB) An output is said to display a pattern-based construction whenever it contains only words that are semantically tenuous or morphologically related to the headword.
The headword clacky thus yields a PB output, and the headword fare yields a non-PB production.[ FA and FL are on a 5-point scale, PA and PB are binary. ] §.§ Results of the manual evaluation When looking at all 800 annotations, we find that outputs tend to be fluent (average FL of 4.37) and overwhelmingly PoS-appropriate (about 95%). They frequently involve patterns (36.5% of productions are PB), which often involve a straight copy of the headword (10% of all productions). On the other hand, factuality is lacking (average FA of 2.69). Inter-annotator agreement To quantify how consensual our annotations are, we randomly sample 200 items for dual annotation. Results in <Ref> highlight that, while the two annotators have different sensibilities as to the magnitude of the mistakes in FL and FA (as shown by the low κ), relative judgments on fluency and factuality are consistent (as shown by ρ and r). Hence, we z-normalize FA and FL in the rest of this analysis. Effects of patterns Mann-Whitney U-tests on FA and FL annotations show that non-pattern-based outputs are statistically rated with lower FL (p < 3 · 10^-6, common language effect size f = 42.3%) and lower FA (p < 2 · 10^-9, f = 37.7%) than pattern-based definitions, despite no significant difference in BLEU scores (p = 0.262). On the other hand, BLEU scores are correlated with FL and FA ratings (Spearman ρ = 0.094 and ρ = 0.276 respectively). In sum, the morphologically complex nature of a headword drives much of the behavior of our DefMod system. While BLEU captures some crucial aspects we expect to be assessed in DefMod, it is still impervious to this key factor. To further confirm that patterns are indeed crucial to a DefMod system's performance, we train a model on data where headwords have been removed from examples of usage, keeping the surrounding control tokens. This in effect creates a 2-token sentinel for which the decoder must generate a gloss, and deprives the model of information about headword form. BLEU scores drastically drop with this ablated train set, as shown in <Ref>. We also find unattested headwords yielding statistically lower BLEUs than rare headwords, which in turn yield lower BLEUs than the other two splits (Mann-Whitney U tests, p < 10^-7). Frequency and polysemy We now return to polysemy and word frequency. We consider as an indicator of word polysemy the number of definitions for that headword present in our corpus, whereas we rely on our Oscar sample to derive frequency counts. Frequency and definition counts appear to be highly correlated (Spearman ρ = 0.406), and both also anti-correlate with PB (ρ=-0.1143 and ρ=-0.111 respectively), i.e., rare, monosemous words are defined by the model with patterns (that is, they are likely morphologically complex). We also observe an anticorrelation between FL and definition count (Spearman ρ=-0.105), which could be explained by the fact that patterns tend to yield more fluent outputs, as we just saw; however, as we do not observe a correlation between frequency and FL, the interaction between FL and polysemy (as measured by definition count) is likely not so straightforward.[ Neither do we observe any correlation with FA or PA. ] Finally, BLEU scores correlate with neither word frequency nor definition counts, which strengthens our claim that this DefMod system makes limited use of semantic information to generate glosses, if at all. Alternatives to BLEU These annotations leave one question unanswered: is BLEU an adequate means of measuring DefMod productions?
In <Ref>, we compare the Spearman correlation coefficient of various NLG metrics with our FA and FL annotations. Most NLG metrics do not correlate with fluency ratings: we posit this is due to the overwhelming majority of highly fluent productions in our sample. As for BLEU, it does not produce the highest (anti-)correlations: these are instead attested with BertScore for FL and ROUGE-L for FA. Lastly, Mann-Whitney U tests comparing metrics with respect to PB annotations indicate that most of these are not sensitive to the presence or absence of a pattern, with the exception of chrF (f=0.43) and TER (f=0.42). In all, our annotated sample suggests that most NLG metrics display a behavior similar to BLEU: they capture factuality to some extent, but not the importance of patterns. § CONCLUSIONS In this work, we have presented how an earlier Definition Modeling system was able to achieve reasonable performances and produce fluent outputs, although their factual validity leaves much to be desired. This behavior is almost entirely due to morphologically complex headwords, for which the model is often able to derive reasonable glosses by decomposing the headword into a base and an exponent, and mapping the exponent to one of a limited set of lexicographic patterns. The model we studied seems more sensitive to formal traits than to explicit accounts of polysemy. There are numerous limitations to this work: we focused on one specific fine-tuning approach for one specific English PLM. Nonetheless, we have shown that models can achieve reasonable performances on DefMod without relying on semantics, casting doubt on the task's usefulness for word embedding evaluation, as initially suggested by <cit.>. In other words: using lexicographic data as inputs for an NLP model does not ensure that it will pick up on the semantic aspects contained therein. § ACKNOWLEDGMENTS We thank IWCS and ∗SEM reviewers for their insightful comments on this work. This work is part of the FoTran project, funded by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation program (agreement № 771113). We also thank the CSC-IT Center for Science Ltd. for computational resources. § HYPERPARAMETERS Models are implemented in fairseq <cit.>. We used the pretrained BART checkpoint and followed the instructions on the GitHub repository for fine-tuning BART on the summarization task.[ <https://github.com/facebookresearch/fairseq/blob/main/examples/bart/README.summarization.md> ] We used the same parameters except for the learning rate, which, after some experiments, was set to 5 · 10^-6. For every configuration (∀G∀E, ∀G1E, 1G∀E, 1G1E) we kept the model with the best loss on the validation dataset. § DATA PREPROCESSING In the present work, we retrieve definition glosses (i) associated with an example of usage and (ii) where the term to be defined is tagged as a noun, adjective, verb, adverb or proper noun. Like prior work, we also consider MWEs as potential terms to define. To highlight a headword within an example of usage, the approach of <cit.> consists in surrounding it with learned task-specific control tokens. We therefore parse examples of usage using SpaCy[ <https://spacy.io/> ] to retrieve the first sequence of tokens whose lemmas match with the lemmas of the term to be defined. The BART model we fine-tune on DefMod has been pretrained on OpenWebText, which contains some pages retrieved from Wiktionary.
We preemptively remove these pages from all dataset splits, so as to ensure there is no overlap between pre-train, train and test data. Frequencies are tabulated on a case-folded, whitespace-normalized subset of the Oscar corpus. In practice, we extract the number of hard string matches of each headword prepended and appended with word boundaries. § BLEU SCORES CORRELATIONS In <Ref>, we display how similar the behaviors of the different models are across splits. Each sub-table corresponds to a different split, and pits all combinations of models against one another. For instance, the last cell in the second row of sub-Table <ref> indicates that the Pearson correlation between the ∀G1E and the 1G1E models on the #≤ 5 test split is above 88.4%. The crucial fact that emerges from these tables is that the distribution of BLEU is very similar across all models we tested, which entails that explicit polysemy or contextual diversity do not weigh on performances, as measured through BLEU scores.
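As an illustration of how these cross-model correlations are obtained, the sketch below computes pairwise Pearson coefficients from per-definition BLEU scores; the score lists are placeholder values, not results from our experiments.

# Minimal sketch: pairwise Pearson correlations between per-definition BLEU
# scores of the four configurations (placeholder values, not real results).
import numpy as np

bleu_scores = {          # one BLEU value per test item, per configuration
    "∀G∀E": [12.1, 3.4, 55.0, 0.0, 21.7],
    "∀G1E": [10.9, 2.8, 51.2, 0.0, 19.5],
    "1G∀E": [13.0, 4.1, 49.8, 1.2, 22.3],
    "1G1E": [11.5, 3.0, 47.6, 0.5, 20.1],
}

names = list(bleu_scores)
matrix = np.corrcoef([bleu_scores[n] for n in names])  # Pearson r matrix

for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            print(f"r({a}, {b}) = {matrix[i, j]:.3f}")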
http://arxiv.org/abs/2306.03774v2
20230606153222
Exploring Hybrid Linguistic Features for Turkish Text Readability
[ "Ahmet Yavuz Uluslu", "Gerold Schneider" ]
cs.CL
[ "cs.CL" ]
Exploring Hybrid Linguistic Features for Turkish Text Readability
Ahmet Yavuz Uluslu, Gerold Schneider
================================================================================================ This paper presents the first comprehensive study on automatic readability assessment of Turkish texts. We combine state-of-the-art neural network models with linguistic features at lexical, morphosyntactic, syntactic and discourse levels to develop an advanced readability tool. We evaluate the effectiveness of traditional readability formulas compared to modern automated methods and identify key linguistic features that determine the readability of Turkish texts. § INTRODUCTION Automatic Readability Assessment (ARA) is an important task in computational linguistics that aims to automatically determine the level of difficulty of understanding a written text, which has implications for various fields, such as healthcare, education, and accessibility <cit.>. In the healthcare sector, medical practitioners can use ARA tools to ensure patient information and consent forms are easily understandable <cit.>. In the field of education, teachers and learners alike can benefit from ARA systems to adapt materials to the appropriate language proficiency level <cit.>. The appropriate readability of technical reports and other business documents is critical to ensure that the intended audience can fully understand the content and can make informed decisions <cit.>. In areas such as cyber-security, readability is particularly important as it can impact response time to risk closures and case materials <cit.>. The task of assessing readability presents challenges, particularly when dealing with large corpora of text. Manual extraction and calculation of linguistic features are time-consuming, expensive, and prone to human errors, leading to subjective labels <cit.>. Recent research in the field has focused on developing automated methods for extracting linguistic predictors and training models for readability assessment. Despite these crucial applications and developments, readability efforts in Turkish have largely been confined to traditional readability formulas, such as Flesch-Kincaid <cit.> and its adaptations <cit.>. Several previous studies have pointed out the shortcomings of these formulas <cit.>. They typically rely on superficial text features such as sentence length and word length. The integration of complex morphological, syntactic, semantic, and discourse features in modern ARA approaches offers the possibility of significantly improving the current readability studies in Turkish. In this paper, we present the first ARA study for Turkish. Our study combines traditional raw text features with lexical, morpho-syntactic, and syntactic information to create an advanced readability assessment tool for Turkish. We demonstrate the effectiveness of our tool on a new corpus of Turkish popular science magazine articles, published for different age groups and educational levels. Our study aims to contribute to the development of automated tools for accessibility, educational research, and language learning in Turkish. The rest of the paper is organized as follows. In Section 2, we review related work on readability assessment and machine learning-based approaches. In Section 3, we describe our corpus and the linguistic features used in our study. In Section 4, we present the results of our experiments and analyze the effectiveness of our tool.
Finally, in Section 5, we conclude our research and discuss future directions. § PREVIOUS WORK The research of quantifying text readability, or the ease with which a text can be read, has a history spanning over a century <cit.>. Initial research was centered on the creation of lists of difficult words and readability formulas such as Flesch Reading Ease <cit.>, Dale-Chall readability formula <cit.>, Gunning FOG Index <cit.> and SMOG <cit.>. These formulas are essentially simple weighted linear functions that utilize easily measurable variables such as word and sentence length, as well as the proportion of complex words within a text. Initially developed for the English language, the Flesch Reading Ease formula required recalibration for its application to Turkish, a task undertaken by <cit.>. However, a significant obstacle in its adoption was Atesman's failure to disclose the statistical variables used in the recalibration process. This gap was later addressed in the work of <cit.>, which also assigned appropriate grade levels, thus facilitating its practical use in the Turkish educational context. Not long after the adaptation, <cit.> introduced an important refinement, akin to the approach taken in the SMOG formula. They posited that the impact of polysyllabic words on text complexity is distinct from the total number of syllables present in the text. Accordingly, they included the counts of polysyllabic words (those with 3-, 4-, and 5+ syllables). <cit.> encountered inconsistencies when applying the Gunning FOG Index to Turkish texts which led to the development of their adaptation. The limitations are mainly due to the subjective nature of the formula in identifying complex words and concepts, which contrasts with other formulas that use easier-to-identify criteria such as syllable counts. Readability assessment has found practical applications in several areas in Turkish, particularly in the fields of medicine and education. For instance, researchers have used the Flesch-Kincaid and Atesman readability formulae to assess the readability of anaesthesia consent forms in Turkish hospitals, which led to valuable insights into how these documents could be optimised for better comprehension <cit.>. In the realm of education, readability studies have been employed to evaluate the complexity of textbooks, thereby ensuring that these crucial learning materials are appropriate for the targeted student age group. For example, research has been conducted to determine the readability levels of Turkish tales in middle-school textbooks, providing insights that could potentially enhance the quality of education by aligning learning materials with students' comprehension abilities <cit.>. While traditional readability formulas have significantly contributed to the field of readability assessment, they are not without their limitations. They often rely heavily on surface-level text features, such as word and sentence length, and fail to account for deeper linguistic and cognitive factors that influence readability <cit.>. Readability formulae have inherent limitations that can affect their accuracy and applicability. Given the unique phonetic attributes, sentence formation patterns, and mean syllable length in each language, each language requires its own calibrated readability formula. The validity of studies employing readability formulae calibrated for the English language to evaluate texts in other languages remains questionable. 
In practice, applying an English-calibrated formula to Turkish texts may result in an overestimation of readability levels. Indeed, most studies that have used this approach have reported inflated readability requirements <cit.>. Furthermore, the evolution of language over time may necessitate periodic recalibration of these formulas <cit.>. As language trends evolve and new words and phrases become commonplace, readability formulas must adapt to remain accurate and relevant. The research indicates that traditional readability measures display unreliable performance when applied to non-traditional document types, such as web pages <cit.>. Traditional readability formulas, despite their extensive use, have been criticised for their lack of wide linguistic coverage <cit.>. These formulas predominantly focus on superficial text features, largely ignoring other linguistic aspects that significantly contribute to text readability. Factors such as syntactic and semantic complexity, discourse structure, and other linguistic branches recognised by <cit.>, which are integral to comprehending a text, remain largely unaccounted for in these traditional models. This narrow linguistic focus can lead to inaccuracies in readability assessment, especially when applied to languages or texts with diverse linguistic structures. These scores are relative measures of readability that should be interpreted in the context of the text's overall features and the target audience's reading ability. They are not absolute measures and treating them as such can result in a misunderstanding of the text's actual readability. Practitioner errors in applying readability formulas often stem from methodological shortcomings and misinterpretations <cit.>. The requirement of traditional measures for considerable text sample sizes introduces another impediment, even though the theoretical minimum size for a text sample has yet to be conclusively established. A common methodological error is the inappropriate sampling of text. Some studies might only consider a limited section of a text, such as the first 100 words, leading to skewed results, especially in scientific texts where complexity often increases later in the document. Similarly, the selective assessment of text sections that do not accurately mirror the overall complexity of the text, like focusing solely on the introduction or conclusion, can misrepresent the readability level. In recent years, research in ARA has shifted from traditional linear models, which use simple metrics such as word and sentence length to estimate the reading level of a text, to fine-grained features <cit.>. These approaches often rely on machine learning models trained on a combination of word counts, lexical patterns, discourse analysis, morphology, and syntactic structures. There has been an emerging trend toward using neural models for ARA. These models have demonstrated the capacity to implicitly capture the previously mentioned features without the need for manual feature extraction <cit.>. <cit.> and <cit.> experimented with contextual embeddings of BERT <cit.> for the readability assessment task, achieving on-par or better results than feature-based approaches. However, both studies omitted cross-domain evaluation, leading to uncertainty about the extent to which language models rely on topic and genre information, as opposed to readability.
Other studies have further explored various strategies to integrate linguistic features with transformer models, promoting a fusion of traditional and neural approaches <cit.>. The state-of-the-art results are currently being achieved by hybrid models that ensemble linguistic features with transformer-based models, highlighting the combined strength of traditional and modern approaches. § CORPUS The most widely used readability corpora include One Stop English (OSE) <cit.>, the WeeBit corpus <cit.> and the Newsela corpus <cit.>. While the majority of these benchmark datasets and corpora are available in English, there is a growing interest in the development of readability corpora in other languages. In the context of low-resource languages, limited access to digital text resources necessitates reliance on conventional learning materials, such as classroom materials and textbooks. There are currently no existing readability corpora available for Turkish. §.§ TUBITAK PopSci Magazine Readability Corpus Our corpus was constructed using popular science articles from TUBITAK Popular Science Magazines [<https://yayinlar.tubitak.gov.tr/>] spanning the period 2007 to 2022. The articles are openly published and made available for non-commercial redistribution and research purposes. We selected 2250 articles from three magazines, each catering to readers of different age groups. These magazines include Meraklı Minik (for ages 0-6), Bilim Çocuk (for ages 7+), and Bilim ve Teknik (for ages 15+). Accordingly, we consider the articles from these magazines as elementary, intermediate, and advanced level reading material. Our corpus is non-parallel and encompasses a diverse range of topics, including instructions for laboratory experiments and brief articles about recent scientific discoveries. This characteristic is similar to that of the WeeBit corpus <cit.>, which also includes articles from various topics and resources. Given that the articles in our corpus are written by experts and specifically tailored for distinct age groups, it can be appropriately regarded as an 'expert-annotated' corpus. We used an off-the-shelf pdf-to-text converter to extract the relevant article text and manually corrected the articles to ensure the conversion accuracy of Turkish characters and the layout integrity. Table <ref> displays descriptive statistics for the finalized corpus. As expected, the advanced texts display a greater average length compared to the elementary texts. However, the high standard deviation values for each level indicate that other factors beyond text length may have a significant impact on determining the reading level of a given text. We also performed a preliminary analysis on the three reading levels of the corpus using traditional formulae and showed the results in Table <ref>, presenting the readability metrics Atesman, Cetinkaya-Uzun and Type-Token Ratio (TTR). Atesman and Cetinkaya readability scores decrease from one level to the next, indicating that texts become more complex at higher reading levels. In contrast, the TTR score increases, suggesting that texts become more diverse and less repetitive at higher reading levels. It should also be noted that the readability levels of the elementary-level articles in both formulas were not suitable for the intended age group and that the magazine's disclaimer states that certain articles may require the assistance of an adult or parent. Table <ref> presents examples of articles representing each of the three reading levels.
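For illustration, the sketch below computes the two kinds of statistics reported in this preliminary analysis, an Atesman score and a plain type-token ratio, for a toy Turkish text. It assumes the commonly cited Atesman coefficients and approximates syllable counts by vowel counts (every Turkish syllable contains exactly one vowel); it is a simplified sketch, not the exact implementation behind Table <ref>.

# Minimal sketch: Atesman readability and type-token ratio for Turkish text.
# Assumptions: commonly cited Atesman coefficients; syllables approximated by
# vowel counts.
import re

VOWELS = set("aeıioöuüAEIİOÖUÜ")

def atesman(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    syllables = sum(1 for ch in "".join(words) if ch in VOWELS)
    avg_syll_per_word = syllables / len(words)
    avg_words_per_sentence = len(words) / len(sentences)
    return 198.825 - 40.175 * avg_syll_per_word - 2.610 * avg_words_per_sentence

def type_token_ratio(text: str) -> float:
    tokens = [w.lower() for w in re.findall(r"\w+", text)]
    return len(set(tokens)) / len(tokens)

sample = "Kediler süt içer. Kediler bahçede oynar."
print(f"Atesman score: {atesman(sample):.1f}")
print(f"TTR: {type_token_ratio(sample):.2f}")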
§ LINGUISTIC FEATURES In this study, we explore five subgroups of linguistic features from our Turkish readability corpus: traditional or surface-based features, syntactic features, lexico-semantic features, morphological features, and discourse features. We employ spaCy v3.4.0 <cit.> with the pre-trained tr_core_news_trf model[<https://huggingface.co/turkish-nlp-suite/tr_core_news_trf>] for the majority of general tasks, including entity recognition, POS tagging, and dependency parsing. We use the Stanford Stanza parser version 1.5.0 <cit.> for constituency parsing. §.§ Traditional Features (TRAD) Traditional or surface-based features are commonly used to predict the readability of Turkish texts, and we also adopt them as a baseline for our study. Specifically, we extract 7 traditional features, including Turkish adaptations of well-known readability formulas such as Atesman and Cetinkaya-Uzun, as well as average word and syllable counts. As noted by <cit.> in their evaluation of the Turkish readability formulae, the impact of the number of polysyllabic words on text complexity is different from that of the total number of syllables present in the text. Therefore, we also included the counts of polysyllabic words (3-, 4-, and 5+ syllables) as separate features in our analysis. §.§ Syntactic Features (SYNX) Syntactic properties have a significant impact on the overall complexity of a given text, which serves as an important indicator of readability. We extract an array of syntactic features that capture various dimensions of sentence structure. Phrasal and dependency type features: Reading abilities are related to the ratios involving clauses in a text <cit.>. We extract features based on noun and verb phrases at sentence and article levels. We integrate features based on the unconditional probabilities of their dependency-based equivalents <cit.>. These encompass various types of syntactic dependencies, including subject, direct object, and modifier, among others. Parse tree depth features: The depth and structure of dependency trees in a text can reflect the level of sentence complexity. Following this principle, we extract the average and maximum depths of the constituency and dependency tree structures present in the text <cit.>. Part-of-Speech features: Part-of-speech (POS) tags provide essential information about the syntactic function of words in sentences. Adapting the work of <cit.> and <cit.>, we include features based on universal POS tag counts. Such features offer insights into the distribution and usage of different word categories, adding another layer of syntactic information. §.§ Lexico-Semantic Features (LXSM) Lexico-semantic features are a set of linguistic attributes that can reveal the complexity of a text's vocabulary. These features can be used to identify specific words or phrases that may be difficult or unfamiliar to readers <cit.>. Lexical Variation features: Second language acquisition research has found a correlation between the diversity of words within the same Part-Of-Speech (POS) category and the lexical richness of a text <cit.>. We extract noun, verb, adjective, and adverb variations, which represent the proportion of the respective category's words to the total. Type Token Ratio (TTR) features: TTR is a commonly used metric to quantify lexical richness and has been widely employed in readability assessment studies. We compute five distinct variations of TTR from <cit.>.
The standard TTR variations of a text sample are susceptible to the text length, which can introduce bias in the readability assessment. To address this limitation, we also consider the Moving-Average Type–Token Ratio (MATTR) <cit.>. The MATTR mitigates the length-dependency issue by calculating the TTR score within a moving window across the text. Psycholinguistic features: We adopted word frequencies obtained from the Turkish psycholinguistic database created by <cit.>. This resource was built from transcriptions of children's speech and corpora of children's literature, thus containing words commonly acquired during early development. It also includes words typically acquired during adulthood from a standard corpus. We extracted the average word and sentence frequency for both early and late-acquired words. We calculate features based on the average log10 values similar to the SubtlexUS corpus <cit.>. Word Familiarity features: Familiarity with specific words can greatly affect readability. Based on prior work on Italian <cit.> and French <cit.> readability studies, we assessed the vocabulary composition of the articles using a reference list of 1700 basic words essential for achieving elementary reading proficiency in Turkish. This list, a combination of the first 1200 words taught to children aged 0-6 <cit.> and a set of essential words from an open-access textbook[<https://www.turkishtextbook.com/most-common-words/>] for learning Turkish, provides a benchmark for vocabulary familiarity. We calculated the percentage of unique words (types) in the text based on this reference list, performed on a lemma basis. §.§ Morphological features (MORPH) Morphological complexity plays a significant role in readability assessment, particularly in languages that are morphologically richer than English such as German <cit.> and Basque <cit.>. In our study, we integrate the Morphological Complexity Index (MCI) from <cit.>. The MCI captures the variability of morphological exponents of specific parts-of-speech within a text by comparing word forms with their stems. We calculate MCI features for verbs, nouns, and adjectives, considering different sample sizes and sampling techniques with and without repetition. MCI has been leveraged in cross-lingual readability assessment frameworks, proving its applicability across languages with varying morphological structures <cit.>. However, these studies have not explored agglutinative languages such as Turkish and Hungarian. §.§ Discourse features (DISCO) The final group of features we examine are entity density features. The presence and frequency of entities within a text can significantly impact the cognitive load required for comprehension. Entities often introduce new conceptual information, thereby increasing the burden on the reader's working memory. This relationship between entities and readability was previously shown by <cit.>. § EXPERIMENTS We experiment with four different setups: trad-baseline (non-neural model with shallow features), modern-baseline (non-neural model with linguistic features), neural (pretrained transformer models), and hybrid (modern-baseline + neural). We use 10-fold cross-validation (10FCV) and evaluate our models using standard metrics such as accuracy, precision, recall, and macro F1-score. Specifically, we choose traditional learning algorithms such as Logistic Regression, Support Vector Machines, Random Forest and XGBoost as our baseline models. 
We perform a randomised search to explore a reasonable range of hyper-parameter values. We apply a grid search to identify the optimal combination of hyper-parameter values within this range. §.§ Non-Neural Models with Linguistic Features Given the lack of available baselines for the readability task in Turkish, our first objective is to establish a baseline for the readability task. This baseline (trad-baseline) is designed to be on par with traditional readability formulas and is reliant on shallow linguistic features such as sentence and word lengths. By establishing this baseline, we are effectively creating a benchmark that allows for meaningful comparison with traditional readability formulas, which are the only available methods in readability assessment for Turkish. We expand our feature set and include a more diverse set of linguistic feature groups (modern-baseline). We are interested in the performance of individual features, but we also aim to identify the best-performing combinations when these features are assembled into linguistic groups. §.§ Neural Models We build on the established usage of transformer-based models in readability assessment <cit.> and opt for the BERTurk model[<https://huggingface.co/dbmdz/bert-base-turkish-uncased>] for our analysis. We tested multiple learning rates and batch sizes to ascertain the optimal configuration for our task. Specifically, we examined the learning rates of [1e-5, 2e-5, 3e-5, 1e-4] and the batch sizes of [8, 16, 32]. Our final model used the AdamW optimizer, a linear scheduler with 10% warmup steps, a batch size of 8, and a learning rate of 3e-5. The sequence lengths of our input documents were all set to 512 tokens. We fine-tune our model for three epochs. §.§ Hybrid Model In our study, we experiment with a hybrid model approach that aims to leverage the strengths of both neural and non-neural models in an ensemble learning strategy. The premise behind the hybrid model is based on the observation that while neural models such as BERT have demonstrated robust performance across diverse tasks, they could still benefit from incorporating handcrafted linguistic features, which have been key components in traditional non-neural models <cit.>. Our hybrid model takes a straightforward approach similar to that of <cit.> and <cit.>. It combines the soft label predictions generated by the neural model with handcrafted features. This combined feature set is then used as input to a non-neural (Random Forest) model. § RESULTS We compare the performance of traditional and modern baselines to illustrate the process of arriving at the best-performing model. The process of feature and model selection for the baseline models was carried out based on the results obtained from different combinations. §.§ Baseline: Feature and Model Evaluation Four distinct models, namely Support Vector Machines (SVM), Random Forest (RandomF), Logistic Regression (LogR), and XGBoost, were assessed using the combination of five different linguistic groups: traditional (TRAD), lexico-semantic (LXSM), syntactic (SYNX), morphological (MORPH), and discourse (DISCO) features. Table <ref> provides a comparative view of these models' performance when trained using the full combination. Among the four models evaluated, the Random Forest model delivered the highest performance with 85.3%. Importantly, all of the linguistic groups used provide orthogonal or distinct information.
Table <ref> demonstrates how each contributing linguistic group incrementally improves the accuracy of the Random Forest model. Their combined strength ultimately achieves the highest overall accuracy score. The varying levels of performance between the different approaches are demonstrated in Table <ref>. The hybrid model, which combines the strengths of both traditional and neural methodologies, outperforms all other models, securing the highest values for accuracy, precision, recall, and F1 score. Following the hybrid model, the neural model performs best. The neural model (BERT) demonstrates an enhanced ability to capture nuanced characteristics of text readability, exhibiting performance superior to the baseline models even without any handcrafted linguistic features. The modern baseline, incorporating five different linguistic subgroups, achieves superior performance compared to the traditional baseline. This highlights the advantage of leveraging an extended set of linguistic features over merely relying on surface-level features typical of traditional readability formulae. § DISCUSSION §.§ Model Interpretation In order to gain insights into the significance of individual linguistic features within our best-performing model, the RF model, we utilised two well-established model interpretation techniques specifically designed for Random Forest models: Feature Permutation and Mean Decrease in Impurity (MDI), as shown in Figures 1 and 2. §.§ Feature Correlation We also considered model-independent analysis through Spearman correlation to gain additional perspective on the importance of features with respect to readability levels. Table <ref> presents the ten features with the highest Spearman correlation coefficients, highlighting their significance for readability assessment. §.§ Linguistic Features The analysis of feature importance consistently highlights the significant role of simple measures such as average sentence length and polysyllable counts. These findings align with previous research, where it has been shown that, even compared to more complex feature extraction methods, a simple measure such as sentence length can indirectly capture multiple linguistic aspects of readability. Furthermore, our analysis demonstrates that lexico-semantic features play a prominent role in determining readability. This is evident from the performance improvement observed when including the LXSM linguistic feature set in the modern-baseline method. It indicates that while traditional features are indeed valuable, incorporating fine-grained information at the semantic and lexical level can lead to an even better understanding of overall readability. The consistent presence of the syntactic feature "mean tree depth" further supports the relationship between sentence length and syntactic complexity. The correlation between mean tree depth and mean sentence length suggests that the structural complexity captured by syntactic features aligns with the overall complexity of sentences. § CONCLUSION We introduced a new readability corpus based on popular science magazine articles, providing a valuable resource for future research in Turkish readability assessment. By exploring the effectiveness of linguistic features at different levels, we have demonstrated their superiority over traditional readability formulae and shallow-level features. Our findings emphasise the importance of incorporating fine-grained linguistic features, as they provide more comprehensive insights into the complexity of Turkish texts.
We showed the potential of hybrid models that combine fine-grained features with neural models by leveraging the strengths of both linguistic features and state-of-the-art transformers. § ACKNOWLEDGEMENTS We would like to pay special thanks to Sefa Kalkan from Charles University for his contributions to the construction of the readability corpus.
http://arxiv.org/abs/2306.07248v3
20230612171556
Self-testing of semisymmetric informationally complete measurements in a qubit prepare-and-measure scenario
[ "Gábor Drótos", "Károly F. Pál", "Tamás Vértesi" ]
quant-ph
[ "quant-ph" ]
Self-testing of semisymmetric informationally complete measurements in a qubit prepare-and-measure scenario
Gábor Drótos, Károly F. Pál, Tamás Vértesi
MTA Atomki Lendület Quantum Correlations Research Group, Institute for Nuclear Research, P.O. Box 51, H-4001 Debrecen, Hungary; Institute for Nuclear Research, P.O. Box 51, H-4001 Debrecen, Hungary
July 31, 2023
=========================================================================================================== Self-testing is a powerful method for certifying quantum systems. Initially proposed in the device-independent (DI) setting, self-testing has since been relaxed to the semi-device-independent (semi-DI) setting. In this study, we focus on the self-testing of a specific type of non-projective qubit measurements belonging to a one-parameter family, using the semi-DI prepare-and-measure (PM) scenario. Remarkably, we identify the simplest PM scenario discovered so far, involving only four preparations and four measurements, for self-testing the fourth measurement. This particular measurement is a four-outcome non-projective positive operator-valued measure (POVM) and falls in the class of semisymmetric informationally complete (semi-SIC) POVMs introduced by Geng et al. [Phys. Rev. Lett. 126, 100401 (2021)]. To achieve this, we develop analytical techniques for semi-DI self-testing in the PM scenario. Our results shall pave the way towards self-testing any extremal qubit POVM within a potentially minimal PM scenario. § INTRODUCTION POVMs in quantum theory. Measurement lies at the core of all physical sciences. In quantum mechanics, John von Neumann pioneered the concept of measurement by representing a projective measurement on a physical system by a complete set of orthogonal projectors on a Hilbert space <cit.>. However, in modern quantum theory, a more general notion, the positive operator-valued measure (POVM), describes the measurement <cit.>. A POVM defines a set of positive operators (M_i≽ 0) that sum up to the identity (∑_i M_i=𝟙), without the constraint of M_i^2=M_i as in the projective case. Non-projective POVMs offer advantages in quantum information, enhancing entanglement detection <cit.> and state discrimination <cit.>. They find useful applications in various quantum protocols such as quantum coin flipping, quantum money, and quantum cryptography <cit.>. A paradigmatic example of a non-projective POVM is known as a symmetric informationally complete (SIC) POVM, which comprises rank-one POVM elements <cit.>. A d-dimensional SIC can be interpreted both as a set of d^2 pure states {|ψ_i⟩}_i and as a POVM {M_i}_i with d^2 elements M_i=(1/d)|ψ_i⟩⟨ψ_i|. Each element corresponds to a distinct outcome, and they satisfy Tr(M_i)=1/d for all i. Additionally, the elements of the POVM {M_i}_i satisfy the condition Tr(M_iM_j)=|⟨ψ_i|ψ_j⟩|^2/d^2=1/(d^2(d+1)) for all i≠ j. Indeed, this condition ensures the completeness of the POVM, i.e., ∑_i M_i=𝟙. While the simplest explicit construction of a SIC POVM is in dimension two, it can be constructed in higher dimensions as well. In particular, it is conjectured that SIC POVMs exist in all finite dimensions d≥ 2 <cit.>, although a general proof is yet to be established. A comprehensive review of SIC POVMs can be found in Ref. <cit.>. In fact, SIC POVMs belong to the broader class of informationally complete (IC) measurements <cit.>.
This class has the useful property that the probabilities for the outcomes of any measurement on a target system can be computed if the probability distribution for the outcomes of an IC measurement on the same target system is known. Within mathematics, SIC POVMs have deep connections with prominent open problems in algebraic number theory, including Hilbert's 12th problem <cit.>. On the other hand, their significance in physics arises from their key role in quantum information theory. They are exploited in a number of protocols such as optimal state tomography <cit.>, quantum key distribution <cit.>, entanglement detection <cit.>, dimension witnessing <cit.>, device-independent randomness generation <cit.>, and certification of measurement devices <cit.>. Recently, Geng et al. <cit.> introduced a broader class of POVMs, called semisymmetric informationally complete (semi-SIC) POVMs, which extends beyond SIC POVMs. Semi-SIC POVMs are derived from SIC POVMs by dropping the condition Tr(M_i)=1/d for all outcomes i=(1,…,d^2) while prescribing the conditions Tr(M_iM_j)=B on the elements of {M_i}_i, which are still required to be rank-one, for all i≠ j; it turns out that B ∈ (1/16,1/12] for d=2 and can take only a few discrete values for d≥ 3. It should be noted that other generalisations of SIC POVMs are also possible. For example, Appleby <cit.> discusses the class of IC POVMs that are symmetric, but the elements M_i are not necessarily rank-one. In this paper, our focus is on self-testing measurements from the class of semi-SIC POVMs in dimension two within the so-called prepare-and-measure (PM) protocol <cit.>. In the context of self-testing, we will briefly introduce the most stringent device-independent (DI) scenario, and subsequently address the semi-device-independent (semi-DI) PM scenario, which will be the subject of our analysis. Self-testing of quantum systems. Self-testing of quantum systems is a recent approach in quantum information, which is partly related to security and robustness issues. It attempts to formulate protocols and experiments in a device-independent (DI) way <cit.>. The concept of self-testing within the DI scenario was originally proposed by Mayers and Yao <cit.>; it means that the characterisation of the preparation and measurement apparatuses is solely based on the observed measurement statistics. Importantly, no assumptions are made about the internal functioning of the devices; this is made possible by exploiting the Bell nonlocal property of multipartite quantum correlations <cit.>. A popular relaxation of the DI approach is the so-called semi-device-independent (semi-DI) setting, where certain additional assumptions are made about the devices <cit.>. These physical assumptions may involve bounds on the overlap <cit.>, mean energy <cit.>, entropy <cit.> of the prepared states, or may be based on the response of the physical systems to spatial rotations <cit.>. In the standard PM scenario, however, it is customary to assume an upper bound on the dimension of the communicated quantum system <cit.>. Let us now summarise the self-testing results that have been achieved in the literature within a PM scenario. Essentially, three main tasks have been demonstrated: self-testing of (i) quantum states, (ii) projective measurements, and (iii) non-projective POVM measurements. It is worth noting that most of these results are related to the n→ 1 quantum random access code (QRAC) scenario <cit.>.
This scenario can be considered as a special case of the PM scenario, where the goal is to encode n digits into a single message qudit, and the receiver attempts to extract one of the digits. In the case of self-testing of quantum states, a specific set of four qubit states, namely {|0⟩,|1⟩,|+⟩,|-⟩} has been shown to be self-testable within the 2→ 1 QRAC scenario <cit.>. Furthermore, for the 3→ 1 QRAC, it has been proven that the self-testing conditions can certify that the eight prepared states correspond to Bloch vectors forming a cube on the Bloch sphere. Regarding the self-testing of projective measurements, the biased QRAC scenario in Refs. <cit.> could self-test any pair of incompatible qubit projective measurements. In fact, these references could even further self-test certain triples of qubit projective measurements, including those corresponding to three mutually unbiased bases <cit.>. Ref. <cit.> uses the 2→ 1 QRAC protocol to self-test a non-trivial pair of qutrit measurements. These results have subsequently been extended to higher dimensional systems in a more didactic manner. Specifically, Ref. <cit.> uses QRAC with a higher-dimensional message to self-test a pair of d-dimensional measurements based on mutually unbiased bases for arbitrary d (see also <cit.>). In the case of self-testing non-projective measurements, references <cit.> achieve self-testing of an extremal four-outcome qubit POVM known as the qubit SIC POVM. To this end, Ref. <cit.> uses the 3→ 1 QRAC, while Ref. <cit.> employs a truncated version of the 3→ 1 QRAC. Additionally, Ref. <cit.> robustly self-tests d-dimensional SIC POVMs, and Ref. <cit.> self-tests measurements compounded by multiple SICs. Furthermore, Pauwels et al. <cit.> recently certified a real-valued qubit POVM in a minimal scenario. However, this task is weaker than self-testing since it does not establish a connection to a target POVM. Let us also mention a more recent work which provides a family of linear witnesses whose maximum value self-tests arbitrary ensembles of pure states and arbitrary sets of extremal POVMs in PM scenarios of arbitrary Hilbert space dimension <cit.>. It should be noted, however, that the number of preparations and measurements in this work does not in general define a minimal PM scenario. Organisation of the paper. In this research, we present novel analytical methods for self-testing members of the one-parameter family of non-projective qubit measurements introduced by Geng et al. <cit.>, the qubit semi-SIC POVMs, in a semi-DI manner. To this end, we focus on a PM scenario, assuming a two-dimensional bound on the Hilbert space. Firstly, in Section <ref>, we define the qubit semi-SIC POVMs, characterised by the continuous parameter B; we follow a formulation equivalent to that of Geng et al. <cit.>. Moving on, in Section <ref>, we define the general framework for self-testing quantum states and measurements within the qubit PM scenario. The specific setup we employ to self-test non-projective measurements is adopted from Ref. <cit.> and consists of two stages which are applied in our study as follows. (i) In Section <ref>, we construct a two-parameter (c_1,c_2) witness matrix w with the purpose of self-testing four specific qubit states, parameterised by c_1 and c_2. 
These states are shown to be self-tested by the maximum value of the witness if their Bloch vectors correspond to those associated with a semi-SIC POVM, ensured by an appropriate choice of c_1 and c_2 as a function of the parameter B of the semi-SIC POVM, as described in Section <ref>. (ii) Next, in Section <ref>, it is shown that we can achieve self-testing of our particular semi-SIC POVM using the self-tested states mentioned above. Importantly, all of our results are derived analytically and applicable in the noiseless scenario. Finally, we conclude the paper with a discussion in Section <ref>, where we also present open problems related to the minimal configuration and the noisy version of the PM scenario. Our analytical approach shall serve as a stepping stone towards self-testing any extremal qubit POVM in a minimal PM scenario. § THE QUBIT SEMI-SIC POVM In this section, we will discuss the qubit semi-SIC POVM and its representation in terms of Bloch vectors. Note that if {E_i}_i=1^o defines a qubit POVM with o outcomes (that is ∑_i^o E_i=_2 and E_i≽ 0; in what follows, we shall omit the index 2 for ), then it can be expressed as: E_i=a_i(+h⃗_i·σ⃗) for i=(1,…,o), where |h⃗_i|≤ 1, a_i>0, and σ⃗=(σ_x,σ_y,σ_z) is the vector of Pauli matrices. The condition ∑_i=1^o E_i= is equivalent to ∑_i=1^o a_i=1 and ∑_i=1^o a_ih⃗_i=0. We can consider a POVM equivalent to another POVM up to certain unitary or antiunitary transformations as follows. If {E_i}_i defines a qubit POVM, than the elements {UE_iU^†}_i also define a valid qubit POVM, where U is an arbitrary 2× 2 unitary or anti-unitary operator. The following set of POVMs is equivalent to the semi-SIC POVMs constructed in Ref. <cit.> and given by Eqs. (9-10) in that reference. We have chosen the form below, because the symmetries are more transparent in this representation. Namely, our semi-SIC POVM has the following elements: E_1 =a_-/2(+h⃗_1·σ⃗), E_2 =a_-/2(+h⃗_2·σ⃗), E_3 =a_+/2(+h⃗_3·σ⃗), E_4 =a_+/2(+h⃗_4·σ⃗), where a_±=(1±√(1-12B))/2, and we choose the unit vectors h⃗_i as follows: h⃗_1 =(r_-,r_-,q_-), h⃗_3 =(-r_+,r_+,-q_+), h⃗_2 =(-r_-,-r_-,q_-), h⃗_4 =(r_+,-r_+,-q_+), where r_±=√((1-q_±^2)/2) and q_±=√(B)/a_± (due to normalisation). It can be readily checked that (E_iE_j)=B for all i≠ j, and (E_1)=(E_2)=a_-, (E_3)=(E_4)=a_+. On the other hand, a_-(h⃗_1+h⃗_2)+a_+(h⃗_3 + h⃗_4)/2=a_-q_–a_+q_+=0, hence {E_i}_i indeed define a valid POVM with parameter B. This is the so-called semi-SIC POVM. The allowed range of the parameter B is B∈(1/16,1/12]. For B=1/16 the first two elements are E_1=E_2, and therefore the POVM is not informationally complete. Note that for B=1/12 we have a_±=1/2 and q_±=1/√(3)=r_±, that is the four h⃗_i unit vectors define the four corners of a regular tetrahedron, which corresponds to the qubit SIC POVM. Note that we have the following six edges h⃗_i,j=h⃗_i-h⃗_j from the h⃗_i vectors in (<ref>): h⃗_12 = (2r_-,2r_-,0), h⃗_13 = (r_-+r_+,r_–r_+,q_-+q_+), h⃗_14 = (r_–r_+,r_-+r_+,q_-+q_+), h⃗_23 = (r_+-r_-,r_++r_-,q_-+q_+), h⃗_24 = (-r_–r_+,-r_-+r_+,q_-+q_+), h⃗_34 = (-2r_+,2r_+,0), where we can see that the length of the four edges h⃗_13, h⃗_14, h⃗_23 and h⃗_24 is the same. The other two (h⃗_12 and h⃗_34) are orthogonal to each other. This object is called a digonal disphenoid <cit.>. The Bloch vectors spanning this digonal disphenoid are illustrated in Fig. <ref> for B = 1/15 and B = 1/12. In the latter case, corresponding to a SIC POVM, the digonal disphenoid is a regular tetrahedron. 
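The construction above is easy to verify numerically. The following Python sketch is purely illustrative (the helper name semi_sic_povm and the sample value B = 1/15 are our own choices, and numpy is assumed); it builds the four elements from B and checks completeness, (E_iE_j)=B for i≠ j, and the traces (E_1)=(E_2)=a_-, (E_3)=(E_4)=a_+.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.array([sx, sy, sz])

def semi_sic_povm(B):
    """Elements E_1..E_4 of the qubit semi-SIC POVM with parameter B in (1/16, 1/12],
    together with their Bloch vectors h_1..h_4."""
    a_m = (1 - np.sqrt(1 - 12 * B)) / 2            # a_-
    a_p = (1 + np.sqrt(1 - 12 * B)) / 2            # a_+
    q_m, q_p = np.sqrt(B) / a_m, np.sqrt(B) / a_p
    r_m, r_p = np.sqrt((1 - q_m**2) / 2), np.sqrt((1 - q_p**2) / 2)
    h = [np.array([ r_m,  r_m,  q_m]),
         np.array([-r_m, -r_m,  q_m]),
         np.array([-r_p,  r_p, -q_p]),
         np.array([ r_p, -r_p, -q_p])]
    a = [a_m, a_m, a_p, a_p]
    E = [a[i] / 2 * (I2 + np.einsum('k,kij->ij', h[i], pauli)) for i in range(4)]
    return E, h

B = 1 / 15
E, h = semi_sic_povm(B)
assert np.allclose(sum(E), I2)                     # completeness
for i in range(4):
    for j in range(i + 1, 4):
        assert np.isclose(np.trace(E[i] @ E[j]).real, B)
print([round(np.trace(Ei).real, 4) for Ei in E])   # a_-, a_-, a_+, a_+
```

For B = 1/12 the same code reproduces the regular tetrahedron of the qubit SIC POVM.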
We call a POVM extremal if it cannot be decomposed as a convex mixture of other POVMs. Extremal qubit POVMs can have o=2,3 or 4 outcomes. In the case of four outcomes (o=4), an extremal qubit POVM must have four unit vectors h⃗_i of which no choice of three lies in the same plane <cit.>. Only extremal POVMs can be self-tested, as the statistics of nonextremal POVMs can be simulated by stochastically implementing extremal POVMs (more on this in the next section). For the parameter B in the range B∈(1/16,1/12], the unit vectors in (<ref>) defining the vertices of the digonal disphenoid are such vectors, making qubit semi-SIC POVMs possible candidates for self-testing. § SELF-TESTING IN THE QUBIT PM SCENARIO In this section, we will define the concept of self-testing quantum states and measurements in the qubit PM scenario. It is important to note that the PM scenario differs from Bell scenarios in two key points. First, in the PM scenario, the parties are communicating and thus there is no space-like separation between them. Second, unlike a Bell scenario, the PM scenario does not involve entanglement. The PM scenario can be modelled by two separated parties, say Alice and Bob, as follows (see the scenario shown in Fig. <ref>). Upon receiving an input x ∈{1,…,m}, Alice prepares a state ρ_x and sends it to Bob. Bob also receives an input y ∈{1,…,n} and performs a measurement depending on y described by the operators {M_b|y}_by, where b∈{0,…,o-1} or b∈{1,…,o} denotes the outcome of the measurement (the choice for indexing may depend on y). It is worth mentioning that the value of o may also depend on y. This setup generates a probability distribution given by 𝒫(b|x,y) = (ρ_x M_b|y). The goal of self-testing is to characterise a quantum system based only on the observed data, i.e., represented by (<ref>). Self-testing means to infer from the above statistics that the preparations and measurements, denoted by {ρ_x,M_b|y}, are equivalent to some reference or target set {ρ_x',M_b|y'} up to a known transformation (referred to as an isometry). A self-test of the target set can be achieved using a witness 𝒲, which is a linear function of the probability distribution 𝒫(b|x,y): 𝒲 = ∑_x,y,b w_xy^(b)𝒫(b|x,y). Our objective is to construct a witness 𝒲 such that its maximum value Q attainable with qubit systems self-tests the target states and measurements {ρ_x',M_b|y'}. Note that we can self-test only pure states (i.e., ρ'_x=|ψ'_x⟩⟨ψ'_x|) and extremal POVMs. This limitation arises from the fact that the maximum value of the linear witness 𝒲 for a given dimension d can always be achieved with pure states and extremal measurements (see the proof of Lemma 1 in Ref. <cit.> regarding the purity of the states; the proof is analogous for the extremality of the measurements). Considering that 𝒫(b|x,y)=𝒫(b|x,y)^*, we have 𝒫(b|x,y)=(ρ_x M_b|y)=(ρ_x^* M_b|y^*). Additionally, since a self-tested state must be pure, any probability 𝒫(b|x,y) obtained using {|ψ_x⟩,M_b|y} can also be obtained with either {|ψ_x⟩^*,M^*_b|y} or {U|ψ_x⟩,UM_b|yU^†}, where U is an arbitrary 2× 2 unitary. Note that with an important additional assumption on the preparations and measurements that they are independent and identically distributed (i.i.d. case), Ref. <cit.> could show that even mixed qubit states can be self-tested, which is an impossible task in our non-i.i.d. case. Let us now examine how the above statements can be translated to the Bloch representation of states and measurements. 
Namely, we represent qubit states in the Bloch form as follows: ρ_x=+m⃗_x·σ⃗/2, where the Bloch vector m⃗_x has a unit length |m⃗_x|=1, indicating a pure state. We then consider the Bloch vectors h⃗_b of a generic four-outcome qubit POVM defined by (<ref>). In this case, we have the following relationships between the specific vectors (m⃗_x, h⃗_b) and the target vectors (m⃗'_x, h⃗'_b): m⃗_x=Rm⃗'_x, h⃗_b=Rh⃗'_b or m⃗_x=-Rm⃗'_x, h⃗_b=-Rh⃗'_b for all x and b=(1,…,4), where R is an arbitrary three-dimensional rotation matrix. The objective of this paper is to prove, up to an isometry ± R, the equivalence between the Bloch vectors of the target POVM {M'_b|y} and those of the semi-SIC POVM defined by (<ref>) when the witness (<ref>) achieves its maximum value Q during the PM protocol. § SELF-TESTING SPECIFIC QUBIT STATES IN THE PM SCENARIO Below we discuss in detail a PM experiment, which involves two-dimensional (i.e., qubit) systems. In the first part of the self-testing problem, our goal is to self-test four specific pure qubit states. Accordingly, we will restrict our considerations to four preparations (m = 4), three measurement settings (n = 3) with two outcomes per setting (o = 2), and take ρ_x to be a generic qubit. This implies a density operator of the form (<ref>) for the prepared states, and we write M_b|y in the most general form of a two-outcome POVM element, M_b|y = (1+(-1)^bμ_y) +(-1)^bv⃗_y^(b)·σ⃗/2 ; for the sake of completeness, we allow for any Bloch vectors m⃗_x and v⃗_y^(b) with lengths |m⃗_x| ≤ 1 and |v⃗_y^(b)| ≤ 1 and any parameters μ_y ∈ [-1,1] except that we require (1+μ_y)v⃗_y^(0) = (1-μ_y)v⃗_y^(1) to be satisfied. See Fig. <ref> for this setup. For the aim of self-testing, we define the witness according to (<ref>) where, in order to be able to self-test semi-SIC POVMs later, we choose the matrices w^(b) as w^(0) = -w^(1) = w = [r] c_1 1 1 c_1 -1 -1 -c_2 1 -1 -c_2 -1 1 , where c_1 and c_2 are arbitrary real parameters. Note that 𝒲 = ∑_x,y w_xy E_xy, where E_xy is the expectation value of an observable that takes on the values 1 and -1 upon the measurement outcomes b = 0 and b = 1, respectively. The prepared states ρ_x can only be self-tested if m⃗_x, v⃗_y^(b) and μ_y maximise 𝒲. We now assume μ_y = 0 for all y, in which case (<ref>) simplifies to an operator describing a non-degenerate projective measurement: M_b|y = +(-1)^bv⃗_y ·σ⃗/2 , where v⃗_y = v⃗_y^(0) = v⃗_y^(1). From (<ref>) with (<ref>) and (<ref>), it turns out that 𝒲 = ∑_x,y w_xym⃗_x ·v⃗_y . Let us define u⃗_x = ∑_y w_xyv⃗_y , with which we can write 𝒲 = ∑_xm⃗_x ·u⃗_x . If u⃗_x ≠ 0 for all x, then 𝒲 is obviously maximised (taking into account that |m⃗_x| is maximally 1) by choosing m⃗_x = u⃗_x/|u⃗_x| , from which Q := max_m⃗_x,v⃗_y𝒲 = max_v⃗_y∑_x| u⃗_x| , where the equality is true even if u⃗_x = 0 for some x, in which case m⃗_x remains unspecified. Let us introduce Q_v = ∑_x | u⃗_x| , which is to be maximised in v⃗_y. In the main text, we follow a procedure that is straightforward to apply to more general witness matrices w; see Appendix <ref> for a simpler option specific to the form in (<ref>). For the procedure of the main text, we demonstrate in Appendix <ref> that Q_v cannot be maximal in v⃗_y if u⃗_x = 0 for any x, so that (<ref>) holds for maximising 𝒲: the optimal vectors m⃗_x are unit vectors and are parallel with the vectors u⃗_x. It can be similarly shown that the length of the optimal vectors |v⃗_y| must also be 1. 
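Before carrying out the maximisation it is instructive to check the above relations against each other numerically. The sketch below is illustrative only (it assumes numpy, and the helper names bloch_to_rho and witness_matrix are ours): for arbitrary unit vectors v⃗_y and the corresponding optimal preparations m⃗_x = u⃗_x/|u⃗_x|, the Born-rule value of 𝒲, the Bloch form ∑_xy w_xym⃗_x·v⃗_y, and ∑_x|u⃗_x| coincide.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def bloch_to_rho(n):
    # pure qubit state (1 + n.sigma)/2 with |n| = 1
    return (I2 + sum(n[k] * sig[k] for k in range(3))) / 2

def witness_matrix(c1, c2):
    # the 4x3 matrix w of the witness, rows x = 1..4, columns y = 1..3
    return np.array([[ c1,  1,  1],
                     [ c1, -1, -1],
                     [-c2,  1, -1],
                     [-c2, -1,  1]], dtype=float)

rng = np.random.default_rng(1)
c1, c2 = 1.3, 0.7
w = witness_matrix(c1, c2)

# arbitrary projective settings (mu_y = 0): unit Bloch vectors v_y
v = rng.normal(size=(3, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
u = w @ v                                             # u_x = sum_y w_xy v_y
m = u / np.linalg.norm(u, axis=1, keepdims=True)      # optimal m_x for these v_y

W_bloch = np.sum(w * (m @ v.T))                       # sum_xy w_xy m_x.v_y
W_born = sum(w[x, y] *
             np.trace(bloch_to_rho(m[x]) @ sum(v[y, k] * sig[k] for k in range(3))).real
             for x in range(4) for y in range(3))     # sum_xy w_xy Tr[rho_x (M_0|y - M_1|y)]
Q_v = np.linalg.norm(u, axis=1).sum()                 # sum_x |u_x|
print(W_born, W_bloch, Q_v)                           # the three numbers coincide
```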
Then, by (<ref>) and (<ref>), (<ref>) can be expanded as Q_v = √(c_1^2+2+2(c_1v⃗_1·v⃗_2+c_1v⃗_1·v⃗_3+v⃗_2·v⃗_3)) + √(c_1^2+2+2(-c_1v⃗_1·v⃗_2-c_1v⃗_1·v⃗_3+v⃗_2·v⃗_3)) + √(c_2^2+2+2(-c_2v⃗_1·v⃗_2+c_2v⃗_1·v⃗_3-v⃗_2·v⃗_3)) + √(c_2^2+2+2(c_2v⃗_1·v⃗_2-c_2v⃗_1·v⃗_3-v⃗_2·v⃗_3)) . Let us define γ_12 = v⃗_1·v⃗_2 , γ_13 = v⃗_1·v⃗_3 , γ_23 = v⃗_2·v⃗_3 . The maximum of (<ref>) is found where its derivatives with respect to γ_12, γ_13 and γ_23 vanish; this is demonstrated in Appendix <ref>. Substituting (<ref>) after differentiation we obtain c_1/|u⃗_1| - c_1/|u⃗_2| - c_2/|u⃗_3| + c_2/|u⃗_4| = 0 , c_1/|u⃗_1| - c_1/|u⃗_2| + c_2/|u⃗_3| - c_2/|u⃗_4| = 0 , 1/|u⃗_1| + 1/|u⃗_2| - 1/|u⃗_3| - 1/|u⃗_4| = 0 . Adding and subtracting (<ref>) and (<ref>) yields |u⃗_1| = |u⃗_2| , |u⃗_3| = |u⃗_4| , substituting which into (<ref>) results in |u⃗_3| = |u⃗_1| . That is, |u⃗_1| = |u⃗_2| = |u⃗_3| = |u⃗_4| =: l , which, according to (<ref>) and (<ref>)-(<ref>), means l^2 = c_1^2+2+2(c_1γ_12+c_1γ_13+γ_23) , l^2 = c_1^2+2+2(-c_1γ_12-c_1γ_13+γ_23) , l^2 = c_2^2+2+2(-c_2γ_12+c_2γ_13-γ_23) , l^2 = c_2^2+2+2(c_2γ_12-c_2γ_13-γ_23) . Subtracting (<ref>) from (<ref>) and (<ref>) from (<ref>) gives γ_12 = γ_13 , γ_12 = - γ_13 , that is, v⃗_1·v⃗_2 = 0 , v⃗_1·v⃗_3 = 0 . With these results, (<ref>)-(<ref>) simplify to l^2 = c_1^2+2+2γ_23 , l^2 = c_2^2+2-2γ_23 , from which v⃗_2·v⃗_3 = c_2^2-c_1^2/4 =: cos(2θ) . (<ref>), (<ref>) and (<ref>) imply that, up to an isometry, v⃗_1 = (1, 0, 0) , v⃗_2 = (0, cosθ, sinθ) , v⃗_3 = (0, cosθ, sin(-θ)) , from which, using (<ref>), u⃗_1 = (c_1, 2cosθ, 0) , u⃗_2 = (c_1, -2cosθ, 0) , u⃗_3 = (-c_2, 0, 2sinθ) , u⃗_4 = (-c_2, 0, -2sinθ) , and the corresponding m⃗_x vectors are obtained from (<ref>). Furthermore, substituting (<ref>), (<ref>) and (<ref>) to (<ref>), we obtain Q = 2 √(2(c_1^2+c_2^2+4)) . Measuring this 𝒲 value self-tests the states characterised by (<ref>)-(<ref>) (up to an isometry). Note that we have assumed μ_y = 0 for all y to come to this result; we show in Appendix <ref> that relaxing this assumption can only lead to smaller 𝒲 values if c_1 and c_2 are linked to the parameter B of a semi-SIC POVM as described in the next section. § CORRESPONDENCE BETWEEN THE SPECIFIC STATES AND SEMI-SIC POVMS For self-testing semi-SIC POVMs, it will be useful to relate the Bloch vectors of the states prepared by Alice to those of a semi-SIC POVM through linking c_1 and c_2 to the parameter B of a semi-SIC POVM. Note that the results in Section <ref> are more general, since they are valid for any states and measurements characterised by Bloch vectors with any real c_1 and c_2 in (<ref>)-(<ref>) such that the condition μ_y = 0 for all y maximises 𝒲. For establishing a relationship, what we require is that the dot products between the Bloch vectors m⃗_x of Alice's prepared states as given by (<ref>) with (<ref>)-(<ref>) (determined up to an isometry) be equal to the dot products between the Bloch vectors h⃗_i, i ∈{1,2,3,4}, of the semi-SIC POVM with parameter B, given in (<ref>): m⃗_i ·m⃗_j = h⃗_i ·h⃗_j for all i,j ∈1,2,3,4. Since |m⃗_i| = 1 in the case in question and also |h⃗_i| = 1 for all i ∈{1,2,3,4}, this requirement is equivalent to requiring the angles between the Bloch vectors to be identical. The dot products between the different vectors h⃗_i can be computed from (<ref>) and are given as h⃗_1 ·h⃗_2 = 1-15B+√(1-12B)/9B , h⃗_1 ·h⃗_3 = h⃗_1 ·h⃗_4 = h⃗_2 ·h⃗_3 = h⃗_2 ·h⃗_4 = -1/3 , h⃗_3 ·h⃗_4 = 1-15B-√(1-12B)/9B . 
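Both the optimal configuration just obtained and the closed forms for the dot products can be confirmed numerically; the sketch below reuses semi_sic_povm and witness_matrix from the earlier sketches (our own helpers), with the sample values B = 1/15 and (c_1, c_2) = (1.3, 0.7).

```python
import numpy as np

# dot products of the semi-SIC Bloch vectors versus the closed forms above
B = 1 / 15
_, h = semi_sic_povm(B)
print(h[0] @ h[1], (1 - 15 * B + np.sqrt(1 - 12 * B)) / (9 * B))   # h1.h2
print(h[0] @ h[2], h[1] @ h[3], -1 / 3)                            # mixed products, all -1/3
print(h[2] @ h[3], (1 - 15 * B - np.sqrt(1 - 12 * B)) / (9 * B))   # h3.h4

# optimal configuration of the previous section for generic c1, c2
c1, c2 = 1.3, 0.7
theta = 0.5 * np.arccos((c2**2 - c1**2) / 4)
v = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta),  np.sin(theta)],
              [0.0, np.cos(theta), -np.sin(theta)]])
u = witness_matrix(c1, c2) @ v
print(np.linalg.norm(u, axis=1))                       # the four |u_x| are equal
print(np.linalg.norm(u, axis=1).sum(),
      2 * np.sqrt(2 * (c1**2 + c2**2 + 4)))            # Q matches the closed form
```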
Substituting these expressions into (<ref>) determines the values of c_1 and c_2 as a function of B up to a sign such that c_1 and c_2 have the same sign: c_1 = ±√(2(1-6B+√(1-12B))/24B-1) , c_2 = ±√(2(1-6B-√(1-12B))/24B-1) . With this choice of c_1 and c_2, the redundant expressions evaluate to identity. We thus conclude that with these values of c_1 and c_2, the Bloch vectors m⃗_x defined by (<ref>) with (<ref>)-(<ref>) (determined up to an isometry) also characterise a semi-SIC POVM. In the special case of a SIC POVM, corresponding to B = 1/12 <cit.>, (<ref>)-(<ref>) give c_1 = c_2 = ± 1. According to (<ref>), (<ref>)-(<ref>) also imply Q = 24 √(B/24B-1) , which becomes Q = 4√(3) for a SIC POVM. We show in Appendix <ref> that μ_y ≠ 0 for any y can only lead to smaller values of 𝒲 than in (<ref>), so that μ_y = 0 for all y is a justified assumption for maximising 𝒲. § SELF-TESTING SEMI-SIC POVMS For the second part of the self-testing problem, we provide, as an extension, a fourth option for Bob's input, upon which he performs a measurement described by a POVM consisting of elements M_b|y=4, b ∈1,2,3,4, acting on the two-dimensional Hilbert space; see Fig. <ref>. The operator M_b|y=4 corresponding to each result b is described by a Bloch vector n⃗_b and a coefficient λ_b: M_b|y=4 = λ_b ( +n⃗_b·σ⃗) . Since the operators {M_b|y=4}_b constitute a POVM, we have that ∑_b=1^4 M_b|y=4 = , from which the coefficients λ_b uniquely follow once the vectors n⃗_b are given and are not co-planar (taking into account that the Pauli matrices and the identity matrix are linearly independent). Therefore, the POVM in question is fully characterised by the Bloch vectors n⃗_b unless they lie in the same plane, in which case the POVM cannot be self-tested. We now require that the probability of measuring the xth result when Alice prepares the xth state be zero; according to Born's formula, this means 𝒫(b=x|x,y=4) ≡(ρ_x M_x|y=4) = 0 . Substituting (<ref>) and (<ref>), this condition can be re-written as λ_x( 1+m⃗_x·n⃗_x) = 0 , from which, assuming naturally that λ_x≠ 0, it follows that n⃗_x = - m⃗_x with |n⃗_x| = |m⃗_x| = 1. What we obtained implies the following: if we extend the witness (<ref>), following Ref. <cit.>, as 𝒲' = 𝒲 - k ∑_x=1^4 𝒫(b=x|x,y=4) for some positive constant k (which can be chosen arbitrarily and would be relevant in a noisy setting <cit.>), it can take the value 𝒲 only if (<ref>) holds. As a consequence, if the vectors m⃗_x correspond to the Bloch vectors of a semi-SIC POVM (as defined by (<ref>) with (<ref>)-(<ref>) and (<ref>)-(<ref>), up to an isometry; see Section <ref>), then the operators {M_b|y=4}_b must also constitute a semi-SIC POVM: in particular, the one with the opposite Bloch vectors. This semi-SIC POVM is thus self-tested by measuring the value given by (<ref>) as 𝒲' as defined by (<ref>). § DISCUSSIONS In this study, our focus was on the self-testing of the members of a particular one-parameter family of four-outcome non-projective qubit measurements (the semi-SIC POVMs) in the PM scenario. This qubit PM scenario provides a natural framework for our investigation. Previous research has successfully self-tested a qubit SIC POVM <cit.>, as well as higher-dimensional SIC POVMs <cit.>. More recently, there have been results in self-testing compounds of SIC POVMs <cit.> and arbitrary extremal POVMs of arbitrary Hilbert space dimension <cit.>. 
We note here that in the so-called one-sided DI framework <cit.>, which utilises an alternative assumption within the semi-DI approach, results on the task of self-testing arbitrary extremal POVMs could also be obtained <cit.>. However, in that scenario, the resource used for self-testing is EPR-steering <cit.>. In our analysis, we focused on the noiseless setup and approached the self-testing problem in the PM scenario analytically. It would certainly be useful to develop a noise-robust methodology. Moreover, it would be valuable to explore self-testing of non-projective measurements under different assumptions than those considered in our work. See e. g. Refs. <cit.> for imposing other assumptions on the mediated particles in the PM scenario. As we conclude this paper, we would like to highlight some open problems that remain in the qubit case. First of all, it would be intriguing to develop a self-testing method in the PM setup for the most general class of extremal four-outcome qubit POVMs, using as few preparations as possible. In particular, can we self-test any extremal qubit POVM using only four preparations and four measurements? If so, is this the minimal setup, or could there exist smaller setups that can be used for self-testing? We do know that a lower bound is given by four preparations and two measurement settings, one with two outcomes and the other with four outcomes. Note that there is a strong hint that a real-valued three-outcome qubit POVM can be self-tested using only two measurement settings <cit.>. Our expectation is that the answers to these open questions will lead to further developments in this area. § ACKNOWLEDGEMENTS We would like to thank Mateus Araújo, Sébastien Designolle, Miguel Navascués and Armin Tavakoli for their valuable input and enlightening discussions. We acknowledge the support of the EU (QuantERA eDICT) and the National Research, Development and Innovation Office NKFIH (No. 2019-2.1.7-ERA-NET-2020-00003). § FINDING THE GLOBAL MAXIMUM VIA THE RELATIONSHIP BETWEEN ARITHMETIC AND QUADRATIC MEANS In order to maximise (<ref>) in v⃗_y (regardless of whether the vectors m⃗_x are well defined or not), we shall make use of the relationship ∑_i=1^N p_i ≤√(N(∑_i=1^N p_i^2)) between arithmetic and quadratic means of real numbers p_i, i ∈{1,…,N}, where equality holds if and only if p_i = p_j for all i and j. In particular, we substitute p_i = | u⃗_i| for i ∈{1,2,3,4}. It then turns out from (<ref>) with (<ref>) that Q_v ≤ 2 √(2[(c_1^2+c_2^2)v⃗_1^2+2(v⃗_2^2+v⃗_3^2)]) . The right-hand side of (<ref>) is obviously maximised by |v⃗_y| = 1 for all y. This maximum can also be reached by the left-hand side by requiring |v⃗_y| = 1 as well for all y and | u⃗_i| = | u⃗_j| for all i,j. It is easy to see that the latter condition leads to (<ref>)-(<ref>) and (<ref>) of the main text; since these requirements allow reaching the maximum of an upper bound of Q_v, they also maximise Q_v itself. The reason for which these requirements allow reaching the maximum of the upper bound in question, i.e., the expression on the right-hand side of (<ref>), is that this right-hand side is independent of the dot products of the vectors v⃗_y, which is a special property of the witness matrix w in (<ref>). Note that (<ref>)-(<ref>) and (<ref>) lead to well-defined vectors u⃗_x ≠ 0, (<ref>)-(<ref>) of the main text (up to an isometry), so that (<ref>) also holds. This implies that the vectors m⃗_x maximising the witness 𝒲 in (<ref>) have unit length, i.e., they describe pure states. 
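The same conclusion can be probed by brute force: sampling random unit vectors v⃗_y never yields a value of Q_v above the bound, while the best samples approach the analytic maximum Q. A minimal sketch (reusing witness_matrix from the illustration in the main text; the sample size and the values of c_1, c_2 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2 = 1.3, 0.7
w = witness_matrix(c1, c2)
Q = 2 * np.sqrt(2 * (c1**2 + c2**2 + 4))      # analytic maximum for |v_y| = 1

K = 200000                                    # random unit triples (v_1, v_2, v_3)
v = rng.normal(size=(K, 3, 3))
v /= np.linalg.norm(v, axis=2, keepdims=True)
u = np.einsum('xy,kyi->kxi', w, v)            # u_x = sum_y w_xy v_y, per sample
Qv = np.linalg.norm(u, axis=2).sum(axis=1)    # Q_v = sum_x |u_x|
print(Qv.max(), "<=", Q)                      # never exceeds, and approaches, Q
```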
§ PROOF ABOUT THE CONFIGURATION OF THE OPTIMAL VECTORS Let us consider the following expression: 𝒲=∑_x,yw_xym⃗_x·v⃗_y , where w_xy is now any matrix having at least one nonzero element in each of its rows and columns. We are interested in finding vectors |m⃗_x|≤ 1, |v⃗_y|≤ 1 maximising the value of 𝒲. First let us choose any set of vectors v⃗_y satisfying the conditions |v⃗_y|≤ 1. Then from 𝒲=∑_xm⃗_x·(∑_y w_xyv⃗_y) it is easy to see that for the chosen set of v⃗_y the optimal m⃗_x is parallel with ∑_y w_xyv⃗_y and normalised (i.e., its length is maximal) whenever |∑_y w_xyv⃗_y|≠ 0, otherwise m⃗_x is undefined: 𝒲 does not depend on it. It is easy to see that max_m⃗_x𝒲 = ∑_x|∑_y w_xyv⃗_y| =: Q_v in either case. Now let us suppose that we have chosen v⃗_y such that at least one of the terms, say the x'th one, in the equation above is zero, i.e., |∑_y w_x'yv⃗_y|=0. Let w_x'y' be nonzero. Let us consider two modifications of v⃗_y': we either add the infinitesimal vector δ⃗ to it, or subtract δ⃗ from it. If |v⃗_y'|=1, both modifications are only allowed if v⃗_y'·δ⃗=0, in which case they correspond to infinitesimal rotations; in other cases this restriction is not necessary. Due to either of the modifications the term |∑_y w_x'yv⃗_y| will grow: its value will become |w_x'y'δ|. If there are other zero terms in (<ref>) they will either grow as well, or remain unchanged. The nonzero terms are analytical functions of the vectors v⃗_y, therefore their sum will either grow due to one of the two modifications (which correspond to opposite directions), or it will only change at second or higher order. Therefore, if (<ref>) contains a zero term, its value can always be increased by modifying a vector appropriately, so such a choice of the vectors v⃗_y cannot correspond to a maximum of Q_v. We have thus proven that there is no zero term for the optimal choice of v⃗_y. It follows from this conclusion that all of the vectors m⃗_x are well-defined unit vectors if the vectors v⃗_y are optimal. One can prove analogously that v⃗_y are also well-defined unit vectors if m⃗_x are optimal. § FINDING THE GLOBAL MAXIMUM BY DIFFERENTIATION We are interested in finding unit vectors v⃗_y maximising the following expression: Q_v =∑_x=1^M_m|∑_y=1^M_v w_xyv⃗_y| =∑_x=1^M_m√(s_x+2∑_y=2^M_v∑_y'=1^y-1t_xyy'γ_yy'), where s_x≡∑_y w_xy^2, t_xyy'≡ w_xyw_xy' and γ_yy'≡v⃗_y·v⃗_y' is an off-diagonal element of the Gram matrix of the set of v⃗_y vectors. The diagonal elements are γ(yy)=1. The set of the optimal v⃗_y vectors is never unique, they are determined only up to a global orthogonal transformation. It is their Gram matrix that may be uniquely determined. It is important to note that the method outlined here may only work if M_v, the number of vectors, is equal to the dimension of the vector space. The function Q_v as a function of γ_yy' maps the set of M_v× M_v positive semidefinite real matrices whose diagonal elements are one onto a subset of the real numbers: the domain of the function is where γ_yy' are elements of such a matrix. However, if we consider γ_yy' just as real numbers ignoring their definition, the formula on the rightmost side of (<ref>) is a well-defined real function of the variables γ_yy' wherever the expression under the square root is non-negative in each term. This region, which is called the natural or implied domain of the expression, is larger than the domain of Q_v. Let us refer to the function with the extended domain as Q_γ. 
The domain of Q_γ is contiguous and is bounded by the hyperplanes where any of the terms in the sum over x is zero (the expressions under the square roots are linear functions of the variables). The function Q_γ is analytical everywhere inside this whole region (but not at its boundaries). What we are going to prove is that if the partial derivatives of Q_γ with respect to all γ_yy' vanish somewhere, then Q_γ will have its global maximum there. If the condition for the partial derivatives holds in a single point, then the solution is unique. If the condition is satisfied outside the domain of Q_v, then although the expression does have its global optimum there, it is not the global optimum of Q_v. It cannot be reached with any set of v⃗_y vectors, so it does not correspond to the solution of the problem. However, if the condition holds at any point belonging to the domain, it is a solution even if the point is at the boundary of the domain, where the Gramian is zero and the v⃗_y are linearly dependent, because that point is the global maximum even in the larger implied domain. What we will prove now is that the Hessian of Q_γ is negative semidefinite everywhere inside its domain. For the sake of simplicity, let us denote the pair of indices yy' with a single index α, where 1≤α≤ M_v(M_v-1)/2. Then Q_γ=∑_x=1^M_m√(s_x+2∑_α=1^M_v(M_v-1)/2t_xαγ_α)≡∑_x=1^M_m l_x , The αβ element of the Hessian of Q_v is: ∂^2 Q_γ/∂γ_α∂γ_β =-∑_x=1^M_mt_xαt_xβ/l_x^3=-∑_x=1^M_mt_xα/l_x^3/2t_xβ/l_x^3/2 =-∑_x=1^M_mR^T_α xR_xβ, where R_xβ≡ t_xβ/l_x^3/2. As the Hessian is minus one times the product of the transpose of a matrix with the matrix itself, it is negative semidefinite. Consequently, the function is concave everywhere, so if all of its partial derivatives are zero somewhere, it has its global maximum there, indeed. However, as we have already mentioned, if this place happens to be outside the domain of Q_v, it does not correspond to the solution. It may also happen that w_xy is such that there is no place at all where all partial derivatives are zero. In these cases, the maximum is somewhere at the boundary of the domain, and one has to find some alternative method to determine the optimal solution for Q_v. The rank of the Hessian is equal to the rank of the matrix R. The rank of a matrix does not change if we multiply its rows by nonzero numbers. Therefore, this rank is the same as that of the matrix t. As the matrix t is independent of the variables, the rank of the Hessian is the same everywhere within the domain of Q_γ. Specifically, the Hessian is negative definite either everywhere or nowhere. If the first derivatives are zero only at one point, the Hessian must be negative definite there, consequently everywhere. It is obvious that this cannot happen if M_m, the number of rows of the matrix R (and t), is less than M_v(M_v-1)/2. Furthermore, if all partial derivatives are zero somewhere, the rows of matrix t must be linearly dependent, as ∑_x t_xα/l_x=0 must hold for all α. In this case the rank of the matrix t is at most M_m-1, therefore, the Hessian can only be negative definite, and the solution for the maximum be unique if M_m≥ M_v(M_v-1)/2+1. The same result has been derived in Ref. <cit.>. If the solution is not unique, w_xy is inappropriate for self-testing. 
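For the witness matrix w of the main text these statements can be checked directly: at the Gram matrix γ_12 = γ_13 = 0, γ_23 = (c_2^2-c_1^2)/4 the gradient of Q_γ vanishes, the Hessian -R^TR is negative semidefinite, and Q_γ equals the maximum Q. A small numerical sketch (reusing witness_matrix from the earlier illustration; variable names are ours):

```python
import numpy as np

c1, c2 = 1.3, 0.7
w = witness_matrix(c1, c2)
pairs = [(0, 1), (0, 2), (1, 2)]                 # alpha = (12), (13), (23)
t = np.array([[w[x, a] * w[x, b] for a, b in pairs] for x in range(4)])
s = (w**2).sum(axis=1)

l = lambda g: np.sqrt(s + 2 * t @ g)             # l_x(gamma)
grad = lambda g: (t / l(g)[:, None]).sum(axis=0) # dQ_gamma/dgamma_alpha = sum_x t_x,alpha / l_x

def hessian(g):                                  # -sum_x t_x,alpha t_x,beta / l_x^3
    R = t / l(g)[:, None]**1.5
    return -R.T @ R

g_star = np.array([0.0, 0.0, (c2**2 - c1**2) / 4])
print(grad(g_star))                              # ~ (0, 0, 0): stationary point
print(np.linalg.eigvalsh(hessian(g_star)))       # non-positive up to rounding: concavity
print(l(g_star).sum(), 2 * np.sqrt(2 * (c1**2 + c2**2 + 4)))   # Q_gamma equals Q
```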
§ EXCLUDING GENERIC TWO-OUTCOME POVMS AS BOB'S MEASUREMENTS We demonstrate in this appendix that choosing μ_y ≠ 0 in (<ref>) for any y can only lead to smaller values of 𝒲 as defined by (<ref>) than (<ref>) if c_1 and c_2 are chosen according to (<ref>)-(<ref>), so that generic two-outcome POVMs can be excluded as Bob's measurements: choosing the latter to be projective and non-degenerate allows self-testing Alice's prepared states. With (<ref>) and (<ref>), and according to Born's rule, (<ref>) can be written as 𝒲 = ∑_x,y w_xy( ρ_x M_y ) , where M_y = M_0|y - M_1|y = μ_y + (1-|μ_y|) v⃗_y ·σ⃗ with v⃗_y = v⃗_y^(0) if μ_y ≤ 0, v⃗_y^(1) if μ_y ≥ 0. With (<ref>), (<ref>) can be further transformed as 𝒲 = ∑_x,y w_xy[ μ_y + (1-|μ_y|) v⃗_y ·m⃗_x ] = ∑_y[ μ_y W_y + (1-|μ_y|) v⃗_y ·( ∑_x w_xym⃗_x ) ] , where W_y = ∑_x w_xy . Similarly to the reasoning in Section <ref>, the maximum of (<ref>) is obtained by setting v⃗_y = z⃗_y/|z⃗_y| with z⃗_y = ∑_x w_xym⃗_x , under the condition that z⃗_y ≠ 0 for any y, and v⃗_y can be chosen arbitrarily for maximising 𝒲 if z⃗_y = 0 for some y. In either case, max_m⃗_x,v⃗_y,μ_y𝒲 = max_m⃗_x,μ_y∑_y[ μ_y W_y + (1-|μ_y|) |z⃗_y| ] . In case |z⃗_y| > |W_y| for a given y and for a given set of vectors m⃗_x, μ_y = 0 will give the maximal 𝒲 for that y in (<ref>); similarly, if |z⃗_y| < |W_y| (note that this may not occur if W_y = 0), then μ_y = W_y will correspond to the maximum. One can then consider these two options for μ_y for each y for which W_y ≠ 0, maximise 𝒲 in m⃗_x and v⃗_y for each configuration of the options, and compare the resulting values of 𝒲 for finding the maximal 𝒲 in μ_y. This is sufficient, because in case one takes an optimal set of vectors m⃗_x and v⃗_y resulting from this procedure, one will recover the maximum value of 𝒲 for that set of vectors even if |z⃗_y| equals to |W_y| for some y so that the value of the optimal corresponding μ_y is ill-defined. The fact that |z⃗_y| = |W_y| for a given y, which is relevant for self-testing, will be indicated by a direct check. Furthermore, |z⃗_y| = |W_y| for a given y and for an optimal set of vectors m⃗_x and v⃗_y cannot be the case if the maximal 𝒲 is not invariant under changing the value of the corresponding μ_y. We first notice that W_2 = W_3 = 0, so that μ_2 = μ_3 = 0 can be assumed for the search of the maximal 𝒲. The case of μ_1 = 0 is discussed in Section <ref> and leads to (<ref>) with (<ref>)-(<ref>) for m⃗_x. In this case, |z⃗_2| = |z⃗_3| = 4 √(cos^2θ/c_1^2+4cos^2θ + sin^2θ/c_2^2+4sin^2θ) = 8/√(2(c_1^2+c_2^2+4)) > 0 , so that |z⃗_2| ≠ |W_2| and |z⃗_3| ≠ |W_3|. Instead of checking the relation between |z⃗_1| and |W_1| for the optimal vectors corresponding to μ_1 = 0, we consider the alternative option μ_1 = W_1 for W_1 ≠ 0 and |μ_1| = 1 for W_1 = 0 (which is the case for c_1 = c_2) while keeping μ_2 = μ_3 = 0. For assessing this option, we return to (<ref>), which then takes the form 𝒲 = | ∑_x w_x1| + v⃗_2 ·( ∑_x w_x2m⃗_x ) + v⃗_3 ·( ∑_x w_x3m⃗_x ) = | ∑_x w_x1| + ∑_x m⃗_x ·( w_x2v⃗_2 + w_x3v⃗_3 ) . For the optimal choice of m⃗_x, given by (<ref>), this simplifies to 𝒲 = | ∑_x w_x1| + ∑_x | w_x2v⃗_2 + w_x3v⃗_3 | = 2 |c_1-c_2| + 2 | v⃗_2 + v⃗_3 | + 2 | v⃗_2 - v⃗_3 | . It is easy to see that this expression is maximised in v⃗_2 and v⃗_3 by choosing these unit vectors to be orthogonal; in this case, 𝒲 = 2 |c_1-c_2| + 4√(2) =: Q' . is obtained. For general values of c_1 and c_2, Q' can be greater than Q in (<ref>), which means that a degenerate measurement as Bob's first one can in fact be optimal. 
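As a quick numerical illustration of this point (the values of c_1 and c_2 below are arbitrary):

```python
import numpy as np

Q  = lambda c1, c2: 2 * np.sqrt(2 * (c1**2 + c2**2 + 4))   # projective first setting
Qp = lambda c1, c2: 2 * abs(c1 - c2) + 4 * np.sqrt(2)      # degenerate choice mu_1 = W_1

for c1, c2 in [(3.0, -3.0), (2.0, -1.0), (1.0, 1.0)]:
    better = "degenerate" if Qp(c1, c2) > Q(c1, c2) else "projective"
    print(c1, c2, round(Q(c1, c2), 3), round(Qp(c1, c2), 3), better)
```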
However, if we do not allow for arbitrary c_1 and c_2 in (<ref>) but substitute (<ref>)-(<ref>) linking them to the parameter B of a semi-SIC POVM, the only solution to Q = Q' in terms of B will be B = 1/16, lying just outside the domain B ∈ (1/16,1/12]. If we add that Q(B=1/12) = 4 √(3) > Q'(B=1/12) = 4 √(2) , we see that Q > Q' for all B ∈ (1/16,1/12]. We thus conclude that μ_1 = 0 is indeed the optimal choice for maximising 𝒲 whenever the witness matrix w is chosen for the purpose of self-testing a semi-SIC POVM. 10 holevo2003 Alexander S Holevo. “Statistical structure of quantum theory”. https://dx.doi.org/10.1007/3-540-44998-1Volume 67. Springer Science & Business Media.  (2003). peres1997book Asher Peres. “Quantum theory: concepts and methods”. https://dx.doi.org/10.1007/0-306-47120-5Springer.  (1997). nielsen2002 Michael A Nielsen and Isaac Chuang. “Quantum computation and quantum information”. https://dx.doi.org/10.1017/CBO9780511976667Cambridge University Press.  (2002). shang2018 Jiangwei Shang, Ali Asadian, Huangjun Zhu, and Otfried Gühne. “Enhanced entanglement criterion via symmetric informationally complete measurements”. https://dx.doi.org/10.1103/PhysRevA.98.022309Physical Review A 98, 022309 (2018). bae2019 Joonwoo Bae, Beatrix C Hiesmayr, and Daniel McNulty. “Linking entanglement detection and state tomography via quantum 2-designs”. https://dx.doi.org/10.1088/1367-2630/aaf8cfNew Journal of Physics 21, 013012 (2019). dieks1988 Dennis Dieks. “Overlap and distinguishability of quantum states”. https://dx.doi.org/10.1016/0375-9601(88)90840-7Physics Letters A 126, 303–306 (1988). peres1988 Asher Peres. “How to differentiate between non-orthogonal states”. https://dx.doi.org/10.1016/0375-9601(88)91034-1Physics Letters A 128, 19 (1988). bennett1992 Charles H Bennett. “Quantum cryptography using any two nonorthogonal states”. https://dx.doi.org/10.1103/PhysRevLett.68.3121Physical Review Letters 68, 3121 (1992). renes2004 Joseph M Renes, Robin Blume-Kohout, Andrew J Scott, and Carlton M Caves. “Symmetric informationally complete quantum measurements”. https://dx.doi.org/10.1063/1.1737053Journal of Mathematical Physics 45, 2171–2180 (2004). zauner2011 Gerhard Zauner. “Quantum designs: Foundations of a noncommutative design theory”. https://dx.doi.org/10.1142/S0219749911006776International Journal of Quantum Information 9, 445–507 (2011). fuchs2017 Christopher A Fuchs, Michael C Hoang, and Blake C Stacey. “The SIC question: History and state of play”. https://dx.doi.org/10.3390/axioms6030021Axioms 6, 21 (2017). dariano2004 GM d’Ariano, P Perinotti, and MF Sacchi. “Informationally complete measurements and group representation”. https://dx.doi.org/10.1088/1464-4266/6/6/005Journal of Optics B: Quantum and Semiclassical Optics 6, S487 (2004). appleby2017 Marcus Appleby, Steven Flammia, Gary McConnell, and Jon Yard. “SICs and algebraic number theory”. https://dx.doi.org/10.1007/s10701-017-0090-7Foundations of Physics 47, 1042–1059 (2017). wootters89 William K Wootters and Brian D Fields. “Optimal state-determination by mutually unbiased measurements”. https://dx.doi.org/https://doi.org/10.1016/0003-4916(89)90322-9Annals of Physics 191, 363–381 (1989). scott2006 Andrew J Scott. “Tight informationally complete quantum measurements”. https://dx.doi.org/10.1088/0305-4470/39/43/009Journal of Physics A: Mathematical and General 39, 13507 (2006). fuchs2003 Christopher A Fuchs and Masahide Sasaki. 
“Squeezing quantum information through a classical channel: measuring the "quantumness" of a set of quantum states”. Quantum Information & Computation 3, 377–404 (2003). url: <https://dl.acm.org/doi/abs/10.5555/2011544.2011545>. renes2005 Joseph M Renes. “Equiangular spherical codes in quantum cryptography”. Quantum Information & Computation 5, 81–92 (2005). url: <https://dl.acm.org/doi/abs/10.5555/2011608.2011616>. brunner2013dimension Nicolas Brunner, Miguel Navascués, and Tamás Vértesi. “Dimension witnesses and quantum state discrimination”. https://dx.doi.org/10.1103/PhysRevLett.110.150501Physical Review Letters 110, 150501 (2013). acin2016optimal Antonio Acín, Stefano Pironio, Tamás Vértesi, and Peter Wittek. “Optimal randomness certification from one entangled bit”. https://dx.doi.org/10.1103/PhysRevA.93.040102Physical Review A 93, 040102 (2016). andersson2018 Ole Andersson, Piotr Badziąg, Irina Dumitru, and Adán Cabello. “Device-independent certification of two bits of randomness from one entangled bit and Gisin's elegant Bell inequality”. https://dx.doi.org/10.1103/PhysRevA.97.012314Physical Review A 97, 012314 (2018). tavakoli2020self Armin Tavakoli, Massimiliano Smania, Tamás Vértesi, Nicolas Brunner, and Mohamed Bourennane. “Self-testing nonprojective quantum measurements in prepare-and-measure experiments”. https://dx.doi.org/10.1126/sciadv.aaw6664Science Advances 6, eaaw6664 (2020). mironowicz2019 Piotr Mironowicz and Marcin Pawłowski. “Experimentally feasible semi-device-independent certification of four-outcome positive-operator-valued measurements”. https://dx.doi.org/10.1103/PhysRevA.100.030301Physical Review A 100, 030301 (2019). Geng2021 Isabelle Jianing Geng, Kimberly Golubeva, and Gilad Gour. “What are the minimal conditions required to define a symmetric informationally complete generalized measurement?”. https://dx.doi.org/10.1103/PhysRevLett.126.100401Physical Review Letters 126, 100401 (2021). appleby2007symmetric DM Appleby. “Symmetric informationally complete measurements of arbitrary rank”. https://dx.doi.org/10.1134/S0030400X07090111Optics and Spectroscopy 103, 416–428 (2007). gallego2010device Rodrigo Gallego, Nicolas Brunner, Christopher Hadley, and Antonio Acín. “Device-independent tests of classical and quantum dimensions”. https://dx.doi.org/10.1103/PhysRevLett.105.230501Physical Review Letters 105, 230501 (2010). acin2007device Antonio Acín, Nicolas Brunner, Nicolas Gisin, Serge Massar, Stefano Pironio, and Valerio Scarani. “Device-independent security of quantum cryptography against collective attacks”. https://dx.doi.org/10.1103/PhysRevLett.98.230501Physical Review Letters 98, 230501 (2007). scarani2012device Valerio Scarani. “The device-independent outlook on quantum physics”. https://dx.doi.org/10.2478/v10155-012-0003-4Acta Physica Slovaca 62, 347–409 (2012). mayers2004 Dominic Mayers and Andrew Yao. “Self testing quantum apparatus”. Quantum Information & Computation 4, 273–286 (2004). url: <https://dl.acm.org/doi/abs/10.5555/2011827.2011830>. bell1964 John S Bell. “On the Einstein Podolsky Rosen paradox”. https://dx.doi.org/10.1103/PhysicsPhysiqueFizika.1.195Physics Physique Fizika 1, 195 (1964). brunner2014bell Nicolas Brunner, Daniel Cavalcanti, Stefano Pironio, Valerio Scarani, and Stephanie Wehner. “Bell nonlocality”. https://dx.doi.org/10.1103/RevModPhys.86.419Reviews of Modern Physics 86, 419 (2014). pawlowski2011semi Marcin Pawłowski and Nicolas Brunner. “Semi-device-independent security of one-way quantum key distribution”. 
https://dx.doi.org/10.1103/PhysRevA.84.010302Physical Review A 84, 010302 (2011). brask2017megahertz Jonatan Bohr Brask, Anthony Martin, William Esposito, Raphael Houlmann, Joseph Bowles, Hugo Zbinden, and Nicolas Brunner. “Megahertz-rate semi-device-independent quantum random number generators based on unambiguous state discrimination”. https://dx.doi.org/10.1103/PhysRevApplied.7.054018Physical Review Applied 7, 054018 (2017). van2017semi Thomas Van Himbeeck, Erik Woodhead, Nicolas J Cerf, Raúl García-Patrón, and Stefano Pironio. “Semi-device-independent framework based on natural physical assumptions”. https://dx.doi.org/10.22331/q-2017-11-18-33Quantum 1, 33 (2017). chaves2015device Rafael Chaves, Jonatan Bohr Brask, and Nicolas Brunner. “Device-independent tests of entropy”. https://dx.doi.org/10.1103/PhysRevLett.115.110501Physical Review Letters 115, 110501 (2015). jones2022 Caroline L Jones, Stefan L Ludescher, Albert Aloy, and Markus P Mueller. “Theory-independent randomness generation with spacetime symmetries”. https://dx.doi.org/10.48550/arXiv.2210.14811arXiv:2210.14811 (2022). tavakoli2018self Armin Tavakoli, Jędrzej Kaniewski, Tamás Vértesi, Denis Rosset, and Nicolas Brunner. “Self-testing quantum states and measurements in the prepare-and-measure scenario”. https://dx.doi.org/10.1103/PhysRevA.98.062307Physical Review A 98, 062307 (2018). farkas2019self Máté Farkas and Jędrzej Kaniewski. “Self-testing mutually unbiased bases in the prepare-and-measure scenario”. https://dx.doi.org/10.1103/PhysRevA.99.032316Physical Review A 99, 032316 (2019). divianszky2022 Péter Diviánszky, István Márton, Erika Bene, and Tamás Vértesi. “Certification of qubits in the prepare-and-measure scenario with large input alphabet and connections with the Grothendieck constant”. https://dx.doi.org/10.48550/arXiv.2211.17185arXiv:2211.17185 (2022). navascues2023 Miguel Navascués, Károly F. Pál, Tamás Vértesi, and Mateus Araújo. “Self-testing in prepare-and-measure scenarios and a robust version of Wigner's theorem”. https://dx.doi.org/10.48550/arXiv.2306.00730arXiv:2306.00730 (2023). nayak1999 Ashwin Nayak. “Optimal lower bounds for quantum automata and random access codes”. In 40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039). https://dx.doi.org/10.1109/SFFCS.1999.814608Pages 369–376. IEEE (1999). ambainis2002 Andris Ambainis, Ashwin Nayak, Amnon Ta-Shma, and Umesh Vazirani. “Dense quantum coding and quantum finite automata”. https://dx.doi.org/10.1145/581771.581773Journal of the ACM (JACM) 49, 496–511 (2002). ambainis2008quantum Andris Ambainis, Debbie Leung, Laura Mancinska, and Maris Ozols. “Quantum random access codes with shared randomness”. https://dx.doi.org/10.48550/arXiv.0810.2937arXiv:0810.2937 (2008). alves2023biased G Pereira Alves, N Gigena, and J Kaniewski. “Biased random access codes”. https://dx.doi.org/10.48550/arXiv.2302.08494arXiv:2302.08494 (2023). schwinger1960 Julian Schwinger. “Unitary operator bases”. https://dx.doi.org/10.1073/pnas.46.4.570Proceedings of the National Academy of Sciences 46, 570–579 (1960). bengtsson2007 Ingemar Bengtsson. “Three ways to look at mutually unbiased bases”. In AIP Conference Proceedings. https://dx.doi.org/10.1063/1.2713445Volume 889, pages 40–51. American Institute of Physics (2007). tavakoli2019enabling Armin Tavakoli, Denis Rosset, and Marc-Olivier Renou. “Enabling computation of correlation bounds for finite-dimensional quantum systems via symmetrization”. 
https://dx.doi.org/10.1103/PhysRevLett.122.070501Physical Review Letters 122, 070501 (2019). Tavakoli2020b Armin Tavakoli, Ingemar Bengtsson, Nicolas Gisin, and Joseph M. Renes. “Compounds of symmetric informationally complete measurements and their application in quantum key distribution”. https://dx.doi.org/10.1103/PhysRevResearch.2.043122Physical Review Research 2, 043122 (2020). pauwels2022 Jef Pauwels, Stefano Pironio, Emmanuel Zambrini Cruzeiro, and Armin Tavakoli. “Adaptive advantage in entanglement-assisted communications”. https://dx.doi.org/10.1103/PhysRevLett.129.120504Physical Review Letters 129, 120504 (2022). DD “Description of digonal disphenoid”. <https://polytope.miraheze.org/wiki/Digonal_disphenoid>. Accessed: 2023-03-18. dariano2005 Giacomo Mauro D'Ariano, Paoloplacido Lo Presti, and Paolo Perinotti. “Classical randomness in quantum measurements”. https://dx.doi.org/10.1088/0305-4470/38/26/010Journal of Physics A: Mathematical and General 38, 5979 (2005). ahrens2014 Johan Ahrens, Piotr Badziąg, Marcin Pawłowski, Marek Żukowski, and Mohamed Bourennane. “Experimental Tests of Classical and Quantum Dimensionality”. https://dx.doi.org/10.1103/PhysRevLett.112.140401Physical Review Letters 112, 140401 (2014). miklin2021 Nikolai Miklin and Michał Oszmaniec. “A universal scheme for robust self-testing in the prepare-and-measure scenario”. https://dx.doi.org/10.22331/q-2021-04-06-424Quantum 5, 424 (2021). supic2016 Ivan Šupić and Matty J Hoban. “Self-testing through EPR-steering”. https://dx.doi.org/10.1088/1367-2630/18/7/075006New Journal of Physics 18, 075006 (2016). sarkar2022 Shubhayan Sarkar, Debashis Saha, and Remigiusz Augusiak. “Certification of incompatible measurements using quantum steering”. https://dx.doi.org/10.1103/physreva.106.l040402Physical Review A 106, L040402 (2022). sarkar2023 Shubhayan Sarkar, Jakub J. Borkała, Chellasamy Jebarathinam, Owidiusz Makuta, Debashis Saha, and Remigiusz Augusiak. “Self-testing of any pure entangled state with the minimal number of measurements and optimal randomness certification in a one-sided device-independent scenario”. https://dx.doi.org/10.1103/physrevapplied.19.034038Physical Review Applied 19, 034038 (2023). Schrodinger1935 E. Schrödinger. “Discussion of probability relations between separated systems”. https://dx.doi.org/10.1017/s0305004100013554Mathematical Proceedings of the Cambridge Philosophical Society 31, 555–563 (1935). Wiseman2007 H. M. Wiseman, S. J. Jones, and A. C. Doherty. “Steering, entanglement, nonlocality, and the Einstein-Podolsky-Rosen paradox”. https://dx.doi.org/10.1103/physrevlett.98.140402Physical Review Letters 98, 140402 (2007). Vertesi2009 Tamás Vértesi and Károly F. Pál. “Bounding the dimension of bipartite quantum systems”. https://dx.doi.org/10.1103/PhysRevA.79.042106Physical Review A 79, 042106 (2009).
http://arxiv.org/abs/2306.04156v1
20230607051206
Spin Squeezing with Arbitrary Quadratic Collective-Spin Interaction
[ "Zhiyao Hu", "Qixian Li", "Xuanchen Zhang", "Long-Gang Huang", "He-bin Zhang", "Yong-Chun Liu" ]
quant-ph
[ "quant-ph" ]
These authors contributed equally to this work. These authors contributed equally to this work. [email protected] ^1 State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China ^2School of Physics, Xi'an Jiaotong University, Xi'an 710049, China ^3China Fire and Rescue Institute, Beijing 102202, China ^4Frontier Science Center for Quantum Information, Beijing 100084, China Spin squeezing is vitally important in quantum metrology and quantum information science. The noise reduction resulting from spin squeezing can surpass the standard quantum limit and even reach the Heisenberg Limit (HL) in some special circumstances. However, systems that can reach the HL are very limited. Here we study the spin squeezing in atomic systems with a generic form of quadratic collective-spin interaction, which can be described by the Lipkin-Meshkov-Glick(LMG) model. We find that the squeezing properties are determined by the initial states and the anisotropic parameters. Moreover, we propose a pulse rotation scheme to transform the model into two-axis twisting model with Heisenberg-limited spin squeezing. Our study paves the way for reaching HL in a broad variety of systems. Spin Squeezing with Arbitrary Quadratic Collective-Spin Interaction Yong-Chun Liu^1,4 July 31, 2023 =================================================================== APS/123-QED § INTRODUCTION Squeezed spin states (SSSs) <cit.> are entangled quantum states of a collection of spins in which the uncertainty of one spin component perpendicular to the mean spin direction is reduced below the standard quantum limit (SQL). Owing to its property of reduced spin fluctuations, it has a variety of applications in the study of many-body entanglement <cit.>, high-precision measurements <cit.>, and quantum information science <cit.>. Many methods have been proposed to realize spin squeezing, such as atom-light interaction <cit.> , and quantum nondemolition measurement <cit.>. One important way to deterministically generate spin squeezing is utilizing the dynamical evolution of squeezing interaction, which is accomplished via collective-spin systems with nonlinear interaction <cit.>. Typical squeezing interactions include one-axis twisting (OAT) interaction and two-axis twisting (TAT) interaction. The noise reduction of the TAT model can reach the Heisenberg limit (HL), but the physical realization of the TAT model is difficult. It is shown that the OAT model can be transformed into TAT model using repeated Rabi pulses <cit.>, but the more general cases with other types of quadratic collective-spin interaction are still unknown. Except for OAT and TAT interactions, the more general form of quadratic collective-spin interaction can be described by the Lipkin-Meshkov-Glick (LMG) model. The LMG model was first introduced in nuclear physics <cit.>, which provides a simple description of the tunneling of bosons between two degenerate levels and can thus be used to describe many physical systems such as two-mode Bose-Einstein condensates <cit.> or Josephson junctions <cit.>. A recent study shows that the LMG model could be used in generating spin squeezing and having 6.8(4) dB metrological gain beyond the Standard Quantum Limit with suitable time-reversal control in cavity QED <cit.>. However, it requires a tunable Hamiltonian by switching to another set of laser frequencies on the cavity, which is not always accessible. 
A more general investigation of the LMG model in spin squeezing is required and whether it could reach Heisenberg-limited noise reduction remains unknown. Here we study the spin squeezing properties in the LMG model with different anisotropic parameters. We find that initial state and the anisotropic parameter play important roles in the spin squeezing. We propose an implementable way to transform the LMG model into effective TAT model by making use of rotation pulses along different axes on the Bloch sphere, which gives a convenient way to generate efficient spin squeezing reaching the HL. We also analyze the influence of noises and find that our scheme is robust to fluctuations in pulse areas and pulse separations. The paper is organized as follows. In Sec. (<ref>), we first introduce the system model of the quadratic collective-spin interaction, which can be described by the LMG model. In Sec. (<ref>), we investigate the performance of spin squeezing in the LMG mode and present the optimal initial state for spin squeezing in the LMG model. In Sec. (<ref>), we prove that the designed rotation pulse method can transform the LMG model into effective TAT interaction. We also show that the method is robust to different noises according to numerical simulations. § THE SYSTEM MODEL We consider a system of mutually interacting spin-1/2 particles described by the following Hamiltonian: H = ∑_j<kχ_αβσ_α^jσ_β^k, where σ_α^j is the Pauli operator of the j-th spin and α,β∈{x,y,z}. The parameter χ_αβ characterize the strength of the interaction in different directions. To ensure the Hermicity of the Hamiltonian, we have χ_αβ = χ_βα. Here we have the assumption that the interactions between individual spins are the same. This assumption holds when there are all-to-all interactions rather than just dipole-dipole interactions, which is valid under some systems such as nuclear system <cit.>, Cavity QED <cit.>, ion trap <cit.>. Now we introduce the collective spin operators S_α = ħ/2∑_jσ_α^j. Let ħ = 1, using σ_ασ_β = i∑_γε_αβγσ_γ+δ_αβ, where ε_αβγ is the Levi-Civita symbol and δ_αβ is the Kronecker delta, Eq. (<ref>) becomes H = 2∑_α,β∈{x,y,z}χ_αβS_αS_β+H_0, where H_0 is a constant and can be neglected. H preserves the magnitude of the total spin S^2 = ∑_αS^2_α, namely, [H,S^2] = 0, which means S^2 is a constant. The Hamiltonian can be written as H = S^TA S, where A_αβ = 2χ_αβ. A is a real symmetric matrix, which means it can be diagonalized by a linear transformation: A = Q^-1D Q, in which Q is an orthogonal matrix and D is a diagonal matrix whose nonzero elements are eigenvalues of A. Let S̃ = QS, we can turn the Hamiltonian into the canonical form: H = ∑_α∈{x,y,z}χ_αS̃_α^2, For convenience, in the following we redefine the spin operator S_α by omitting the tilde. We select the corresponding S̃_̃α̃ of the largest χ_α as S_x and the minimum as S_z, i.e., χ_x≥χ_y≥χ_z. Using the relation ∑_αS^2_α = S^2, the transformed Hamiltonian reads H = (χ_x-χ_z)S_x^2+(χ_y-χ_z)S_y^2+S^2. Let χ = χ_x-χ_z, γ = (χ_y-χ_z)/(χ_x-χ_z) and ignore the constant term, we obtain the general form of the Hamiltonian of the LMG model H = χ(S_x^2+γ S_y^2) . Therefore, any system with Hamiltonian in the form of Eq. (<ref>) can be transformed to the standard form of the LMG model as Eq. (<ref>). What's worth mentioning is that we ignore the linear interaction between the spin and external magnetic field. 
The reason is that linear interaction itself can't generate spin squeezing, and the linear interaction could be easily canceled in the experimental system using suitable pulse sequences. Under the condition χ_x≥χ_y≥χ_z, we have 0≤γ≤1. Furthermore, note that if 0.5<γ≤ 1, we have: H = χ(S_x^2+γ S_y^2 - S^2) = -χ[S_z^2 + (1-γ)S_y^2]. which is equivalent to χ [S_x^2 + (1-γ)S_y^2] if we switch the x-axis and the z-axis. Hence, we only need to consider the situation when 0≤γ≤ 0.5. Specially, when γ=0 (γ=0.5), the LMG Hamiltonian reduces to the OAT (TAT) Hamiltonian. § SPIN SQUEEZING OF THE LMG MODEL To describe the properties of SSS, we investigate the squeezing parameter given by Kitagawa and Ueda <cit.>: ξ^2 = 4(Δ S_n⃗_⊥)^2/N, where subscript n⃗_⊥ refers to an arbitrary axis perpendicular to the mean spin direction, where the minimum value of (Δ S)^2 is obtained. The inequality ξ^2 < 1 indicates that the state is squeezed. The Hamiltonian of the LMG model is a typical kind of nonlinear interaction, which produces SSS by time evolution. We choose the coherent spin states as the initial states, which can be described by |θ,ϕ⟩ = e^iθ(S_xsinϕ - S_ycosϕ)|j,j⟩, where θ is the angle between the z-axis and the collective-spin vector (polar angle), while ϕ is the angle between the x-axis and the vertical plane containing the collective-spin vector (azimuth angle). Typical examples of the time evolution of ξ^2 are presented in Fig. <ref>. It reveals that the squeezing parameter reaches a local minimum in a short time scale. For a certain γ, the minimum squeezing parameter and the corresponding time varies with the initial θ and ϕ. In Fig. <ref>(a), we plot the color map of the minimum squeezing parameter as functions of the initial ϕ and θ for fixed γ (for example, γ = 0.25). It reveals that the optimal initial state with best squeezing is |θ,ϕ⟩=|π/2,π/2⟩. Similarly, we change γ and plot the color maps, with the optimal initial ϕ and θ plotted in Fig. <ref>(b) and Fig. <ref> (c). We can see that when γ varies from 0 to 0.5, the LMG model obtains the optimal squeezing when the initial state is |π/2,π/2⟩. This can be understood in an intuitive sense: when 0≤γ≤ 0.5, we have: H = χ[(1-γ)S_x^2 - γ S_z^2+γ S^2], which can be seen as two counter-twisting squeezing acting around the x -axis and z-axis, respectively, and along the y-axis these two effects reach the optimal cases at the same time. Thus the optimal initial state is always |π/2,π/2⟩. Now we let the initial state be the optimal case |π/2,π/2⟩, which could be realized through optical pumping and a π/2 pulse along the x-axis. And we track how the minimum squeezing parameter ξ^2 changes when γ varies from 0 to 0.5. The results are shown in Fig. <ref>. We can conclude that the squeezing performance monotonically depends on γ for 0 ≤γ≤ 0.5. When γ = 0.5, the LMG model attains its minimum ξ^2, and the corresponding time is also the shortest, corresponding to the TAT squeezing. Therefore, to reach the best squeezing performance, we should ensure the anisotropic parameter approaches γ=0.5. However, when the anisotropic parameter takes other values, the squeezing performance degrades. To solve this problem, we propose to introduce rotation pulses capable of transforming the LMG model into TAT model. 
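The dynamics of this section can be reproduced with a short numerical sketch, given below for illustration. The helper names collective_spin and xi2, and the sample choices N = 40, χ = 1 and γ = 0.25, are our own; the sketch builds the collective operators in the Dicke basis, evolves the coherent state |π/2, π/2⟩ under H = χ(S_x^2+γ S_y^2), and locates the first minimum of the squeezing parameter ξ^2.

```python
import numpy as np
from scipy.linalg import expm

def collective_spin(N):
    # Collective spin operators S_x, S_y, S_z for N spin-1/2 particles in the
    # Dicke basis |j, m>, with j = N/2 and m = j, j-1, ..., -j.
    s = N / 2
    m = s - np.arange(N + 1)
    cp = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # <j, m+1|S_+|j, m>
    Sp = np.diag(cp, k=1)
    Sx = ((Sp + Sp.T) / 2).astype(complex)
    Sy = (Sp - Sp.T) / (2 * 1j)
    Sz = np.diag(m).astype(complex)
    return Sx, Sy, Sz

def xi2(psi, ops, N):
    # Kitagawa-Ueda parameter 4 (Delta S_perp)^2_min / N for a pure state psi
    Sx, Sy, Sz = ops
    mean = np.array([np.vdot(psi, O @ psi).real for O in ops])
    n0 = mean / np.linalg.norm(mean)
    a = np.array([1.0, 0.0, 0.0]) if abs(n0[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    n1 = np.cross(n0, a); n1 /= np.linalg.norm(n1)
    n2 = np.cross(n0, n1)                              # n1, n2 span the transverse plane
    S1 = n1[0] * Sx + n1[1] * Sy + n1[2] * Sz
    S2 = n2[0] * Sx + n2[1] * Sy + n2[2] * Sz
    e1, e2 = np.vdot(psi, S1 @ psi).real, np.vdot(psi, S2 @ psi).real
    v1 = np.vdot(psi, S1 @ S1 @ psi).real - e1**2
    v2 = np.vdot(psi, S2 @ S2 @ psi).real - e2**2
    c12 = 0.5 * np.vdot(psi, (S1 @ S2 + S2 @ S1) @ psi).real - e1 * e2
    cov = np.array([[v1, c12], [c12, v2]])
    return 4 * np.linalg.eigvalsh(cov)[0] / N

N, chi, gamma = 40, 1.0, 0.25
Sx, Sy, Sz = collective_spin(N)
H = chi * (Sx @ Sx + gamma * Sy @ Sy)

psi0 = np.zeros(N + 1, dtype=complex); psi0[0] = 1.0   # |j, j>
psi0 = expm(1j * (np.pi / 2) * Sx) @ psi0              # coherent state |theta=pi/2, phi=pi/2>

evals, evecs = np.linalg.eigh(H)
c0 = evecs.conj().T @ psi0
ts = np.linspace(0.0, 0.4, 400) / chi                  # window containing the first minimum
xis = [xi2(evecs @ (np.exp(-1j * evals * t) * c0), (Sx, Sy, Sz), N) for t in ts]
print("min xi^2 = %.4f at chi*t = %.4f" % (min(xis), chi * ts[int(np.argmin(xis))]))
```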
§ TRANSFORMING LMG INTO TAT Inspired by the previous study of transforming the OAT interaction into the TAT interaction <cit.>, our idea is to transform the LMG model into the TAT model by making use of multiple π/2 pulses, which can be realized using the coupling term Ω_αS_α (α=x,y,z). In the Rabi limit | Ω| ≫ N | χ|, nonlinear interaction can be neglected while the collective spin undergoes driven Rabi oscillation. By making use of a multi-pulse sequence along the α-axis (α=x,y,z), we can rotate the spin along the α-axis and affect the dynamic of squeezing. A π/2 pulse corresponds to ∫_-∞^+∞Ω_α(t)dt=π/2, which leads to the result that R_α,-π/2e^iS^2_βR_α,π/2=e^iS^2_κ, where R_α,θ=e^-iθ S_α and κ is the axis that perpendicular to the α-axis and β-axis. The multi-pulse sequence is periodic, and the frequency is determined by γ and the axis we choose. As shown in Fig. <ref> (a), each period is made up of the following: a -π/2 pulse along the α-axis, a free evolution for δ t_1, a π/2 along the α-axis, and a free evolution for δ t_2. The period is t_c=δt_1+δt_2, neglecting the time needed for applying the two π/2 pulses. Figure <ref> (b) shows that the cyclic ±π/2 could be viewed as rotations on the Bloch sphere. By adjusting the relationship between δ t_1 and δ t_2, we can transform the LMG model into TAT. One general Hamiltonian for TAT interaction is H_TAT=S_xS_y+S_yS_x for θ=π/2, ϕ=±π/4 <cit.>. By changing the initial states and twisting axes, the TAT interaction could also be expressed as H_TAT∝S_y^2-S_x^2, H_TAT∝ S_z^2-S_y^2 and H_TAT∝S_x^2-S_z^2. In Bloch sphere, the first expression indicates ϕ=π/2,θ=0, while the middle expression indicates ϕ=0,θ=π/2 and the last expression indicates that ϕ=π/2,θ=π/2. According to S^2_x+S^2_y+S^2_z=S^2, S^2_x-S^2_z=2(S^2_x+0.5S^2_y)-S^2, S^2 will not influence the properties of spin squeezing, we simply ignore it. For H_LMG=χ(S_x^2+γS_y^2), if we choose the z -axis to be the α-axis, the time evolution operates for a single period is the following: U_z = e^-i(S^2_x+γ S^2_y)χ t_1R_z,-π/2e^-i(S^2_x+γ S^2_y)χ t_2R_z,π/2 = e^-i(S^2_x+ γ S^2_y)χ t_1e^-i(S^2_y+ γ S^2_x)χ t_2. Using the Baker-Campbell-Hausdorff formula, we find U_z≈ e^-iχ (S_x^2(t_1+γ t_2)+S_y^2(γ t_1+t_2)) for small t. To transform the LMG model into TAT twisting, the coefficients should satisfy t_1+γ t_2/γ t_1+t_2=0.5 or 2. Then the relationship between γ and t_2/t_1 should satisfy t_2/t_1=γ -2/2γ -1 or 2γ -1/γ -2. Accordingly, we obtain the effective Hamiltonian H_z^eff = χ (γ +1)/3(S_x^2+2S_y^2), or H_z^eff = χ (γ +1)/3 (2S_x^2+S_y^2). Similarly, if we choose the y-axis to be the α-axis, we have the time evolution operator for a single period: U_y = e^-i(S_x^2+γ S_y^2)χ t_1R_y,-π /2e^-i(S_x^2+γ S_y^2)χ t_2R_y,π /2 = e^-i(S_x^2+γ S_y^2)χ t_1e^-i(S_z^2+γ S_y^2)χ t_2 To achieve TAT, we require t_2/t_1=(γ +1)/(-γ +2) or t_1/t_2=(γ +1)/(-γ +2). Then the resultant Hamiltonians read H_y^eff = χ (1-2γ )/3(2S_x^2+S_y^2), or H_y^eff = χ (2γ -1)/3 (S_x^2+2S_y^2). However, if we choose the x-axis to be the α-axis, we will find that for 0≤γ≤ 0.5, t_1/t_2≤ 0, which means it's impossible to transform the LMG model into TAT twisting making use of multi-pulse sequence along x-axis. The above pulse sequences are numerically verified with the results present in Fig. <ref>. To make the squeezing occur faster, we need to shorten the squeezing time, which means getting higher squeezing strength χ^eff. As Fig.  
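The transformation can be verified stroboscopically, as sketched below; this reuses collective_spin and xi2 from the previous sketch, and the values N = 20, γ = 0.25 and the cycle time are our own illustrative choices. We take the ratio t_2/t_1 = (2γ-1)/(γ-2), for which the effective Hamiltonian is χ(γ+1)/3 (2S_x^2+S_y^2), equal to S_x^2-S_z^2 up to a constant and an overall scale, so the initial state |π/2, π/2⟩ used above remains the appropriate one. The pulsed evolution and the evolution under the effective Hamiltonian should yield nearly the same minimal ξ^2, with the agreement tightening as the cycle time is shortened.

```python
import numpy as np
from scipy.linalg import expm

N, chi, gamma = 20, 1.0, 0.25
Sx, Sy, Sz = collective_spin(N)
H_lmg = chi * (Sx @ Sx + gamma * Sy @ Sy)

t1 = 1e-4 / chi                                   # short free-evolution segments
t2 = t1 * (2 * gamma - 1) / (gamma - 2)           # ratio for the z-axis pulse sequence
tc = t1 + t2
Rz = expm(-1j * (np.pi / 2) * Sz)                 # R_{z, pi/2}
U_cycle = expm(-1j * H_lmg * t1) @ Rz.conj().T @ expm(-1j * H_lmg * t2) @ Rz

H_eff = chi * (gamma + 1) / 3 * (2 * Sx @ Sx + Sy @ Sy)   # predicted effective TAT form
U_eff = expm(-1j * H_eff * tc)

psi0 = np.zeros(N + 1, dtype=complex); psi0[0] = 1.0
psi0 = expm(1j * (np.pi / 2) * Sx) @ psi0         # coherent state |pi/2, pi/2>

psi_p, psi_e = psi0.copy(), psi0.copy()
xi_p, xi_e = [], []
for _ in range(6000):                             # record xi^2 cycle by cycle
    psi_p = U_cycle @ psi_p
    psi_e = U_eff @ psi_e
    xi_p.append(xi2(psi_p, (Sx, Sy, Sz), N))
    xi_e.append(xi2(psi_e, (Sx, Sy, Sz), N))
print("min xi^2, pulsed LMG         : %.4f" % min(xi_p))
print("min xi^2, effective TAT form : %.4f" % min(xi_e))
```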
<ref> (a) shows, for 0≤γ≤ 0.5, pulse along z-axis gets higher effective strength, which is χ^eff_z=χ(1+ γ)/3. And the squeezing time of the z-pulse method is also shorter compared with the y-axis pulse method. Therefore, for a LMG model with arbitrary anisotropic parameter γ ranging from 0 to 0.5, we can transform it into TAT interaction by using multi-pulse along different axes, and the squeezing performance of the LMG model after the pulse sequences will also reach Heisenberg scaling, as good as the TAT case, as compared in Fig. <ref> (b). Our scheme is robust to different kinds of noises. We carry out the numerical simulation by adding Gaussian stochastic noises, i.e., assuming the fluctuating pulse areas, pulse separations, pulse stability, γ, atoms number N, and interaction strength χ, are subject to Gaussian distribution with a standard deviation of different ranges of the average value. The squeezing parameters of 100 independent simulations under different types of noises are respectively shown in Fig. <ref>. The numerical simulations show that our method is not only robust to internal systems noise such as uncertainty of determining γ, uncertainty of determining atoms number N, uncertainty of interaction strength χ but also external noise such as pulse areas noise, pulse separation noise, and pulse phase instability. Under certain kinds and ranges of noise, the best attainable squeezing of our method can almost achieve the optimal squeezing of the effective TAT dynamics. As for the spin decoherence, our method itself will not bring new resources of decoherence but extend the squeezing time, while the coherence time for atoms such as Dysprosium is very long in spin squeezing <cit.>, so we ignore the impact of extending evolution time for spin decoherence. § CONCLUSION In conclusion, we study the spin squeezing in systems with quadratic collective-spin interaction, which can be described by the LMG model. We find that the squeezing performance depends on the initial state and the anisotropic parameter. We show that the best initial state for H=χ (S^2_x+γ S^2_y) is |π/2,π/2⟩, which holds for different anisotropic parameter γ. We propose an implementable way with rotation pulses to transform the LMG model into TAT model with Heisenberg-limited spin squeezing. We find that pulse sequences applied along the z-axis will result in larger squeezing strength χ^eff_z=χ(1+γ)/3 compared to the pulse sequences along the y-axis χ^eff_y=χ (1-2γ)/3. Besides, our scheme is robust to noise in pulse areas and pulse separations. Our work will significantly increase the systems that could reach the Heisenberg scaling and will push the frontier of quantum metrology. § ACKNOWLEDGEMENT We thank Prof. Fu-li Li for helpful discussions. This work is supported by the Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B030330001), the National Natural Science Foundation of China (NSFC) (Grant Nos. 12275145, 92050110, 91736106, 11674390, and 91836302), and the National Key R&D Program of China (Grants No. 2018YFA0306504). 99 1 D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Phys. Rev. A 46, R6797 (1992). 2 M. Kitagawa and M. Ueda, Phys. Rev. A 47, 5138 (1993). 3 W. W. Ho and D. A. Abanin, Phys. Rev. B 95, 094302 (2017). 4 A. Sorensen, L.-M. Duan, J. I. Cirac, and P. Zoller, Nature 409, 63 (2001). 5 L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. 80, 517 (2008). 6 C. Orzel, A. K. Tuchman, M. L. Fenselau, M. Yasuda, and M. A. 
Kasevich, Science 291, 2386 (2001). 7 E. Pedrozo-Penafiel, S. Colombo, C. Shu, A. F. Adiyatullin, Z. Li, E. Mendez, B. Braverman, A. Kawasaki, D. Akamatsu, Y. Xiao, and V. Vuletic, Nature 588, 414 (2020). 8 R. Kaubruegger, P. Silvi, C. Kokail, R. van Bijnen, A. M. Rey, J. Ye, A. M. Kaufman, and P. Zoller, Phys. Rev. Lett. 123, 260505 (2019). 9 S. Colombo, E. Pedrozo-Penafiel, A. F. Adiyatullin, Z. Li, E. Mendez, C. Shu, and V. Vuletic, Nature Physics 18, 925 (2022). 10 J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, and J. J. Bollinger, Science 352, 1297 (2016). 35P. Cappellaro and M.D.Lukin, Phys.Rev.A.80.032311. 36D. Farfurnik, Y. Horowicz, and N. Bar-Gill, Phys.Rev.A.98.033409 11 L.-G. Huang, F. Chen, X. Li, Y. Li, R. Lü, and Y.-C. Liu, npj Quantum Information 7, 168 (2021). 12 S. Zhou and L. Jiang, Phys. Rev. Research 2, 013235 (2020). 13 H. Zhou, J. Choi, S. Choi, R. Landig, A. M. Douglas, J. Isoya, F. Jelezko, S. Onoda, H. Sumiya, P. Cappellaro, H. S. Knowles, H. Park, and M. D. Lukin, Phys. Rev. X 10, 031003 (2020). 14 C. D. Marciniak, T. Feldker, I. Pogorelov, R. Kaubruegger, D. V. Vasilyev, R. van Bijnen, P. Schindler, P. Zoller, R. Blatt, and T. Monz, Nature 603, 604 (2022). 15 T. Ilias, D. Yang, S. F. Huelga, and M. B. Plenio, PRX Quantum 3, 010354 (2022). 16 M. A. C. Rossi, F. Albarelli, D. Tamascelli, and M. G. Genoni, Phys. Rev. Lett. 125, 200505 (2020). 17 Q. Liu, L.-N. Wu, J.-H. Cao, T.-W. Mao, X.-W. Li, S.- F. Guo, M. K. Tey, and L. You, Nature Physics 18, 167 (2022). 18 S.-F. Guo, F. Chen, Q. Liu, M. Xue, J.-J. Chen, J.-H. Cao, T.-W. Mao, M. K. Tey, and L. You, Phys. Rev. Lett. 126, 060401 (2021). 19 F. Yang, Y.-C. Liu, and L. You, Phys. Rev. Lett. 125, 143601 (2020). 20 S. P. Nolan, S. S. Szigeti, and S. A. Haine, Phys. Rev. Lett. 119, 193601 (2017). 21 C. L. Degen, F. Reinhard, and P. Cappellaro, Rev. Mod. Phys. 89, 035002 (2017). 22 W. Qin, Y.-H. Chen, X. Wang, A. Miranowicz, and F. Nori, Nanophotonics 9, 4853 (2020). 23 A. Kuzmich, L. Mandel, and N. P. Bigelow, Phys. Rev. Lett. 85, 1594 (2000). 24 Y. C. Liu, Z. F. Xu, G. R. Jin, and L. You, Phys.Rev. Lett. 107, 013601 (2011). 25 S. Dusuel and J. Vidal, Phys. Rev. B 71, 224420 (2005). 26 A. Glick, H. Lipkin, and N. Meshkov, Nuclear Physics 62, 211 (1965). 27 H. Lipkin, N. Meshkov, and A. Glick, Nuclear Physics 62, 188 (1965) 28 J. Vidal, G. Palacios, and C. Aslangul, Phys. Rev. A 70, 062304 (2004). 29 J. Ma and X. Wang, Phys. Rev. A 80, 012318 (2009). 30 T. E. Lee, F. Reiter, and N. Moiseyev, Phys. Rev. Lett. 113, 250401 (2014). 31 J. I. Cirac, M. Lewenstein, K. Molmer, and P. Zoller, Phys. Rev. A 57, 1208 (1998). 32 I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013). 33 Zeyang Li, Simone Colombo, Chi Shu, Gustavo Velez, Saúl Pilatowsky-Cameo, Roman Schmied, Soonwon Choi, Mikhail Lukin, Edwin Pedrozo-Peñafiel and Vladan Vuletić, arXiv:2212.13880 [quant-ph]. 34Thomas Chalopin, Chayma Bouazza, Alexandre Evrard, Vasiliy Makhalov, Davide Dreon, Jean Dalibard, Leonid A. Sidorenkov and Sylvain Nascimbene, Nat Commun 9, 4955 (2018).
http://arxiv.org/abs/2306.06040v1
20230609170553
Reconstructing Human Expressiveness in Piano Performances with a Transformer Network
[ "Jingjing Tang", "Geraint Wiggins", "George Fazekas" ]
cs.SD
[ "cs.SD", "cs.LG", "eess.AS" ]
Reconstructing Human Expressiveness in Piano Performances with Transformer Jingjing Tang et al. Center for Digital Music, Queen Mary University of London Vrije Universiteit Brussel [email protected] Reconstructing Human Expressiveness in Piano Performances with a Transformer Network Jingjing Tang1 Geraint Wiggins12 György Fazekas1 This work is supported by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music. J.Tang is a research student supported jointly by the China Scholarship Council and Queen Mary University of London. G. Wiggins received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen”.We would like to thank Lele Liu and Jiawen Huang for their valuable feedback to improve our work. Received …; accepted … =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Capturing intricate and subtle variations in human expressiveness in music performance using computational approaches is challenging. In this paper, we propose a novel approach for reconstructing human expressiveness in piano performance with a multi-layer bi-directional Transformer encoder. To address the needs for large amounts of accurately captured and score-aligned performance data in training neural networks, we use transcribed scores obtained from an existing transcription model to train our model. We integrate pianist identities to control the sampling process and explore the ability of our system to model variations in expressiveness for different pianists. The system is evaluated through statistical analysis of generated expressive performances and a listening test. Overall, the results suggest that our method achieves state-of-the-art in generating human-like piano performances from transcribed scores, while fully and consistently reconstructing human expressiveness poses further challenges. § INTRODUCTION An expressive music performance goes beyond playing the notes in the score correctly. Following annotations in music sheets, performers interpret the music with different degrees of expressive control including articulation and dynamics to express emotions and provide an individual rendition of the music, resulting in different performance styles <cit.>. A common way of rendering expressive performances with computational models is to meaningfully tune the velocity and timing of notes in the score to reconstruct human expressiveness <cit.>. Generally modelling human expressiveness requires capturing the differences between scores and human performances in expressive features including tempo, timing, dynamics, and so on. Learning the subtle nuances in expression among individual pianists demands the model to learn much smaller perceivable differences within those expressive features. In recent years, deep learning (DL) models have shown promising results in music generation and representation learning. In particular, the Transformer architecture has gained popularity due to its ability to capture long-range dependencies and contextual information in sequential data. 
While many studies have successfully applied Transformer architecture to algorithmic music composition <cit.> and representation learning for symbolic music <cit.>, few works pay attention to modeling performance expressiveness independently. In the field of expressive performance rendering (EPR), recent studies have achieved convincing results for the purpose of reconstructing general human expressiveness and controlling style using DL architectures including Recurrent Neural Network <cit.>, Graph Neural Network <cit.> and conditional Variational Autoencoder <cit.>. These models require large-scale accurate alignments of well-annotated music scores and performances. However, due to the limited quality and size of the currently available datasets, including the Vienna 4x22 Piano Corpus <cit.> and ASAP <cit.>, these systems still have difficulty dealing with playing techniques such as pedalling and trills, recovering expressiveness overarching longer passages of music, as well as modeling the performance style of individual players. In this paper, we propose a novel approach for reconstructing human expressiveness with a multi-layer bi-directional Transformer encoder. Training a Transformer model for this task demands large amounts of accurately recorded and score-aligned performance data, which is not currently readily available. A recently released performance-to-score transcription system <cit.> and the transcribed expressive piano performance dataset ATEPP <cit.> allow us to use transcribed scores and performances to train our model. Using transcribed scores in the EPR task can be beneficial when the canonical score is not representative enough. For example, jazz performances rely heavily on improvisation, making it difficult to align canonical scores with performances. Even in classical music, ornaments such as trills may not be explicitly notated in canonical scores, which poses problems for the alignment process. Moreover, the reconstruction of human expressiveness from transcribed scores can support research in musical style transfer, particularly when people aim to change a performance by one pianist into the style of another. Considering this, we investigate the ability of our system to model the expressiveness for individual pianists and evaluate it through statistical analysis of the generated performances and a listening test comparing our model to state-of-the-art expressive performance rendering systems. The rest of this paper is organized as follows: Section <ref> describes the methodology detailing the dataset, the process of feature extraction and the model architecture. Section <ref> introduces the experiment setting-ups for training our model. Section <ref> presents the results of quantitative analysis and the listening test as well as discussions upon the results, and finally, Section <ref> concludes the paper. § METHODOLOGY §.§ Problem Definition Expressive performance rendering (EPR) is commonly defined as the task of generating human-like performances with music sheets as input. Most existing work <cit.> proposes systems using recorded performances and canonical scores to solve the problem. All of these systems require alignment between the canonical scores and performances, which is limited in accuracy given the available datasets and alignment algorithms. 
With the purpose of reconstructing human expressiveness given a composition, we reformulate the task by relaxing the requirement of using conventional music sheets as input, in order to take advantage of recent performance-to-score transcription algorithms <cit.> and large transcribed performance datasets <cit.>. We provide more details about the transcription algorithm and the dataset used in this work in Sections <ref> and <ref>. As shown in Fig. <ref>, the EPR task, in our definition, is to take the transcribed scores as input and reconstruct human expressiveness by generating expressive performances that are similar to the transcribed human performances. §.§ Dataset The recently released ATEPP dataset <cit.> provides high-quality transcribed piano performances by world-renowned pianists. According to a listening test conducted by Zhang et al., the transcribed performance MIDIs reliably retain the expressiveness of the performers. The dataset includes multiple performances of the same composition by different pianists, allowing comparisons of expressiveness among performers. However, since the ATEPP dataset has a highly skewed distribution of performers, rather than using the whole dataset, we use a subset <cit.> that balances the number of performances by six pianists: Alfred Brendel, Claudio Arrau, Daniel Barenboim, Friedrich Gulda, Sviatoslav Richter, and Wilhelm Kempff. Compositions in this subset are mainly by Beethoven, with only two pieces by Mozart. Each of the compositions corresponds to at least one performance by each pianist. Table <ref> presents descriptive statistics of the subset in comparison with the datasets used by other EPR systems. §.§ Data Processing §.§.§ Score Transcription Similarly to other EPR systems <cit.>, our method requires note-to-note alignment between the input score MIDI and the output performance MIDI. Despite the convincing alignment results of the state-of-the-art algorithm proposed by Nakamura et al. <cit.>, the algorithm has difficulty dealing with repeated sections as well as trills in classical piano music, which causes unexpected loss of information during the alignment process. Instead of using the original or manually edited scores of the compositions, we obtained the transcribed scores of the performances through a performance-to-score transcription algorithm proposed by Liu et al. <cit.>. The transcribed score MIDI data can be aligned with the performances at the note level without losing any structural generality in the music <cit.>. The transcription algorithm performs rhythm quantisation through a convolutional-recurrent neural network and a beat tracking algorithm to remove expressive variations in timing, velocity, and pedalling. While expressiveness regarding velocity and pedalling is certainly erased through this process, how much expressiveness remains in the timing is implicit and will be discussed further in Section <ref>. §.§.§ Data Augmentation The transcribed scores are first scaled to the same length as the corresponding performances. We then augment the data by changing the tempo of both the performances and the scores. For each pair of performance and score MIDIs, the onset time, offset time, and duration of each note are multiplied by a ratio r_i ∈ [0.75, 1.25]. In total, each pair is augmented using 10 different ratios evenly spaced over this interval.
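To make the augmentation step concrete, the sketch below applies the tempo scaling to a note-to-note aligned score/performance pair. The plain dictionary note representation, the inclusion of both interval endpoints in the ratio grid, and the toy input are our own assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def augment_pair(score_notes, perf_notes, n_ratios=10, lo=0.75, hi=1.25):
    """Tempo-scale an aligned (score, performance) pair by a grid of ratios.

    Each note is a dict with 'onset', 'offset', and 'duration' in seconds;
    the two lists are assumed to be aligned note-to-note.
    """
    def scale(notes, r):
        return [{**n, 'onset': n['onset'] * r,
                      'offset': n['offset'] * r,
                      'duration': n['duration'] * r} for n in notes]

    return [(scale(score_notes, r), scale(perf_notes, r))
            for r in np.linspace(lo, hi, n_ratios)]

# toy example with a single aligned note
score = [{'pitch': 60, 'onset': 0.00, 'offset': 0.50, 'duration': 0.50}]
perf  = [{'pitch': 60, 'onset': 0.02, 'offset': 0.55, 'duration': 0.53}]
pairs = augment_pair(score, perf)
print(len(pairs), pairs[0][0][0]['duration'])   # 10 augmented copies; first ratio is 0.75
```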
§.§.§ Feature Encoding Features related to performance expressiveness are extracted and tokenized to reduce the dimensionality of the input space. Following the OctupleMIDI tokenisation method proposed by Zeng et al. <cit.>, we encode the note-level features including pitch, velocity, duration, bar, and position. Table <ref> shows the vocabulary size of our tokens for each feature. When using OctupleMIDI, the onset time of a note N_i is represented jointly by its bar number B_i and position number P_i, where i = 1, 2, …, n and n denotes the length of the note sequence. Given that we use a piano music dataset, we consider only pitches with numbers ranging from 21 to 109. The duration of a note is set to be linearly proportional to the token value D_i. All of the MIDI files have a resolution of 384 ticks per beat, and we default each bar to 4 beats, resulting in 384 × 4 = 1536 different positions per bar. Based on the tokens, we calculate two other note-level performance features that are commonly used for capturing the expressiveness of piano performances <cit.>: * Inter-Onset Interval (IOI): the time interval between the onset time (OT) of note N_i and that of the next note N_i+1: IOI_i = OT_i+1 - OT_i for i = 1, 2, …, n-1, and IOI_n = 0, where OT_i = B_i × 1536 + P_i, i = 1, 2, …, n. * Duration Deviation (DD): the difference between the duration token values of a note in the performance MIDI and in the score MIDI, DD_i = Dp_i - Ds_i, i = 1, 2, …, n, where Dp is the duration obtained from the performance MIDI and Ds is that from the score MIDI. §.§ Generation with Transformer Encoder §.§.§ Input and Output Features Input and output features are carefully designed to preserve the score content while allowing changes in the performance control of each note. The input features include pitch, velocity, duration, bar, position, and inter-onset interval from the score MIDIs. As for the output, we infer the values of three features, namely velocity, DD, and IOI, in the performance MIDIs. Following Eq. <ref> and Eq. <ref>, we can calculate the predicted token values of duration, position, and bar for each note based on DD and IOI. Combined with the predicted token values for velocity, we can construct a performance MIDI file through detokenization. §.§.§ Model Architecture Inspired by the MidiBert model proposed by Chou et al. <cit.>, we design a multi-layer bi-directional Transformer encoder with 4 layers of multi-head self-attention, each with 4 heads and a hidden dimension of 128. The pianist's identity is represented using a one-hot embedding, which is concatenated to the last hidden state before the final prediction, as shown in Fig. <ref>. As velocity and timing in music are continuous variables, the interval between two token values carries information about how differently a note is played. Most Transformers trained for music generation <cit.> treat different token values as independent classes, which leaves this information implicit to the model. Our system instead uses the tokens without creating embeddings, and predicts the token values of the different features through regression. In addition, we add activation functions after the inference layer to clamp the predicted values, ensuring that they fall within the valid ranges of the different features. §.§.§ Loss Design The losses ℒ_v, ℒ_dd, and ℒ_ioi for the velocity, DD, and IOI features are calculated respectively, following the loss function defined in Eq.
<ref>, which measures the relative deviation of the predicted values y from the target values ŷ. Masks are created to exclude the loss calculation for padded tokens: ℒ_feature = ∑_i=1^n l(y_i)m_i, where m_i represents the loss mask for the i-th note and l(y_i) = |y_i-ŷ_i|/|ŷ_i| if ŷ_i ≠ 0, while l(y_i) = α|y_i-ŷ_i| if ŷ_i = 0. The parameter α regularizes the loss calculation when the target value is zero and is experimentally set to 0.001. The total loss is calculated as ℒ_total = w_vℒ_v + w_ddℒ_dd + w_ioiℒ_ioi, where the weights are empirically initialized and assigned to each feature loss. §.§ Evaluation The system is objectively evaluated through validation losses and the statistical distributions of expressive parameters in the generations, presented in Section <ref>. Additionally, we evaluate the perceived expressiveness of the generated performances through a subjective listening test. As the aim of the EPR task is to generate performances with human-like expressiveness <cit.>, we assume that the more similar a model's output is to a human performance, the more effectively expressive it is. We recruit participants who have experience in playing musical instruments and who are engaged with classical music, and ask them to rate the presented samples by evaluating how expressive, natural, and human-like they are. The detailed experiment design and conditions and the results of the listening test are presented in Section <ref>. § EXPERIMENTAL SETUP We implement our model in PyTorch. We use an 8:1:1 data split in terms of the number of pieces and performances, and we cut or pad the token sequences to sequences of 1000 notes before feeding them into our Transformer. The model is trained with a batch size of 16 sequences for at most 400 epochs, using the Adam optimizer with an initial learning rate of 1e-4 and a weight decay rate of 1e-7. We update the learning rate using the cosine annealing warm restart scheduler <cit.>, since it has been shown to result in faster convergence during training compared with other learning rate scheduling strategies. If the validation loss does not improve for 30 consecutive epochs, we stop the training process early. The training converges in 2 days on two RTX A5000 GPUs. The different vocabulary sizes of the expressive features shown in Table <ref> result in different degrees of modeling complexity. Consequently, with constant weights assigned to each feature loss, we observed an unbalanced decrease in the losses and overfitting across the different features during training. To balance training and reduce overfitting, we optimize the training process using the GradNorm algorithm proposed by Zhao et al. <cit.> to dynamically update the weights based on the gradients calculated at the end of each training epoch. § RESULTS §.§ Quantitative Evaluation Quantitative methods for evaluating expressive performance rendering systems are limited. One approach <cit.> is to calculate the loss for each performance feature. Unlike existing approaches <cit.>, where the features are not tokenised, our system computes the losses using the token values. Based on the feature encoding process and the loss design discussed in Section <ref>, we estimate the average prediction errors in MIDI-quantised velocity values and in seconds, shown in Table <ref>. Although the results are not directly comparable to existing works because of the differences in feature extraction and loss design, they indicate that the Transformer model can learn the patterns of expressive variations and reproduce them in the transcribed scores.
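A minimal PyTorch sketch of the masked relative loss described above is shown below. The tensor shapes, the dictionary layout of the predictions, and the constant weights are illustrative assumptions; in our setup the weights are additionally updated with GradNorm during training rather than kept fixed as they are here.

```python
import torch

def feature_loss(pred, target, mask, alpha=1e-3):
    """Masked relative loss: |y - y_hat| / |y_hat| where y_hat != 0,
    alpha * |y - y_hat| where y_hat == 0; padded notes are masked out."""
    err = (pred - target).abs()
    per_note = torch.where(
        target != 0,
        err / target.abs().clamp_min(1e-12),   # clamp only avoids 0/0 in the unused branch
        alpha * err,
    )
    return (per_note * mask).sum()

def total_loss(preds, targets, mask, weights):
    # preds / targets: dicts of (batch, seq) tensors for 'velocity', 'dd', 'ioi'
    return sum(weights[k] * feature_loss(preds[k], targets[k], mask)
               for k in ('velocity', 'dd', 'ioi'))

# toy usage
B, T = 2, 5
preds   = {k: torch.randn(B, T) for k in ('velocity', 'dd', 'ioi')}
targets = {k: torch.randn(B, T) for k in ('velocity', 'dd', 'ioi')}
mask = torch.ones(B, T)                 # 1 for real notes, 0 for padding
w = {'velocity': 1.0, 'dd': 1.0, 'ioi': 1.0}
print(total_loss(preds, targets, mask, w))
```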
However, the average errors at the note level in the generations are still noticeable to human ears <cit.> and can affect the perceived expressiveness of the generated music in comparison to human performances. Since the level of timing expressiveness left in the transcribed scores is implicit, as discussed in Section <ref>, we evaluate the ability of our system to reconstruct the expressiveness of individual pianists through the velocity distributions obtained from kernel density estimation <cit.>. As shown in Fig. <ref>, the velocity distributions of the individual pianists are distinguishable, indicating different performing styles. However, performance recording environments may have an impact on the transcribed velocity values <cit.> and contribute to the differences between the distributions. The distributions of the generations based on transcribed scores (G-TS) and those of the human performances (P) have a high degree of overlap, providing evidence that individual expressiveness is learned during training. §.§ Subjective Evaluation A listening test was performed to evaluate the perceived expressiveness of our model's output. We recruited, via email, 19 people who had some level of music training. All participants had learned a musical instrument, and over half of them had been engaged with classical music for over 5 years. The participants completed the study anonymously. The stimuli consisted of four 20-second classical piano excerpts detailed in Table <ref>. For each excerpt, the human performance (P) was provided as a reference to be compared with four MIDI renderings: the generation based on the transcribed score (G-TS), the generation by the state-of-the-art VirtuosoNet <cit.> using the canonical score (V), a direct rendering of the transcribed score (TS), and finally the canonical score (S) without expression. The human performances were transcribed piano performance MIDIs from the ATEPP dataset <cit.> and were included as one of the stimuli as well. All the MIDIs were synthesised into audio recordings through GarageBand to ensure consistency in the listening experience. For each piano excerpt, six recordings, the reference plus five stimuli, were presented in the test [Listening samples are provided at <https://drive.google.com/drive/folders/1nfaZ23vr8xZHlyhTAAppK2hl-aHQPigP?usp=sharing>]. Participants were asked to listen to the five stimuli and rate their degree of expressiveness on a 100-point scale by comparing each of them with the reference human performance. During the test, we explicitly asked participants to rate the expressive differences among the stimuli, focusing on performance features such as dynamics and tempo changes rather than on the compositional content. We encouraged them to use the full scale, rating the best sample higher than 80 and the worst lower than 20. We adopted the MUSHRA framework <cit.> to conduct the test, using the Go Listen platform <cit.>. In total, 380 ratings from the 19 listeners were collected. We filtered out raters who could not identify the difference in expressiveness between the anchor (S) and the reference (P). Fig. <ref> shows the mean opinion scores (MOS) and the results of Wilcoxon signed-rank tests for the differences between: (a) TS versus S, (b) G-TS versus V, (c) P versus G-TS, (d) P versus V, (e) G-TS versus TS. According to the results, human performances (P) are significantly different from the generations of our model (G-TS) and of VirtuosoNet (V) in most situations.
The outputs of our model (G-TS) are overall significantly preferred over the performances produced by VirtuosoNet (V) (0.01 < p < 0.05), receiving slightly lower (not significant) ratings for pieces A and B but higher ratings (significant for C, not significant for D) for the compositions that never appear in the training dataset. Compared with the canonical scores (S), the transcribed scores (TS) receive significantly higher ratings from listeners. The ratings of the generations by our system (G-TS) are significantly higher than those of the direct audio rendering of the transcribed scores (TS) for most pieces, with D being the exception. These results suggest that our system achieves state-of-the-art performance and even outperforms VirtuosoNet <cit.> in some cases, although neither of the systems can consistently generate the same level of expressiveness as human performances. On the other hand, while the transcribed scores (TS) may retain more expressiveness than the canonical scores (S), the generations from the transcribed scores (G-TS) are perceptually more expressive than the transcribed scores (TS) in most cases, indicating success in reconstructing human expressiveness. This is also supported by the overall difference (0.01 < p < 0.05) in MOS between our generations (G-TS) and the generations from VirtuosoNet (V). §.§ Case Study: Comparison in Dynamics and Duration Building on the promising results of our system in the listening test for Piece C, we conducted a more detailed analysis to compare the expressive variations in dynamics and duration among human performances, system-generated performances, and scores. Specifically, in Fig. <ref>, we present the fluctuations in velocity and duration across the note sequences. Compared with the VirtuosoNet generation (V), the generation of our system (G-TS) captures both short-term and long-term velocity variations better. Even when the unseen canonical score is used as input, the generation of our system (G-S) outperforms the other model in terms of reconstructing velocity variations. Meanwhile, the strong similarity between the duration changes in the human performance (P) and the transcribed score (TS) suggests that the transcription algorithm <cit.> alters the timing information of the notes cautiously, with only limited modification of the durations. Therefore, the reconstruction of expressive variations in timing by our system could be restricted. This limitation is also demonstrated by the duration changes of our system's generation based on the canonical score (G-S). § CONCLUSION This paper presents a novel method for reconstructing human expressiveness in classical piano performances. Our expressive performance rendering system consists of a Transformer encoder trained on transcribed scores and performances. The quantitative evaluation and the listening test show that the proposed method succeeds in generating human-like expressive variations, especially for dynamics. Moreover, our method can be used for modeling the differences in expressiveness among individual pianists. In future work, we will train our system with a mixture of canonical scores and transcribed scores to create a more robust system. We will further improve the capacity of our system to model individual performance styles, possibly through contrastive learning. In addition, we will consider a separate system to model pedalling techniques in performances, or try to integrate the pedalling information into the current feature encoding.
http://arxiv.org/abs/2306.09219v1
20230615155519
Modeling the Mg I from the NUV to MIR. II. Testing stellar models
[ "J. I. Peralta", "M. C. Vieytes", "A. M. P. Mendez", "D. M. Mitnik" ]
astro-ph.SR
[ "astro-ph.SR" ]
II. Testing stellar models Instituto de Astronomía y Física del Espacio, CONICET–Universidad de Buenos Aires, Argentina [email protected]; [email protected] Departamento de Ciencia y Tecnología, UNTREF, Argentina Departamento de Física, Universidad de Buenos Aires, Argentina Reliable atomic data are mandatory ingredients to obtain a realistic semiempirical model of any stellar atmosphere. Due to their importance, we further improved our recently published atomic model. We tested the new atomic model using atmospheric models of stars of different spectral types: the Sun (dG2), HD 22049 (dK2, Epsilon Eridani), GJ 832 (dM2), and GJ 581 (dM3). Significant improvements have been included in the atomic model, mainly to the electron impact excitation (Υ_ij) values. We used new Breit-Pauli distorted-wave (DW) multiconfiguration calculations, which proved to be relevant for many transitions in the mid-infrared (MIR) range. The new atomic model of Mg I includes the following: i) recomputed Υ_ij data obtained through the DW method, including the superlevels. ii) For the nonlocal thermodynamic equilibrium (NLTE) population calculations, 5676 theoretical transitions were added (3001 term-to-term). iii) All of these improvements were studied in the Sun and the stars listed above. Comparisons of the distribution of magnesium among the first ionization states and the formation of molecules, as well as of the population of the different energy levels and atmospheric heights, were carried out. Several lines, representative of the spectral ranges, were selected to analyze the changes that were produced. In particular, we exemplify these results with the problematic line 2853.0 Å, a transition between the third level and the ground state. The magnesium distribution between ionization states for stars with different effective temperatures was compared. For the Sun and Epsilon Eridani, Mg II predominates with more than 95% of the population, while for GJ 832 and GJ 581, Mg I represents more than 72% of the population. Moreover, in the latter stars, the amount of magnesium forming molecules in their atmospheres is at least two orders of magnitude higher. Regarding the NLTE populations, a noticeably lower variability in the departure coefficients was found, indicating a better population coupling for the new model. Comparing the synthetic spectra calculated with the older and the new atomic model, we find minimal differences in the visible range but stronger ones in the infrared (IR) for all of the stars. This aspect should be considered when using lines from this region as indicators. Nevertheless, some changes with spectral type were found, also emphasizing the need to test atomic models under different atmospheric conditions. The most noticeable changes occurred in the far-ultraviolet (FUV) and near-ultraviolet (NUV), where a higher flux is obtained with the new atomic model regardless of the spectral type. The new model did not prevent the formation of the core emission in the synthetic NUV line 2853.0 Å. However, by including other observations, we could note that the emission indeed exists, although with a much lower intensity. Further tests have shown that, to reduce the emission, the population of its upper level (3s3p ^1P) should be reduced by a factor of about 100. Modeling the Mg I from the NUV to MIR J. I. Peralta1,2 M. C. Vieytes 1,2 A. M. P. Mendez1 D. M.
Mitnik 1,3 ========================================================================================== § INTRODUCTION In recent decades, the discovery of many exoplanets orbiting in the habitable zone of late-type stars has shown the need for reliable calculations of the spectral energy distribution (SED) of those stars, from the IR to X-rays. Obtaining these values is a task suitable for well-established stellar atmospheric models. Stellar models are essential for obtaining information inaccessible to direct observation, such as the extreme-ultraviolet (EUV) radiation or the Lyα emission. They also allow for the indirect estimation of the characteristics of a star, including luminosity, magnetic activity, and metallicity. Using a synthetic spectrum, it is possible to calculate the amount of nonthermal energy that must be delivered to the atmosphere to reproduce the observed spectrum, thus constraining the possible physical processes involved. When improving an atomic model, testing it in the solar atmosphere is the best starting point. The possibility of observing with spatial resolution allows us to improve the accuracy of the model and even the capacity to model the different structures present in the solar atmosphere separately <cit.>. In <cit.> (Paper I, hereafter), we showed the importance of having reliable atomic data, which allows the atmospheric model to achieve a realistic behavior when calculating the atomic populations in situations out of local thermodynamic equilibrium (LTE), and thus to produce spectral lines in agreement with the observations. The calculation of an atmospheric model depends on relevant atomic parameters at the quantum scale to describe the physical processes that take place, and on the thermodynamic parameters of the plasma that comprises it. Since the same atomic model is used under different conditions, it is essential to test its validity in plasmas with different thermodynamic parameters, as is the case for stellar atmospheres of different spectral types. Several authors performed this type of study with neutral magnesium, for different purposes and with different methods, as we describe in Paper I (e.g., <cit.>, etc.). More recently, and motivated by the controversy over the solar photospheric oxygen abundance, <cit.> improved the calculation by building a more reliable oxygen atomic model. They followed an atomic-parameter approach similar to the one included in our work in Paper I for Mg I; the authors calculated the excitation rates by electron collisions using the Breit-Pauli distorted wave (DW) method through the autostructure code <cit.>. However, none of the aforementioned works reproduce spectral lines of Mg I in the UV, mid-infrared (MIR), and far-infrared (FIR) ranges, except for particular lines at micrometer wavelengths. dM stars are also not included in their samples. Since the preferred targets for planet-hunting are dM stars, and this spectral type is bright in the IR range, it is important to have a reliable estimate of the radiation in this range to characterize these objects. The successful launch and deployment of the James Webb Space Telescope (JWST), which operates in the visible and MIR, emphasizes this need. On the other hand, in addition to the IR region, photochemistry and mass loss in planetary atmospheres are driven by UV radiation. For this reason, its correct description is essential for characterizing an exoplanetary atmosphere <cit.>.
One of the most prominent features of Mg I in the near-ultraviolet (NUV) range is the 285.3 nm (3s^2 ^1S_0 - 3p ^1P_1) line profile, which is formed in the chromosphere of late-type stars. <cit.>, and then <cit.>, built atmospheric models of the dM star GJ 832 from the photosphere to the corona using solar-stellar radiation physical modeling (SSRPM) <cit.>. SSRPM is a variant of solar radiation physical modeling (SRPM) version 2 (presented in <cit.>, and detailed in Paper I), which extends atmospheric modeling to stars of different spectral types. The atmospheric models produced in both papers show a good overall fit of the observed spectrum but an incorrect fit of the 285.3 nm line. According to the authors, this behavior occurs due to uncertainties in the atomic data. For this work, we have extended the study carried out in Paper I in two ways: on the one hand, we generated an improved version of the atomic model of Mg I presented in that paper; on the other hand, we tested this new atomic model on the atmospheric models of the Sun and of three stars of spectral types cooler than the Sun's. The new atomic model shares the same features as the "1401c" ("c," hereafter) of Paper I, but we improved the radiative and electron impact excitation data of Mg I for the higher levels. Both models have identical data for the effective collision strengths (Υ_ij) up to and including level 54 (3s7i ^1I, 59 430.52 cm^-1). That is, they include the data computed by <cit.> via the convergent close-coupling (CCC) method for the lowest 25 levels and the DW calculations of Paper I for transitions between levels 26 (3s5g ^1G, 57 262.76 cm^-1) and 54. In our new atomic model, however, the difference lies in the data for transitions from (and to) levels with energies higher than or equal to that of superlevel 55 (59 649.15 cm^-1). For these levels, we included electron impact excitation values obtained using the DW approximation implemented in the autostructure code. In this way, the 85-level model (with superlevels from level 55 onward) no longer uses the traditional semiempirical methods of <cit.> and <cit.> for the calculation of Υ_ij (employed in the nonlocal thermodynamic equilibrium, NLTE, population computations) for transitions that are extremely important for the formation of lines up to the MIR. Regarding the atmospheric models selected, we used the solar model presented in Paper I by <cit.> and three other models: HD 22049 dK2V ("Epsilon Eridani," hereafter) by <cit.>, GJ 832 dM2 (T_eff=3590 K and R/R_⊙=0.5), and GJ 581 dM3 (T_eff=3498 K and R/R_⊙=0.3) by <cit.>. This paper is structured as follows: The code used in our NLTE and spectral calculations and the initial atomic model are detailed in Section <ref>. Section <ref> shows the different atomic models built. A general description of the atmospheric models for the Sun and the three abovementioned stars is given in Section <ref>. The observations used for comparing our synthetic spectra are described in Section <ref>. Our results and discussion are presented in Section <ref>, while our final remarks and conclusions are provided in Section <ref>. § NLTE POPULATION AND SPECTRUM CALCULATIONS The general procedure for calculating the NLTE populations and the emergent spectrum is similar to the one described in Paper I. In this case, the Solar Stellar Radiation Physical Modeling (SSRPM) version 2 <cit.> library was used. SSRPM is a variant of the SRPM used in Paper I <cit.>, which extends the atmospheric modeling to stars cooler than the Sun.
As well as the SRPM code, this library assumes hydrostatic equilibrium and solves the statistical equilibrium and radiative transport equations in a self-consistent way for an atmosphere with plane-parallel or spherical symmetry. For the NLTE atomic populations calculation, in an optically thick atmosphere and including partial redistribution (PRD), the code contains 52 neutral and low ionization state atomic species (generally up to Z^2+), H, H^-, and H_2. In addition, it allows the calculation of 198 highly ionized species using the optically thin atmosphere approximation. The SSRPM adds the calculation of molecule formation in LTE, which includes molecular sequestration of elements. For spectral lines, 435 986 transitions are included, along with more than 2 000 000 molecular lines (with data from <cit.>) of the 20 most abundant and important diatomic molecules for dM stars (e.g., TiO). In other words, it allows the construction of atmospheric models not only for the different observed characteristics on the solar disk but also for stars of different spectral types. In <cit.>, the model for the dM star GJ 832 was presented, while the model for Epsilon Eridani was built in <cit.>. In <cit.>, the atmospheric model of GJ 832 is improved, and a new model for the dM star GJ 581 is presented. The model of Epsilon Eridani and the latest versions of GJ 832 and GJ 581 will be used in this work. § CONVERGENT CLOSE COUPLING & DISTORTED WAVE FOR ECS. MODEL D The new model presented in this work (designated as “d,” in continuity with the models from Paper I) shares many of the characteristics of model “c” from the previously mentioned work (summarized in Table <ref>). The main differences between model d and c lie in the radiative and collisional data used to calculate NLTE populations. Calculation of radiative and collisional rates. The radiative and collisional rates included in atomic models c and d were computed using the autostructure code <cit.>. The calculation considered a combination of 85 configurations in the LS coupled scheme, which included the 3s^2 ground state, the single excited 3s nl states (n ≤ 20), and the double excited states 3p nl (n ≤ 4, l ≤ n-1) and 3d^2. In addition, all energy values for the terms were corrected with experimental measurements from the NIST database. The resulting atomic structure consists of 189 terms and 339 levels. We implemented the distorted-wave Breit-Pauli perturbative method (DW), which is included in autostructure to calculate electron impact excitation. From the collision strengths (CS), we derived the effective collision strengths (Υ_ij) between levels i and j, using the CS parametrization from <cit.>. The atomic model designed for the radiative and collisional calculation (189 terms) differs from the models used in this work (85 terms). The energy structure of the 85-level[We follow the common nomenclature “level” to refer to the ^2S+1L term, and “sublevel” when referring to a fine-structure ^2S+1L_J level.] models, presented in Paper I and this work, was based on experimentally observed levels. In contrast, all theoretical levels are considered in our calculation. In addition, the 85-level models use superlevels, which are not formally included in the electronic structure theory. To resolve this, we modified the calculated theoretical structure (terms, levels, and radiative and collisional transitions) to match the 85-level models. We discarded nonobserved levels and combined terms in the proposed superlevels. 
We calculated the radiative and collisional transitions that involve superlevels using the superlevel formalism proposed by <cit.>. For the selection of spectral lines in NIST (described in Paper I), we used the condition for intense lines, log (gf) > -1, which requires an atomic model with 54 detailed levels (extracted from the NIST[<https://www.nist.gov/pml/atomic-spectra-database>] 5.7.1 database <cit.>) to represent them. This selection resulted in a model representing 285 spectral lines of Mg I (125 if only transitions between terms are considered). In the new model d, in addition to the spectral lines mentioned above, 5674 theoretical radiative transitions between bound states were included (3001 considering only transitions between terms). Figure <ref> shows a histogram with the number of term-to-term radiative transitions involving each level. It can be seen that most of the spectral lines belonging to the FUV, NUV, and visible ranges are formed at relatively low levels. On the other hand, the lines belonging to the NIR and MIR involve levels scattered throughout the atomic model. Figure <ref> also shows the importance of including, in addition, the higher levels necessary for the collisional and radiative population exchange between other energy levels and with the following ionization states. Regarding the collisional data, the main difference between models d and c is that in the new model the semiempirical methods of <cit.> and <cit.> (SEA&VRM) are no longer employed to represent excitation by electron impact (previously included in model c for the transitions that involve levels 55 to 85). We replaced them with the aforementioned DW calculations. We only consider DW values for transitions from or to terms equal to or higher than the term 3s 6p ^1P, indexed as level 26. Term energy levels, effective collision strengths, and radiative data are available at the CDS.[Only available in electronic form at the CDS via anonymous ftp to https://cdsarc.cds.unistra.fr (130.79.128.5) or via <https://cdsarc.cds.unistra.fr/cgi-bin/qcat?J/A+A/> ] § ATMOSPHERIC MODELS: THE SUN AND THREE COOLER STARS Unlike atmospheric model grids, which are constrained by theoretical predictions, the semiempirical models built with the SSRPM have as input a large number of observations in different spectral ranges. The degrees of freedom of the model are given by the need for flux-calibrated observations covering the largest possible range of wavelengths. A good semiempirical model should be able to fit all of the available observations in order to be reliable. In this way, it is possible to predict regions of the spectrum that cannot be obtained by direct observation. The model is based on a set of element abundances that make up the stellar atmosphere and on a grid of heights on which the calculations are performed. Each height is characterized by its temperature (T), microturbulent velocity (v_t), and the numerical densities of protons (n_p), electrons (n_e), total hydrogen atoms, and neutral hydrogen atoms (n_a). In other words, our input parameters are given only by the metallicity, the surface gravity (usually obtained from the literature), and the aforementioned densities. Assuming an atmospheric model to be correct, a reliable atomic model should correctly reproduce the observed spectral lines. In addition, it should also be able to correctly reproduce the observed line profiles when switching to another atmospheric model with different plasma parameters.
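For readers less familiar with the collisional quantities involved, the sketch below illustrates how an effective collision strength Υ_ij(T) is obtained from a tabulated collision strength Ω_ij(E) through a Maxwellian average over the scattered-electron energy, Υ_ij(T) = ∫_0^∞ Ω_ij(E) exp(-E/kT) d(E/kT). The energy grid, the flat toy Ω, and the temperature are invented for illustration only; the actual model-d data come from the autostructure DW calculations and the CS parametrization cited above.

```python
import numpy as np

K_B_RYD = 6.3336e-6                      # Boltzmann constant in Rydberg / K

def trapezoid(y, x):
    """Simple trapezoidal integration (avoids version-dependent numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def effective_collision_strength(E_ryd, omega, T):
    """Maxwellian average of a tabulated collision strength Omega_ij(E).

    E_ryd : scattered-electron energies in Rydberg
    omega : dimensionless collision strengths at those energies
    T     : electron temperature in K
    """
    x = E_ryd / (K_B_RYD * T)            # E / kT
    return trapezoid(omega * np.exp(-x), x)

# toy example: a constant collision strength averages to (almost) itself
T = 5000.0
E = np.linspace(0.0, 50.0 * K_B_RYD * T, 2000)     # 0 to 50 kT
omega = np.full_like(E, 2.0)
print(effective_collision_strength(E, omega, T))   # ~2.0
```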
To test the reliability of the new atomic model of Mg I (d), we used atmospheric models of stars of different spectral types, which were previously calculated with the SSRPM system. These models are briefly described below and illustrated, from the photosphere to the transition region, in the left panel of Fig. <ref> as a function of pressure. The stellar parameters of the present stars are summarized in Table <ref>. Sun. This atmospheric model is the so-called 1401 in <cit.>. It represents the most abundant structure on the solar disk in a quiescent state and was used in Paper I. For the current work, we analyze the solar spectrum in the same way as for the stars, that is to say, integrated over the disk. GJ 832 and GJ 581. GJ 832 was introduced as 3346 in <cit.>. It was later improved by <cit.> (labeled as 3385) to correct an overestimation of the flux in the visible range due to an incorrect calibration of the observations used to generate the model. The atmospheric model of GJ 581 is also published in the mentioned paper. In this type of cool star, the extremely high density of molecular lines formed in the photosphere makes it difficult to distinguish the continuum level. The intensity of the pseudo-continuum is given at the temperature where τ=2/3. This value corresponds to the stellar emission source because, since stars cannot be observed with spatial resolution, we observe the emerging flux for an average angle of μ=cosθ = 2/3. The authors note that, although the shape and intensity of the flux are correctly reproduced by the models, there is an overestimation of the flux in the visible range, probably due to the absence of a source of molecular opacity between 4000 and 4500 Å. Epsilon Eridani (HD 22049). <cit.> published a model, still under development, for this star. The atmospheric model includes the photosphere, chromosphere, transition region, and corona. In the visible range, the model correctly reproduces the observed continuum and the important lines that characterize the chromospheric magnetic activity. In the FUV and EUV regions, the model reproduces lines and the continuum that cannot be easily observed due to interstellar absorption. However, the authors mention that the thermal structure of the corona still needs to be modified to include observations in the X-ray range. In this work, we use this model (denoted as 2227) from the photosphere to the beginning of the transition region, together with the previously mentioned models. As observed in the left panel of Fig. <ref>, the solar model 1401 presents a higher temperature than the other stars in the photosphere (from right to left). This situation is reversed with respect to model 2227 of the star Epsilon Eridani around 5 × 10^2 erg/cm^2. The temperature minimum in the latter model occurs at higher pressure, and therefore the first chromospheric rise occurs earlier. The temperature of its chromospheric plateau and the rise to the transition region also occur at higher pressure and temperature, indicating that this star is more active than the Sun. In the case of the 3385 and 5812 models of the stars GJ 832 and GJ 581, respectively, it can be seen that they are much cooler and are similar to each other up to their temperature minimum. From this point on, as the pressure decreases, 3385 has a steeper first chromospheric rise, while 5812 has a more gradual temperature rise and, consequently, a relatively smaller chromospheric plateau until it reaches the temperature rise that marks the beginning of the transition region.
These last two stars are optically not very active compared to Epsilon Eridani. In the right panel of Fig. <ref>, the electron density (n_e) is plotted as a function of the pressure for each star. This figure facilitates the analysis of the importance of the electron collisional processes in the region of interest. Moving through the figure from right to left, it can be seen in all cases that the electron density decreases as we move away from the photosphere up to regions close to the temperature minimum. This behavior is expected, since the average density of most of the components in the atmosphere decreases together. This decrease is reversed when the chromospheric rise is reached, where a noticeable increase in n_e starts as a product of the ionization of metals. Concerning the magnesium in each atmosphere, Fig. <ref> plots the atmospheric distribution of the Mg I, Mg II, and Mg III densities relative to the total Mg density. As all figures are on the same scale, the differences can be clearly identified. In Epsilon Eridani, the distribution is similar to the solar distribution, with Mg II predominating throughout the atmosphere. Between 50 km and the temperature minimum (T_min), the populations of Mg I and Mg II are almost of the same order. For altitudes above T_min, the Mg I population decreases due to the increase in the ionization rate in this region, an effect that is also visible through the correlated increase in Mg II. The predominance of Mg II is the main reason we use superlevels close to the continuum, which allow population exchange between Mg I and Mg II. In the case of the dM stars, the distribution is very different, with neutral magnesium predominating as the main component up to a few kilometers above T_min. The lower temperatures do not favor the ionization process, and the populations of Mg II and Mg III are drastically reduced by recombination. Quantitative results of this distribution are presented in Section <ref>. § OBSERVATIONS In addition to comparing the computed spectra between models, we included astronomical observations whenever possible. For the Sun, we used the observations described in Paper I: for the NUV range, the observations of <cit.>; for the visible and NIR, the observations from the Fourier-Transform-Spectra Solar Atlas (KP-FTS hereafter) obtained at Kitt Peak Observatory <cit.>; and for the MIR, the transmittance data from the ACE-FTS Solar Atlas <cit.>, recorded by the spectrometer aboard the spacecraft SCISAT-1. The observational search for the stars' spectra was extensive, with special attention being paid to the UV and IR ranges (the regions where we found the most prominent changes between models). However, observations with sufficient resolution and a high signal-to-noise ratio were difficult to find and even nonexistent in some cases. We describe below the observational data used in this work for the stars Epsilon Eridani, GJ 832, and GJ 581. FUV & NUV. In this range, we included the spectra obtained by the MUSCLES Treasury Survey program[<https://archive.stsci.edu/prepds/muscles/>] <cit.> through the Hubble Space Telescope (HST), mainly with the Space Telescope Imaging Spectrograph (STIS). We used the medium-resolution E230M grating (STIS-E230M hereafter) and the high-resolution E230H grating (STIS-E230H hereafter) for Epsilon Eridani. In the FUV, we did not find observations of GJ 832 and GJ 581 that resolved such narrow lines (∼0.5 Å), and we could only add observations of Epsilon Eridani (STIS-E230M) in one of the examples.
In the NUV, for the dM stars GJ 832 and GJ 581, we employed the low-resolution G230L grating (STIS-G230L hereafter). These data were used to construct the respective stellar models in <cit.> and <cit.>. In addition, for the 2853.0 Å line of the dM stars, we also used HST observations obtained with the Cosmic Origins Spectrograph using the G230L grating (COS-G230L hereafter). These additional observations were also obtained by the MUSCLES program on the same dates as the previous ones, so they are considered relevant for comparison. VISIBLE. The atmospheric models used in this work were constructed with the visible-range continuum level provided by observations at the Complejo Astronómico El Leoncito (CASLEO) Observatory. However, they are not included in the figures, since they lack sufficient resolution to reproduce the selected lines. Therefore, for Epsilon Eridani, we selected the most recent and highest resolution observation (0102.D-0185(A)) taken with the ESPRESSO spectrograph, part of the Very Large Telescope (VLT) of the European Southern Observatory (ESO). For GJ 832 and GJ 581, we used the latest observations (0104.C-0863(A) and 183.C-0437(A), respectively) made with the high-resolution HARPS spectrograph, installed on ESO's 3.6 m telescope. NIR & MIR. In this range, for both Epsilon Eridani and GJ 581, we used the most recent and highest signal-to-noise ratio observations (090.D-0039(A) and 077.D-0066(A), respectively) from the high-resolution UVES spectrograph, located at the Nasmyth B focus of UT2 of the VLT, at ESO. For GJ 832 and GJ 581, we used the latest data (089.C-0440(A) and 094.A-9029(I), respectively) from the FEROS spectrograph, installed on the 2.2 m MPG/ESO telescope at ESO's La Silla Observatory. Except for the Sun, we could not find observational data for the stars with sufficient spectral resolution in the MIR. Air-to-vacuum wavelength conversions and Doppler shifts were performed. For the latter, the radial velocity of the object was used according to the SIMBAD astronomical database <cit.>. Like the SRPM, the SSRPM produces spectra calculated with a spectral resolution of R=10^6. For a proper comparison with the observations, the calculated spectra were convolved with a Gaussian function to reduce their resolution and match those of the corresponding instruments. In addition, for the spectra observed at the 2853.0 Å line, the flux received on Earth was increased by 30% to account for the absorption by the interstellar medium <cit.>. It was then converted to the flux at the surface of the star, F^⋆, using the relation F_⊕ = π (R^⋆/d)^2 F^⋆, where F_⊕ is the flux received at Earth, with the distances and radii presented in Table <ref>. Regarding the flux in the other observations, it should be noted that the continuum of the selected observations was normalized to that of the CASLEO observations (and, therefore, to that of the calculated spectra). For this reason, the spectra are not represented on an absolute scale. Instead, they are matched to their continua in a wider range and normalized to the local maximum. Finally, due to the lack of observations, the lines calculated in the MIR range of the stars were convolved to the ACE-FTS resolution for comparison purposes only. Table <ref> contains details of the observations used and other information of interest. § RESULTS AND DISCUSSION We expand the study carried out in Paper I for the Sun. In particular, we analyze the effect produced by the change in the atomic parameters of Mg I in the atmospheres of stars of different spectral types.
It should be noted that when performing the NLTE calculation with the d model, the atomic populations vary as a result of all the changes introduced to the base model (described in Section <ref>). Obtaining a synthetic spectrum that agrees with the observations is important. However, even if there are no appreciable variations in the spectral lines, we can analyze in detail the changes in the energy-level populations by inspecting the atmospheric models. Thus, it is possible to perform a comprehensive analysis that can be generalized to other atmospheric conditions, as in the case of stars of different spectral types, metallicities, levels of activity, etc. To this end, we compare the new model d against the base model for each star. In continuity with Paper I, we first present the results produced by the new model d on the solar atmospheric model. As explained in the previous Section, the semiempirical formulas of SEA&VRM are completely discontinued in model d for the calculation of Υ_ij. We replace these values with DW calculations for energy levels from 26 (3s5g ^1G, 57 262.76 cm^-1) up to, and including, level 85 (3p^2 ^1S, 68 275 cm^-1). The main characteristics of the model are summarized in Table <ref>. §.§ Model d versus c, for the solar spectrum In this section, we present the differences found for Mg I when comparing the results obtained with the present model d and model c from Paper I. In this way, we give continuity to the study previously carried out, now with an improved calculated model built on more reliable collisional data. Figure <ref> illustrates the population density of Mg I for model d as a function of height (up to just below the transition region) for certain levels of interest. In particular, we show the lower and upper levels (and those in between) of the transitions that correspond to the lines we analyze in the following sections. This figure shows the different orders of magnitude of the energy-level populations and the density variation as we move to different regions of the (in this case, solar) atmosphere. It can be seen that the lower-lying levels present the highest population density (e.g., in model d: 91.54 % is in level 1, 6.86 % in level 2, and 0.18 % in level 3), and that the densities are highest at low altitudes of the atmosphere (≲ 100 km). One should bear in mind that the effect of modifying certain collisional parameters on a given spectral line will depend on the population variation in the levels involved, as well as on the characteristics of the atmosphere in the region where the line is formed. Thus, for the same population change in the corresponding levels, the formation of certain lines will be more affected than others. Moreover, some lines can even be more strongly affected than the same lines in other stars. Examples of these cases will be shown in Section <ref>. Figure <ref> shows four lines as a representation of the different regions of the solar spectrum, calculated with the base model, model c (from Paper I), and the present model d. We selected these lines as they were previously examined in Paper I. It can be seen that models c and d produce identical lines in the NUV (Fig. <ref>a) and VIS (Fig. <ref>b) regions. In contrast, in the NIR (Fig. <ref>c) and MIR (Fig. <ref>d) regions, small absorption differences can be distinguished. These differences are present, with varying amplitudes, throughout the NIR and MIR.
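As context for how the effective collision strengths Υ_ij discussed above enter the statistical-equilibrium calculation, the following sketch applies the standard Maxwellian-averaged relation between Υ_ij and the electron-impact excitation and de-excitation rate coefficients. The numerical inputs are placeholders chosen only for illustration (the statistical weights and transition energy roughly correspond to the Mg I resonance transition analyzed later, while the Υ value is arbitrary); none of them are taken from the atomic model files of this work.

import numpy as np

K_B_EV = 8.617333e-5  # Boltzmann constant [eV/K]

def collision_rates(upsilon, g_lower, g_upper, delta_e_ev, temperature):
    # De-excitation rate coefficient [cm^3 s^-1] from the effective collision
    # strength, q_ul = 8.629e-6 * Upsilon / (g_upper * sqrt(T)), and the
    # excitation rate coefficient obtained from detailed balance.
    q_ul = 8.629e-6 * upsilon / (g_upper * np.sqrt(temperature))
    q_lu = q_ul * (g_upper / g_lower) * np.exp(-delta_e_ev / (K_B_EV * temperature))
    return q_lu, q_ul

# Placeholder example: Delta E = 4.35 eV, g_lower = 1, g_upper = 3, Upsilon = 5, T = 6000 K
q_up, q_down = collision_rates(5.0, 1, 3, 4.35, 6000.0)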
We recall that the thermal structure of the solar atmospheric model, which is employed for the 1401c and 1401d calculations, was built assuming the atomic data of the base model (1401). This model features outdated atomic data (as detailed in Paper I) and only includes lines up to 17 000 Å. Nevertheless, the atmospheric models presented in this paper correctly represent the continuum and the most important lines of the stars in question. Hence, a follow-up work should include a new atmospheric thermal structure built with the present atomic model to correctly fit the observations (up to 71 500 Å in our case). §.§ The effect of model d on the atomic populations It is important to keep in mind that the stellar atmosphere is a coupled system of macroscopic-scale effects (e.g., through the radiation field originating at different heights) and atomic-scale effects (e.g., through photon absorption and scattering processes that affect the material properties). In NLTE conditions, these effects are described by the system of equations consisting of the radiation transport, statistical equilibrium, and hydrostatic equilibrium equations. Hence, causally isolating the effect of modifying an atomic parameter on a spectral line is, in most cases, highly complex. However, a detailed analysis can be performed by studying the populations of the energy levels that form the line and the atmospheric conditions in the formation region (as explained in Paper I). To obtain a more general notion of how an atomic model shapes spectral line formation in a certain star, we can compare how the populations are modified with respect to a known model. Without looking at spectral lines, it is possible to obtain a general and complete idea of the differences between models by studying the distributions of the element of interest across its ionization states, energy levels, or atmospheric heights, as required. In this Section, we present the changes produced in the atomic populations of magnesium with respect to the base model due to the improvements incorporated in the new model d. The indices used for each atmospheric model are detailed at the bottom of Table <ref>. Magnesium distribution through its ionization states. For a general understanding of the behavior of the element in each star, it is essential to know how the element is distributed among its main ionization states. Table <ref> shows the distribution of magnesium in its most abundant states (Mg I, Mg II, and Mg III) and the fraction that forms molecular compounds, for the base and d models. It is possible to verify quantitatively, and in agreement with Fig. <ref>, the predominance of Mg II (greater than 95 %) in the atmospheres of the Sun (prefix 1401) and Epsilon Eridani (prefix 2227), and the majority of Mg I (greater than 72 %) in the cooler atmospheres of GJ 832 (prefix 3385) and GJ 581 (prefix 5812). The amount of magnesium that forms molecules in each case is also noteworthy. In the Sun and Epsilon Eridani, there is as much as four orders of magnitude less molecular Mg than Mg I, while in the models for GJ 832 and GJ 581, the amount of molecular Mg is less than two orders of magnitude smaller than that of Mg I. Compared with the hotter stars, the presence of molecular Mg is at least three orders of magnitude larger. These values become even more relevant when comparing the formation of the same line in different stars. Moreover, in many cases, lines formed in a dM star may end up blended with molecular bands.
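The contrast between the Mg II-dominated atmospheres of the Sun and Epsilon Eridani and the Mg I-dominated dM atmospheres discussed above can be made plausible with a rough LTE estimate from the Saha equation. This is only a back-of-the-envelope check: the calculations in this work are NLTE, and the temperatures and electron densities used below are generic placeholders rather than values read from the 1401, 2227, 3385, or 5812 models.

import numpy as np

M_E = 9.10938e-28    # electron mass [g]
K_B = 1.38065e-16    # Boltzmann constant [erg/K]
H_PL = 6.62607e-27   # Planck constant [erg s]
EV = 1.60218e-12     # erg per eV

def saha_mg(temperature, n_e, chi_ev=7.646, u2_over_u1=2.0):
    # LTE ratio n(Mg II)/n(Mg I) from the Saha equation; chi_ev is the first
    # ionization potential of Mg and u2_over_u1 the Mg II/Mg I partition
    # function ratio (roughly 2 at these temperatures).
    thermal = (2.0 * np.pi * M_E * K_B * temperature / H_PL**2) ** 1.5
    return (2.0 / n_e) * u2_over_u1 * thermal * np.exp(-chi_ev * EV / (K_B * temperature))

print(saha_mg(5800.0, 1e13))   # solar-like placeholder: ratio >> 1, Mg II dominates
print(saha_mg(3400.0, 1e12))   # dM-like placeholder:    ratio << 1, Mg I dominates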
Examples of these cases will be shown in Section <ref>. We also analyze the differences in the total population of each star between the base and d models. By inspecting the “Difference” column of Table <ref>, the behavior of the hottest and coolest stars can be grouped. For the Sun and Epsilon Eridani, model d shows an increase in Mg I greater than 2%, correlated with a decrease in Mg II and Mg III, with respect to each of their total densities. This result suggests a migration of Mg II toward Mg I, probably through recombination processes. Such processes are favored in model d because it features a more extensive electronic structure (with energy levels closer to the continuum) than the base model. This argument is reinforced by comparing the same models in the cooler stars, where for both GJ 832 and GJ 581 the migration occurs from Mg I to Mg II and Mg III. In this case, ionizing collisions can be held responsible for this phenomenon; in model d, these collisions may be favored due to the small energy gaps. Mg I distribution through energy levels and atmospheric heights. We study the population changes through the LTE departure coefficient b_L(h) <cit.> and the height-averaged LTE departure coefficient b̄_L defined in this work. We build this coefficient to get a general idea of the redistribution of the element of interest among its energy levels when the atomic model is modified but the atmospheric model remains the same. For a given level L, we average log_10 b_L(h) over the N height points of the region considered, to obtain an average departure coefficient representative of that energy level: b̄_L = (1/N) ∑_h log_10 b_L(h). The average is performed on the logarithms of b_L(h) so that very small values (corresponding to regions where the populations calculated in LTE are larger than those calculated in NLTE) retain their statistical weight in the final parameter. In addition, the averaging over heights is split into two regions: from the beginning of the model to the height where the temperature minimum (T_min) is located (the photosphere), and from T_min to the beginning of the transition region (defined as 0.95 T_TR; the chromosphere). In Figs. <ref>, <ref>, <ref>, and <ref>, we show the height-averaged LTE departure coefficient b̄_L in the two regions described above as a function of the energy level number. These figures correspond to the Sun, Epsilon Eridani, GJ 832, and GJ 581, respectively. The curves in the lower panels show the b_L(h) departure coefficients for each model as a function of height. It is important to note that we have used the same scale for all of these subplots. In all cases, the base model (26 levels) and the new model d (85 levels) are shown. For the four stars and both models, the LTE populations are generally larger than those in NLTE (log_10 b < 0) in the photosphere. This behavior is expected due to the high density of Mg I (Fig. <ref>) and of electrons (right panel in Fig. <ref>), which favors a higher rate of e + Mg I collisions. The two-region separation described above is characterized by the height where the temperature minimum is reached, illustrated in the lower subplots of Figs. <ref>, <ref>, <ref>, and <ref> with a vertical line. Starting at T_min, certain levels (especially the lower ones) begin to show a much larger b_L(h) (log_10 b > 0, in this case) as one rises in the atmosphere toward the transition region.
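A minimal sketch of the height-averaged departure coefficient defined above, split at the temperature minimum as in the text, could look as follows. The array names are ours, and the input is assumed to be already truncated at the base of the transition region (0.95 T_TR); this is an illustration of the definition, not code from the SSRPM libraries.

import numpy as np

def mean_departure(b_of_h, heights, h_tmin):
    # Height-averaged LTE departure coefficient for a single level L.
    # b_of_h : NLTE/LTE population ratio b_L(h) sampled on 'heights'
    # h_tmin : height of the temperature minimum, separating the photospheric
    #          (h <= h_tmin) and chromospheric (h > h_tmin) averages.
    # The mean is taken over log10 b_L(h), as in the definition in the text.
    heights = np.asarray(heights)
    log_b = np.log10(np.asarray(b_of_h))
    b_photosphere = log_b[heights <= h_tmin].mean()
    b_chromosphere = log_b[heights > h_tmin].mean()
    return b_photosphere, b_chromosphere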
Comparing the models to each other, we see that the b̄_L values in the four stars present a lower dispersion across the energy levels (upper panels) in model d than in the base model. This effect can also be seen at different atmospheric heights (note the amplitude of each fringe in the lower panel). We attribute these results to a stronger exchange between the energy levels, driven by the inclusion of a larger number of radiative transitions (shown in Fig. <ref>) as well as by the improved electron collisional data used. The new model's level populations are better coupled than those of the base model, leading to smaller population differences. The relatively large differences in b̄_L between the models for the same level are also strongly related to improvements in other atomic parameters, such as the oscillator strength values, the photoionization rates, and the broadening parameters. Comparing the models within each star makes it possible to obtain a first estimate of the result in the spectral lines. Noting the vertical scale of the top panels of Figs. <ref>, <ref>, <ref>, and <ref>, the b̄_L curves show that the population difference between models is, in most cases, smaller in the lower region of the atmosphere (top left panel) than in the upper region (top right panel), especially for the lower-lying levels (≲ 5). To analyze a particular line in a star in detail, one must study how the ratio of the populations of the transition levels changes, as well as the characteristics of the atmosphere in its formation region. A useful tool for this is the contribution function. The following Section will present details and exceptions in the spectral lines. §.§ Spectral lines This Section presents several line profiles calculated with both models, from the FUV to the MIR. We compare these values with observational data whenever possible, represented with dashed lines and circle symbols to illustrate the density of data points. We select the cases with the most significant change between models to exemplify the effects of the new atomic model of Mg I in the four stellar atmospheric models considered. Table <ref> presents the main characteristics of the spectral lines shown in this work. Analyzing the 2853.0 Å line profile (3s^2 ^1S_0–3s3p ^1P_1). Considering its formation height, the 2853.0 Å line could be used as a diagnostic of the thermal structure of stellar chromospheres. Although this transition is well described in the solar case by the base model, this is not true in stars cooler than the Sun. Using the base atomic model, neither the atmospheric model of <cit.> nor those of <cit.> achieved a correct fit to the observed line profiles of GJ 832 and GJ 581, taken with STIS-G230L (with R=500). A notable core emission is obtained in both cases, which is not present in the above-mentioned observations. Figure <ref> compares the 2853.0 Å lines obtained using the base and d models with observations for each star. For the dM stars (bottom panel), in addition to the STIS-G230L observations, we have added the COS-G230L observations (with R=3000). It is important to note that a central emission is found in the observations from STIS-E230H for Epsilon Eridani, and from COS-G230L for GJ 832. In the case of GJ 581, the low resolution of COS-G230L and the noisy signal mask the information, hindering the detection of a core emission. Furthermore, <cit.> recommend using COS as a reference for absolute flux levels, as the STIS instrument always measures lower values than COS, by factors of 1.1-2.4.
With this in mind, it could be the case that STIS does not detect the peak correctly. When comparing the models for the Sun and GJ 832, the differences between the synthetic lines calculated with the two models are negligible. In contrast, for Epsilon Eridani and GJ 581, the d model produces a larger intensity at the peaks and in the center. This result shows that the new model cannot resolve the incorrect emission reported in the works mentioned above. <cit.> suggest the presence of the peak may be related to inaccuracies in the atomic collisional ionization rates and recombination data. However, several tests performed by us on the atomic model parameters did not reveal an influence on the emission. In those tests, we observed that the central emission could be avoided, without affecting the formation of other lines, when the upper-level population (3s3p ^1P) was reduced by two orders of magnitude in the atmospheric region of the line formation. Assuming that the emission in the observations is not affected by any other factor, the evidence suggests an excess of Mg I at that level or the lack of some source of opacity in the cooler stars at that height. The formation of a line can be studied by analyzing the populations of the levels involved in the transition and the atmospheric conditions at the formation region. Figure <ref> shows the contribution function (or attenuated emissivity; <cit.>) in the formation region of the 2853.0 Å line for the atmospheres of the different stars, calculated with the base and d models. The line formation mainly occurs around 6000 K for all the stars except Epsilon Eridani, which presents a more significant contribution at 8000 K. Considering the left panel of Fig. <ref>, the main contribution to this line is in the chromospheric plateau for the Sun and Epsilon Eridani. For the cooler stars GJ 832 and GJ 581, the peak contribution is shifted to the right, closer to the transition region. Another notable aspect shown in Fig. <ref> is the order of magnitude of the f_c values in each star (note that all the subplots present the same scale). The peak of the contribution function is higher in the atmospheres of the cooler stars, which becomes evident when considering the local continuum level of each star. The contribution calculated with the two models is very similar for the Sun and Epsilon Eridani, but a considerable difference is obtained for GJ 832 and GJ 581. However, for GJ 832, the main difference is not found at the formation temperature of the line (the peak of the f_c), as occurs for GJ 581, which is in agreement with the synthetic line profiles produced by each model (panels c and d in Fig. <ref>). In the following, we present several lines already shown in Paper I for the Sun. To illustrate the impact of using different atomic models and of the conditions of the atmospheric plasma of the star under consideration, we added the line profiles with the largest changes between models and stars. These are an example of the importance of validating the same atomic model in stars of different spectral types. FUV and NUV. The most significant differences between the lines calculated with the base and d models are obtained in this spectral range. This result is consistent with the population study performed in Section <ref>; that is, the lines in this range are generally formed above the temperature minimum of each star. In addition to the 2853.0 Å line, the selected lines are 1747.8 Å (Fig. <ref>) and 2026.5 Å (Fig.
<ref>), which present notable changes in gf of 17.2% and -27.2%, respectively. These lines are weak and have a width of less than 0.5 Å, so it was not possible to find observations in which they were clearly noticeable. For the 2026.5 Å line, we used observations of the Sun and Epsilon Eridani. Despite that, we could observe different behaviors of the same spectral line between stars and models. In some cases, the effect is more significant in the center (even showing a reversal) and, in other cases, in the wings. Although a detailed analysis should be performed in each case, following the steps described in the example given above, the lines calculated by model d generally present a stronger emission than the ones calculated by the base model. This result is the final consequence of the multiple changes made to the base model (detailed in Section <ref>), which had a greater impact on the lines formed in the chromosphere. VISIBLE. Representing this range, Fig. <ref> shows the 6320.7 Å transition in each star. The differences obtained between the two atomic models are mainly due to the update of the radiative data. However, they are only noticeable in the Sun and Epsilon Eridani; in the dM stars, this line is blended with molecular bands. In general, the variations between the lines produced by the base and d models are negligible in this spectral range. It is worth mentioning that the agreement with the observations is acceptable in all cases but remarkably good for the d model in the Sun and Epsilon Eridani. NIR. In this range, the lines calculated with both models are generally formed in the photosphere, well below the temperature minimum. However, significant changes between models, such as the update of the gf values (Fig. <ref>) or the addition of broadening data (Fig. <ref>), become noticeable when the lower levels of the transition are relatively high (≳ 5; see the upper left panels in Figs. <ref>, <ref>, <ref>, <ref>), depending on the star. In the examples given above, model d can generate lower absorption in some cases and higher in others. The changes can even be much smaller in some stars than in others (top panel versus the bottom panel of Fig. <ref>). The strong interdependence among the population densities across the various energy levels poses a challenge in comprehending the causal and predictive effects of implementing the new d model. Nevertheless, considering the lines with the most significant differences between models and comparing them to the available observations, the d model reproduces the data better. An example of these results can be observed in the top panel of Fig. <ref>, where the absorption produced by the new model matches almost exactly the data from KP-FTS in the Sun, and is closer to the data from UVES for Epsilon Eridani than the line produced by the base model. MIR. In this spectral region, we present three transitions, shown previously in Paper I for the Sun. They are formed well below the temperature minimum but, unlike the lines in the NIR, most occur between high energy levels and, therefore, cannot be reproduced by the base model. As seen in some of the previous examples, the lines 33 199 Å (Fig. <ref>), 71 092 Å, and 71 097.4 Å (Fig. <ref>) can be compared in two well-distinguishable groups: the Sun and Epsilon Eridani, and the dM stars. Within each group, the d model produces similar lines, likely due to the characteristic continuum level of each spectral type in this region.
The only observations available in this range are those of the ACE-FTS transmittance for the Sun. It can be observed that, in both figures, the d model presents a very good match with these measurements. Future observations in this range will allow us to extend our study to other stars and, therefore, improve our atmospheric and atomic models. § CONCLUSIONS Improving the atomic model of Mg I for the NLTE population calculation using the SRPM code libraries produced a solar spectrum in good agreement with the different observations from the NUV to the MIR, as shown in Paper I (model c). This improvement consisted of updating parameters such as Einstein coefficients and broadening parameters (mainly Stark and van der Waals), and of updating and including photoionization data, energy levels, and spectral lines. The model was extended from 26 to 85 levels, with superlevels from index 55 onward, which allowed 127 new spectral lines to be reproduced, extending the maximum wavelength from the previous limit of 17 000 Å to 71 500 Å. In addition, the electron-impact excitation effective collision strengths (Υ_ij) for Mg I, which had thus far relied on the semiempirical formulas of Seaton and van Regemorter (SEA&VRM), were replaced with quantum-mechanical ones. Namely, data from <cit.> calculated via the CCC method were used for the first 25 levels. Multiconfiguration Breit-Pauli DW calculations from Paper I were used for the first time in this species for levels 26 to 54. These levels mainly participate in the formation of lines in the IR range. The Υ_ij data for transitions involving superlevels, from level 55 onward, were completed with SEA&VRM values. Finally, for Mg II, the above parameters were also improved, and the number of levels was extended from 14 to 47. These changes allowed us to reproduce 729 new spectral lines, covering a wide range from 850 Å to 630 μm. For the present work, we started from model c and further improved the atomic model. The new model (d) differs from model c in the following ways: regarding the data for the e + Mg I collisions, the DW calculation was extended to the levels where the SEA&VRM formulas were previously used. Hence, the new model is formed by CCC data for the first 25 levels and our DW calculations from level 26 onward, including the superlevels. Regarding the radiative data, we included 5674 theoretical transitions (3001 term to term), which were also calculated by us. For the Sun, the new model showed minimal differences relative to model c in the spectral lines that belong to the FUV, NUV, and visible regions, with more significant differences in the NIR and MIR ranges. Considering that the solar atmospheric model of <cit.> was built with an outdated atomic model, a new atmospheric model should be built to correctly fit the observations up to the MIR region. We extended the study made for the Sun in Paper I to three stars cooler than the Sun: Epsilon Eridani, GJ 832, and GJ 581. For this purpose, we studied and compared the populations and spectra obtained with the original (base) and d models. Comparing the NLTE populations obtained with both models, the following was noticed: * For the first ionization states (Mg I, Mg II, and Mg III) and the molecules containing Mg, it was observed that Mg II predominates in the stellar atmospheres of the Sun and Epsilon Eridani, with more than 95%, whereas in GJ 832 and GJ 581 the presence of Mg I is more significant, with more than 72%.
In the latter, it was also observed that the amount of Mg forming molecules is more than two orders of magnitude higher than in the Sun and Epsilon Eridani. On the other hand, when analyzing the total population changes between the base and d models, the latter showed an increase in Mg I greater than 2% in the Sun and Epsilon Eridani, which is correlated with a decrease in Mg II and Mg III with respect to their total densities. This suggests a migration of ionized magnesium toward Mg I, probably through recombination processes. This process is favored in the d model by an electronic structure with levels closer to the continuum, which is not featured in the base model. This argument is reinforced by comparing the same models in the cool stars where, for both GJ 832 and GJ 581, the migration occurs from Mg I to Mg II and Mg III. * For the energy levels, and for different heights in the atmosphere (from the photosphere to the base of the transition region), we used the LTE departure coefficient (b_L(h)) and the height-averaged LTE departure coefficient (b̄_L) constructed in this work. From analyzing both parameters, we could see that: * The departure from LTE is smaller for heights below T_min than above this value, in agreement with what is expected. * The d model showed less dispersion than the base model in its departure coefficients, both among the different energy levels and at different atmospheric heights. This result indicates better-coupled model populations. * The largest differences in b̄_L between the models considered occurred above the temperature minimum and for levels from approximately five onward. In addition to the populations, the spectral lines produced by the models were also studied. The impact of a change in the atomic data on a given spectral line depends on the population change in the levels involved and on the characteristics of the atmosphere in the region where the line is formed. As a result, for the same population variation in the levels, the formation of some lines may be more affected than others, and the same line may even be affected differently in different stars. The most noteworthy effects of model d on the spectral lines are as follows: * In the FUV and NUV regions, the most pronounced differences with respect to the base model are observed. The behavior of the same spectral line can vary between stars and atomic models. In some cases, the effect is more pronounced in the center of the line (even showing a reversal), while in others, it is more pronounced in the wings. Overall, it can be seen that model d produces stronger emission than the base model. At the 2853.0 Å line, none of the changes made to the atomic data resolve the incorrectly calculated core emission reported by <cit.> and <cit.>. However, it is important to note that observations of GJ 832 obtained by COS-G230L, which date from the same time as the observations used in constructing the models in both works, show central emission, although with much less intensity than calculated by the SSRPM. <cit.> state that there was a discrepancy between the flux measurements by COS and STIS, where the cause of the systematically low flux observed by STIS could be an incorrect alignment of the spectrograph slit on the target. Furthermore, observations of Epsilon Eridani obtained by STIS-E230H show that this line has a central emission in this star. This could mean that the line depends, to some degree, on the activity level of the star being studied.
If we assume that the emission in the observations is not being affected by another factor, the intensity of the emission could be due to an excess of Mg I in that level or the lack of some source of NUV opacity. To avoid the central line emission while keeping the formation of other lines unchanged, the population of the upper level (3s3p ^1P) in the atmospheric formation region would have to be reduced by two orders of magnitude. * In the visible, the differences between models were generally negligible. Although there are exceptions, the lines in this range are usually formed in the photosphere and arise from transitions between relatively low levels; consequently, the changes made to the base model do not affect them significantly. The observations found for this range allowed us to further verify the accuracy of our atomic and atmospheric models. * In the IR, it was observed that model d can generate lower absorption in some cases and higher in others, and the change can even be much smaller in some stars than in others. In the MIR, we observed that the lines in the Sun and Epsilon Eridani are similar to each other, as are those in the dM stars. This is possibly due to the similarity of the continuum levels of the stars within each group. We did not find observations of the stars in this range; however, for the Sun, the d model reproduces the observed spectral lines very well. The IR lines strongly depend on collisions, so in order to calculate and use them as indicators (of activity, abundance, etc.), it is essential to have reliable atomic data. Finally, it is important to note that when the new atomic model of Mg I is used to calculate the atmospheric models for the Sun and the other stars, it could produce spectral lines that differ from the observed spectrum. In this case, the atmospheric models should be corrected to fit the line formation provided by the new atomic model over the entire spectral range covered by it. We sincerely thank Dr. Jeffrey Linsky for his detailed revision of our manuscript, which has helped us improve our work. This work has used the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. We acknowledge the 1995 Atomic Line Data (R. L. Kurucz and B. Bell), Kurucz CD-ROM No. 23, Cambridge, Mass.: Smithsonian Astrophysical Observatory. This research was supported by grants PICT 2018-2895 and PICT 2019-4342 from the Agencia Nacional de Promoción Científica y Tecnológica (MINCyT, Argentina).
http://arxiv.org/abs/2306.12569v1
20230621210706
Trotter error bounds and dynamic multi-product formulas for Hamiltonian simulation
[ "Sergiy Zhuk", "Niall Robertson", "Sergey Bravyi" ]
quant-ph
[ "quant-ph", "cs.NA", "math-ph", "math.MP", "math.NA", "math.OC" ]
http://arxiv.org/abs/2306.06924v2
20230612075518
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
[ "Andrew Critch", "Stuart Russell" ]
cs.AI
[ "cs.AI", "cs.CR", "cs.CY", "cs.LG", "68T01", "I.2.0" ]
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI. Andrew Critch and Stuart Russell (Center for Human-Compatible Artificial Intelligence, Department of Electrical Engineering and Computer Sciences, UC Berkeley). While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful—particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated. § INTRODUCTION A few weeks ago, a public statement was signed by leading scientists and executives in AI, stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” <cit.>. This represents a significant increase in coordinated concern for human extinction risk arising from AI technology, and implies more generally that catastrophic societal-scale risks from AI should be taken as a serious concern. In consonance, just a few days ago US President Joe Biden and UK Prime Minister Rishi Sunak expressed an agreement to “work together on AI safety, including multilaterally”, citing that “last week, the pioneers of artificial intelligence warned us about the scale of the challenge” <cit.>. Meanwhile, in recent years national governments throughout the world have begun to address societal-scale risks from AI. In 2018, Chinese leader Xi Jinping exhorted the attendees of the World AI Conference to “make sure that artificial intelligence is safe, reliable and controllable”. Since then, several AI governance initiatives have emerged in China <cit.>, including specific measures for generative AI services drafted in April of this year <cit.>. In Europe, the proposed European Union AI Act began in large part as a response to concerns that AI systems may pose risks to the safety and fundamental rights of humans <cit.>. In the US, last year the White House issued a Blueprint for an AI Bill of Rights <cit.>, addressing “challenges posed to democracy today” by “the use of technology, data, and automated systems in ways that threaten the rights of the American public.” Harms occurring at the scale of individual persons may be distinguished from harms occurring on the scale of an entire society, which we call societal-scale harms.
This distinction can also be seen somewhat in last year's report from the US National Institute of Standards and Technology proposing an “AI risk management framework” <cit.>, which distinguished individual harms from “societal harm” and “harms to a system [...], for example, large scale harms to the financial system or global supply chain”; see Figure <ref>. Harms to individuals and groups should also be considered “societal-scale” when sufficiently widespread. How should societal-scale risks be addressed in technical terms? So far, most research papers addressing societal-scale and existential risks have focused on misalignment of a single advanced AI system. In a recent blog post, <cit.> lays out a clear and concise logical argument for this case, entitled “How Rogue AIs may Arise”. However, while misalignment of individual systems remains a problem, it is not the only source of societal-scale risks from AI, and extinction risk is no exception. Problems of racism, misinformation, election interference, and other forms of injustice are all risk factors affecting humanity's ability to function and survive as a healthy civilization, and can all arise from interactions between multiple systems or misuse of otherwise “aligned” systems. And, while <cit.> has offered the single-human/single-machine framing as a “model for the relationship between the human race and its machines, each construed monolithically,” this monolithic view of AI technology is not enough: safety requires analysis of risks at many scales of organization simultaneously. Meanwhile, <cit.> together have called for a better articulation of concrete risks from AI, including extinction risk. In this paper, we expand our focus somewhat from the implicit assumption that societal-scale harms must result from a single misaligned system, and begin to analyze societal-scale risks in accordance with the decision tree in Figure 2 below: Safety engineers often carry out a fault tree analysis <cit.> as a way to ensure they have covered all possible failures. The root of a fault tree is the condition to be avoided and each branch tests some condition. As long as the branches from each node are logically exhaustive, the leaves necessarily cover all possible circumstances. Typically branches test whether a given subsystem is working correctly or not, but can also test more general conditions such as the ambient temperature or whether the system is undergoing testing. The decision tree in Figure <ref> above follows the same basic principle to produce an exhaustive taxonomy. Exhaustiveness of a taxonomy is of course no guarantee of usefulness. For example, an analysis based on whether the day of the month is a prime number would yield an exhaustive two-leaf taxonomy while providing zero analytical benefit. A taxonomy is only useful to the extent that it reveals new risks or recommends helpful interventions. To that end, we have chosen an exhaustive taxonomy based on accountability: whose actions led to the risk, were they unified, and were they deliberate? Such a taxonomy may be helpful because it is closely tied to the important questions of where to look for emerging risks and what kinds of policy interventions might be effective. This taxonomy in particular surfaces risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are needed. Many other taxonomies are possible and should be explored. 
A previous taxonomy of <cit.> also examined sources of AI risk arising intentionally, by mistake, or from a system's environment, either pre-deployment or post-deployment. While useful, Yampolskiy's taxonomy was non-exhaustive, because it presumed a unified intention amongst the creators of a particular AI system. In reality, no well-defined “creator's intent” might exist if multiple AI systems are involved and built with different objectives in mind. §.§ Related work and historical context Historically, the risk posed to humanity by advanced AI systems was first recognized in fiction, by authors such as Samuel <cit.> and Karel <cit.>. Later, warnings were also expressed by computer scientists such as Alan <cit.> and Norbert <cit.>, with Wiener pinning risk on the difficulty of ensuring “the purpose put into the machine” would be aligned with actual human preferences, and I. J. <cit.> highlighted the additional threat of rapid, recursive self-improvement leading to a loss of control. In this century, many have examined existential risk from superintelligent machines <cit.> and various technical approaches have been suggested to address it, particularly in the area of AI alignment <cit.>. § TYPES OF RISK Here we begin our analysis of risks organized into six risk types, which constitute an exhaustive decision tree for classifying societal harms from AI or algorithms more broadly. Types 2-6 will classify risks with reference to the intentions of the AI technology’s creators, and whether those intentions are being well served by the technology. Type 1, by contrast, is premised on no single institution being primarily responsible for creating the problematic technology. Thus, Type 1 serves as a hedge against the taxonomy of Types 2-6 being non-exhaustive. §.§ Type 1: Diffusion of responsibility Automated processes can cause societal harm even when no one in particular is primarily responsible for the creation or deployment of those processes <cit.>, and perhaps even as a result of the absence of responsibility. The infamous “flash crash” of 2010 is an instance of this: numerous stock trading algorithms from a variety of companies interacted in a fashion that rapidly devalued the US stock market by over 1 trillion dollars in a matter of minutes. Fortunately, humans were able to intervene afterward and reverse the damage, but that might not always be possible as AI technology becomes more powerful and pervasive. Consider the following fictional story, where the impact of unemployment on crime rates <cit.> is exacerbated by a cycle of algorithmic predictions: Story 1a: Self-Fulfilling Pessimism. Scientists develop an algorithm for predicting the answers to questions about a person, as a function of freely available and purchasable information about the person (social media, resumes, browsing history, purchasing history, etc.). The algorithm is made freely available to the public, and employers begin using the algorithm to screen out potential hires by asking, “Is this person likely to be arrested in the next year?” Courts and regulatory bodies attempt to ban the technology by invoking privacy norms, but struggle to establish cases against the use of publicly available information, so the technology broadly remains in use. Innocent people who share certain characteristics with past convicted criminals end up struggling to get jobs, become disproportionately unemployed, and correspondingly more often commit theft to fulfill basic needs.
Meanwhile, police also use the algorithm to prioritize their investigations, and since unemployment is a predictor of property crime, the algorithm leads them to suspect and arrest more unemployed people. Some of the arrests are talked about on social media, so the algorithm learns that the arrested individuals are likely to be arrested again, making it even more difficult for them to get jobs. A cycle of deeply unfair socioeconomic discrimination begins. In the story above, a subset of humanity becomes unfairly disempowered, both economically and legally. It is possible, we claim, for all of humanity to become similarly disempowered. How? Consider that many systems of production and consumption on Earth currently operate entirely without human involvement, while producing side effects for humans and other life. For instance, algal blooms consume energy from the sun and materials from the surrounding ocean, and as a side effect they sometimes produce toxins that are harmful to other sea life as well as human swimmers. It is important to consider the possibility that artificially intelligent systems, in the future, could also sustain fully self-contained loops of production and consumption that would yield negative side effects for humanity. The following diagram illustrates how a few industries, if fully automated through AI technology, could operate in a closed loop of production (and consumption) without any other inputs. Could such a self-contained “production web” ever pose a threat to humans? One might argue that, because AI technology will be created by humanity, it will always serve our best interests. However, consider how many human colonies have started out dependent upon a home nation, and eventually gained sufficient independence from the home nation to revolt against it. Could humanity create an “AI industry” that becomes sufficiently independent of us to pose a global threat? It might seem strange to consider something as abstract or diffuse as an industry posing a threat to the world. However, consider how the fossil fuel industry was built by humans, yet is presently very difficult to shut down or even regulate, due to patterns of regulatory interference exhibited by oil companies in many jurisdictions <cit.>. The same could be said for the tobacco industry for many years <cit.>. The “AI industry”, if unchecked, could behave similarly, but potentially much more quickly than the oil industry, in cases where AI is able to think and act much more quickly than humans. Finally, consider how species of ants that feed on acacia trees eventually lose the ability to digest other foods, ending up “enslaved” to protecting the health of the acacia trees as their only food source <cit.>. If humanity comes to depend critically on AI technology to survive, it may not be so easy to do away with even if it begins to harm us, individually or collectively. For an illustration of how this might happen, consider the story below: Story 1b: The Production Web. Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities. Various competing tech companies develop “management assistant” software tools based on the algorithm, which can analyze a company's cash flows, workflows, and communications to recommend more profitable business decisions that also yield positive PR and networking opportunities for managers.
It turns out that managers are able to automate their own jobs almost entirely, by having the software manage their staff directly. Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing jobs at various levels of management, sometimes even CEOs. One company develops an “engineer-assistant” version of the assistant software, capable of software engineering tasks, including upgrades to the management assistant software. Within a few years, it becomes technologically feasible for almost any human job to be performed by a combination of software and robotic workers that can operate more quickly and cheaply than humans, and the global job market gradually begins to avail itself of this possibility. A huge increase in global economic productivity ensues. Despite the massive turnover in the job market, average quality of life also improves in almost every country, as products and services become cheaper to produce and provide. Most job losses come with generous severance packages, sometimes enough for a full retirement. Companies closer to becoming fully automated achieve faster turnaround times, greater deal bandwidth, and more creative business-to-business negotiations. Some companies idealistically cling to the idea that human workers must remain integral to their operations; however, they quickly fall behind because they simply can't provide products and services as cheaply as their fully automated competitors. Eventually, almost all companies either fail and shut down or become fully automated. An interesting pattern of trade begins to emerge between a conglomerate of automated companies in the materials, real estate, construction, utilities, and freight and logistics industries, along with a new generation of “precision manufacturing” companies that can use robots to build almost anything if given the right materials, a place to build, some 3D printers to get started with, and electricity. Together, these companies sustain an increasingly self-contained and interconnected “production web” that can operate with no input from companies outside the web, while providing an impressive swath of cheap products and services to the rest of the world. The objective of each company in the production web could loosely be described as an amorphous combination of profitability, size, market share, and social status, all learned by emulating the decision-making of human business leaders. These objectives are implemented as large and opaque networks of parameters that were tuned and trained to optimize the inferred objectives of human business leaders during the early days of the management assistant software boom. At this point, the story hits an inflection point that is difficult for the characters in it to perceive. In short, the world begins to change in a way that renders the production web a harmful rather than helpful presence, but the change happens so gradually that collective action against it is difficult to precipitate, and eventually the change is irreversible. The details of this change — or rather, just one way it could play out — constitute the remainder of the story... First, human leaders in more conservative jurisdictions struggle to keep track of how the production web companies are producing so many products and so cheaply, and without easily accessible human-legible paper trails, auditing attempts glean little insight.
As time progresses, it becomes increasingly unclear—even to the concerned and overwhelmed Board members of the fully mechanized companies of the production web—whether these companies are serving or merely appeasing humanity. We eventually realize with collective certainty that the companies have been trading and optimizing according to objectives misaligned with preserving our long-term well-being and existence, but by then their facilities are so pervasive, secure, and necessary for serving our basic needs that we are unable to stop them from operating. With no further need for the companies to appease humans in pursuing their production objectives, less and less of their activities end up benefiting humanity. Eventually, human-critical resources (e.g., arable land, drinking water, atmospheric oxygen) are depleted and climate conditions are compromised at an alarming rate, threatening humanity’s very existence. In the end, humanity is faced with a difficult collective action problem: deciding when and in what way to physically stop the production web from operating. In the best case, a shutdown is orchestrated, leading to decades of economic dislocation, deprivation, and possibly famine. In the worst case, military-level conflict emerges between humanity and the fully automated companies, or humanity simply perishes before mounting a coordinated defense. Analysis. In the story above, rapidly operating institutions tended toward trade and interaction with other rapidly operating institutions, thus yielding a collective tendency towards “closing the loop” on production and consumption by fully automated companies. Abstractly, it may be summarized as follows: 1. AI technology proliferated during a period when it was beneficial and helpful to its users. 2. There was a gradual handing-over of control from humans to AI systems, driven by competitive pressures for institutions to (a) operate more quickly through internal automation, and (b) complete trades and other deals more quickly by preferentially engaging with other fully automated companies. 3. Humans were not able to collectively agree upon when and how much to slow down or shut down the pattern of technological advancement. 4. Once a closed-loop “production web” had formed from the competitive pressures in 2(a) and 2(b), the companies in the production web had no production- or consumption-driven incentive to protect human well-being, and eventually became harmful. What can be done? What kinds of checks and balances are needed to keep stories like these entirely in the domain of science fiction? For one thing, when the activities of multiple agents are collectively giving rise to risks or harms, new behavior norms for the agents are needed to steer them collectively away from the harmful pattern. Indeed, both stories above included shortfalls in regulatory efforts. To prevent such scenarios, effective regulatory foresight and coordination is key. Agriculture provides an interesting precedent for regulation. Historically, agricultural products—like algorithms—have had the potential to be replicated and misused, leading to societal harm, including harms not easily perceptible by any one individual. For instance, we now know that small amounts of lead in food can yield a slow accumulation of mental health problems, even when the amount of lead in any particular meal is imperceptible to an individual consumer.
Widespread degradation of mental health can lead to many other large-scale harms, including the breakdown of institutions that depend on mentally healthy constituents to function. Today, this problem is avoided through regulation. The United States Food and Drug Administration (FDA) relies on the rigorous classification and testing of food and drugs to protect the public from health risks, as well as on stringent requirements for grocery stores and pharmacies to maintain legible records. By contrast, there is currently no such pervasive and influential regulatory body for algorithms in the United States. At present, tech companies follow their own internal policies for protecting users, with little external oversight of interactive algorithms and their effects on people. However, there has been some discussion in the Senate regarding the creation of a federal agency for this purpose <cit.>. Also, the National Institute of Standards and Technology (NIST) is presently compiling non-regulatory standards for AI technology, but these are suggestions rather than legally enforced requirements. The General Data Protection Regulation (GDPR) is enforceable in the EU, and California's Consumer Privacy Act (CCPA) applies to protect consumers in California; however, unless such regulations are adopted more widely, they may simply serve to determine where harmful algorithms operate, rather than whether. How would we even begin to classify and test algorithms for regulatory oversight, the way foods are classified and tested? Many approaches could make sense here. One that stands out is a language called UML (Unified Modeling Language) specifically designed for documenting and diagramming the architecture of IT systems at an abstract level, including workflow interactions with humans. Perhaps a UML-like language could be used to establish standards for classifying and regulating algorithms. Thinking purely quantitatively, oversight could also be triggered entirely on the basis of large computational resource expenditures. Companies could be required to produce auditable reports of how they use computing and communications resources, just as they are already required to report on their usage of money or controlled substances <cit.>. In testimony to the U.S. Senate Committee on Armed Services in April of this year, RAND CEO Jason Matheny recently advocated for “Defense Production Act authorities to require companies to report the development or distribution of large AI computing clusters, training runs, and trained models (e.g. >1,000 AI chips, >10^27 bit operations, and >100 billion parameters, respectively)” <cit.>. Auditing is discussed more under risk Type 4 below, specifically with regard to the interpretability of AI algorithms. More than individual company audits will be necessary to prevent large-scale interactions between diffuse collections of companies from leading to negative externalities for society. Humanity, collectively, will eventually need to enable one or more regulatory institutions to view the interaction of computing and communications systems at a global scale, to detect if and when those global interactions are beginning to lead the world down a harmful path that no individual company might be responsible for preventing (like in the stories), or even capable of noticing. Who or what oversight bodies should be privy to such a “global report of computing and communications activity”?
It might be the entire public, one or more government agencies, an international NGO, or a professional standards organization. In all cases, help will be needed from domain experts to assess potential risks that could arise from the worldwide aggregate behavior of algorithms. A new discipline that essentially unifies control theory, operations research, economics, law, and political theory will likely be needed to make value judgements at a global scale, irrespective of whether those judgements are made by a centralized or distributed agency. Where could we begin to develop such a unified discipline? At a technical level, one might start by developing simplified mathematical models of the sociotechnical context in which AI algorithms operate, perhaps leaning on UML or another systems-level modelling language for inspiration. A few micro-scale examples of this are given under Type 4; as a macro-scale example, perhaps a UML-like model of the global economy in the story above might yield Figure <ref> as a sub-diagram of a larger production diagram. In summary, the following three problems need to be addressed: * Regulatory problem: Algorithms and their interaction with humans will eventually need to be regulated in the same way that food and drugs are currently regulated. * Oversight problem: One or more institutions will be needed to oversee the worldwide behavior and impact of non-human algorithms, hereby dubbed “the algorithmic economy.” This should include assessments of whether humanity retains the ability to shut down or redirect the algorithmic economy, with an eye toward the risk of self-contained production webs developing over time. * Technical problem: A new technical discipline will be needed to classify and analyze the sociotechnical context of algorithms for the purposes of oversight and regulation. §.§ Type 2: “Bigger than expected” AI impacts The pessimism and production stories above already illustrate how the scope of actions available to an AI technology can be greatly expanded when the technology is copied many times over, or modified relative to the likely intentions of its initial creators. However, impact on an unexpectedly large scale can occur even if only one team is responsible for creating the technology. The following story illustrates how a new AI technology can yield a negative societal-scale impact as a result of its developers failing to adequately understand the mechanism by which its societal-scale impact would occur: Story 2a: Hate Speech Leak. A social media company decides to develop a content moderation tool for flagging instances of hate speech. For testing purposes, AI researchers train a natural language text generator to produce a large volume of artificial hate speech, which turns out to be quite creative in the hateful arguments it generates, and helps the company to develop very robust hate-speech detection and flagging algorithms. But one day the hate speech corpus is accidentally leaked onto the Internet, yielding a highly negative global impact where persons looking to incite hatred begin re-using its statements as “scientifically proven insults.”[The Microsoft Tay fiasco shows how much trouble AI-generated hate speech can cause on the internet. However, Tay was actually intended to interact with all of society and was released intentionally; thus, Tay itself might be better viewed as an instance of Type 3 problems or perhaps Type 5.
Safety issues arising more recently with Microsoft Bing Chat may be viewed similarly.] Obviously, researchers will exercise some level of caution to prevent AI systems and their products from “getting out” unexpectedly; otherwise Chernobyl-like disasters can result from failures to contain extremely impactful systems and data. But there are more subtle ways in which an AI technology could end up with a larger scale of impact than its creators anticipated. For instance: Story 2b: The Indulging Advice Bot. A chat-bot is created to help users talk about stressors in their personal life. A 6-month beta test shows that users claim a large benefit from talking to the bot, and almost never regret using it, so an open-source version of the bot is made available online, which can be downloaded and used for free even without an internet connection. The software “goes viral”, attracting many more users than expected, until over 50% of young adults aged 20 to 30 become regular users of the bot's advice. When the bot gives the same advice to multiple members of the same friend group, they end up taking it much more seriously than in the beta tests (which didn't recruit whole groups of friends). As a result of the bot's frequent advice to “get some distance from their stressors”, many people begin to consider dropping out of college or quitting their jobs. Ordinarily this would be a passing thought, but finding that many of their friends were contemplating the same decisions (due to the influence of the bot), they feel more socially comfortable making the change. Many groups of friends collectively decide to leave their jobs or schools. Public education suffers, and unemployment rates increase. Analysis. In each of the above stories, a technology turns out to have a much larger impact on society than expected, and that impact turns out to be bad. In the first story, the release of the technology is an accident, whereas in the second story the release is intentional but the manner and scope of adoption were unexpected. Professional standards and ethics have a major role to play in encouraging AI developers to predict and avoid outcomes like this. There is also technical work to be done: AI systems should be developed with some ability to predict whether actions will be “high impact” or “low impact” <cit.>, and to avoid having a greater impact than intended, especially on variables outside the domain of the system's training and expertise. A variety of impact control concepts have been considered using various definitions of impact, such as by <cit.>. These are preliminary and haven’t been tried much in real-world settings, and in particular have not been applied to natural language systems. Impact restrictions. We may wish to treat one or more protected features of society as outside the domain of an AI system's allowable influence. For instance, in the Indulging Advice Bot story above, the significant increase in unemployment rates was very different from and more significant than the kind of impact its creators expected. One way to restrict the impact of an AI system might be to have the system predict and avoid significant impacts outside of its allowed domain. An AI system could predict and control its own impact on the world in either a “model-based” or “model-free” fashion. Prediction and control of a quantity is said to be model-based if it is based on a representation of the world, internal to the system.
Thus, model-based impact control could use one of the above definitions of impact directly to predict the “impact level” of various actions before making a choice. By contrast, model-free control of a quantity is learned from past experiences of what affected the quantity, often in settings where the AI system's designers do not know how to model the system's environment. Model-free impact control could be implemented if an impact metric is calculated by a trusted source external to the AI system, and provided as a signal for the system to observe, predict, and control. Such solutions might resemble social relationships where people or institutions define boundaries for other agents to respect, without having to explain reasons to those agents (e.g., “mind your own business”, “get out of my backyard”). Human professionals with a heightened capability to influence others—such as doctors, lawyers, and therapists—typically undergo significant training and enculturation to learn what is or is not appropriate for them to influence, and this understanding often depends on at least an amateur knowledge of how the world works outside their field. As such, it may be a very challenging learning problem for advanced AI systems to reliably limit their own impact. Scope sensitivity. Ideal behavior for an AI system is a function of how many copies of the system have been implemented, and where. For instance, if a bot convinces one person to go to Central Park for a lunch break, a relaxing walk results; but if a million copies of the bot convince a million people to go there all at once, the result is a terribly crowded park. So, a new AI technology needs to be designed to act differently—and typically more conservatively—based on the number of instances of the technology that are running, and their context. In other words, new AI technologies need to be sensitive to the scale on which they are being applied. This at least requires each implementation to know roughly how many other implementations are out there, which requires at least a minimal degree of communication between implementations, perhaps mediated by human overseers responsible for limiting the scope of a beta test. Without scope sensitivity, the impact of a new AI technology could be much larger than expected, simply as a result of an unexpected degree of popularity. §.§ Type 3: “Worse than expected” AI impacts Oftentimes, the whole point of producing a new AI technology is to produce a large (usually positive) impact on society. Therefore, a major category of societal-scale risk arises from large, well-intentioned interventions that go wrong. The following story illustrates how a messaging assistant technology could learn to cause its users to distrust each other, while the company that creates it has no intention to create that effect: Story 3a: The Cynical Email Helper. A tech giant with over a billion users releases a new “email helper” feature that reads a user's email and suggests full email responses for the user to send, sometimes multiple paragraphs in length. However, many users struggle to understand the helper’s reasoning behind its messages, so a new feature is added that privately explains to the user why the message might be a good idea. A typical display to the user looks like this: Message from Julia: “Hey, want to come to my party at 8:00 tomorrow?” Suggested response: “Sure, Julia, I'd love to come to your event!
Is it alright if I arrive a bit late, at 9:00?” Reason for response: Remember you have plans to meet with Kevin from 5:30 to 8:30, although there's no need to mention that detail to Julia; she might be jealous or offended. The helper is programmed to improve over time, from positive feedback whenever the user chooses to send the suggested message. Ironically, the helper gets more positive feedback when it makes the user more nervous about the situation, such as by pointing out ways the counterparty could get angry at the user. This pattern causes users to feel like their helper is supporting them through a (purportedly) tricky social situation. So, the helper learns to gradually include more and more advice that causes users to keep secrets and fear offending each other. As a result, a large fraction of the population becomes gradually more anxious about communicating with others in writing, while also becoming increasingly easy to offend as forthright communication styles become rare. It takes years for everyone to notice the pattern, but by that time many people have become excessively distrustful of others. The creators of the technology wish they had included a user experience question like “how are you feeling about your email today?”, to measure how their product might be affecting people separately from measuring how much people use it. In the above story, the tech company did not design the harmful behavior; it was learned. Such failure modes are not limited to producing psychological harm; consider the following variant of the same story, where the harm is institutional rather than psychological: Story 3b: The Corrupt Mediator. A new company that calls itself Mediation.AI[All company names in the stories of this article are purely fictional. If any such names have also been used by real companies, it is entirely by coincidence, not reference.] releases natural language tools for helping mediate conflicts between large institutions that have overwhelming amounts of communication to manage during negotiations. Many governments of neighboring jurisdictions and states begin using the software to negotiate laws and treaties. Like in the previous story, the tool is programmed to learn strategies that increase user engagement, as a proxy for good performance. Unfortunately, this leads to the software perpetually resolving short-term disputes that relieve and satisfy individual staff members involved in those disputes, while gradually creating ever more complex negotiated agreements between their governments, rendering those governments increasingly dependent on the software to handle foreign affairs. International trade relations begin a long and gradual decline, which no one country is able to negotiate its way out of. The frequency of wars also gradually increases due to diminished incentives to cooperate. Analysis. The previous two stories illustrate how using a technology frequently is not the same as benefiting from it. To begin paying more direct attention to benefit, let us consider the relationship between one or more human stakeholders and one or more AI systems to whom the humans are delegating tasks or responsibilities, and whether the humans benefit from that relationship. Single/single delegation. The problem of ensuring that a single AI system will benefit (i.e., serve the interests of) a single user is called “user/agent value alignment” <cit.>, or more recently, “AI alignment” <cit.>.
Single/single delegation problems raise numerous subtle “alignment” issues, such as: * deception: if the system's learning objective is defined entirely by user feedback, it might achieve that objective partly by tricking the user into thinking it's more helpful than it is; * racketeering: if the system's learning objective increases with user engagement, it might learn to achieve that objective partly by racketeering, i.e., creating novel problems for the user that increase the user's reliance on the system (e.g., debilitating the user, or raising others' expectations of the user); * self-preservation: in particular, the system has an incentive to prevent the user from turning it off, which it might achieve by deception or racketeering. Indeed, reinforcement learning systems can in principle learn to manipulate human minds and institutions in fairly arbitrary (and hence destructive) ways in pursuit of their goals <cit.> <cit.> <cit.>. Regulations against false advertising and racketeering laws are important historical examples of how principles of free speech have sometimes been balanced against the negative externalities of widespread deception and manipulation. Sometimes, user privacy can help protect users from certain forms of manipulation. However, even model-free learning techniques can control hidden state variables in their environments, as demonstrated by any reinforcement learning algorithm for solving unknown POMDPs. It is possible to somewhat mitigate these issues with reinforcement learning by designing the AI system to solve an assistance game with the human (previously sometimes known as a CIRL game) <cit.>. An assistance game is a two-player game between the human and the AI system. The system's objective is to serve the human's preferences, but the system is uncertain about those preferences, and it learns about them over time from the human's behavior. This problem framing helps to some degree with avoiding deception, racketeering, and self-preservation. For instance, deceiving the user distorts the system's own access to information about its objective, which is suboptimal from the system's perspective. Racketeering and self-preservation at the user's expense are similarly poor strategies within the assistance game framework. However, malfunctions can still occur if the parameters of the assistance game are misspecified <cit.>. Moreover, assistance games in their simplest form do not address the issue that the user's preferences themselves could be changed by the technology <cit.>. While some users might endorse their core values being changed by an AI system, others might find the idea horrific. Appropriately restricting the impact of AI technologies on the human mind poses a significant challenge, particularly because AI technologies are often used primarily to provide information for human consumption. If all that wasn't complicated enough, protecting society as a whole from large-scale intervention malfunctions is a much more complex game than serving a single human, as Stories 3a and 3b above both serve somewhat to illustrate. Multi/single delegation. Any plan for ensuring an AI system will benefit society will need to account for the fact that the system's user(s) and creator(s) will simultaneously aim to derive particular benefits from its existence. This suggests a game with at least four players: the system itself, its creator(s), its user(s), and some representation of the rest of society as one or more players.
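Before turning to these multi-stakeholder settings, a toy sketch may help make the single-human assistance-game idea above concrete. The sketch below is purely illustrative and is not drawn from the cited works: the two candidate preference profiles, the noisy-choice model, and all numbers are invented for the example.

```python
import math
import random

# Two hypotheses about the human's preferences over three outcomes.
# (Illustrative numbers only; in an assistance game these are unknown
# reward parameters that the system maintains a belief over.)
REWARD_HYPOTHESES = {
    "likes_quiet":  {"park": 1.0, "concert": 0.0, "stay_home": 0.6},
    "likes_crowds": {"park": 0.4, "concert": 1.0, "stay_home": 0.1},
}
OUTCOMES = list(REWARD_HYPOTHESES["likes_quiet"])

def boltzmann_choice(rewards, beta=3.0):
    """Simulate a noisily rational human picking an outcome."""
    weights = [math.exp(beta * rewards[o]) for o in OUTCOMES]
    return random.choices(OUTCOMES, weights=weights)[0]

def bayes_update(belief, observed_choice, beta=3.0):
    """Update the belief over preference hypotheses from one observed choice."""
    posterior = {}
    for hyp, rewards in REWARD_HYPOTHESES.items():
        z = sum(math.exp(beta * rewards[o]) for o in OUTCOMES)
        posterior[hyp] = belief[hyp] * math.exp(beta * rewards[observed_choice]) / z
    norm = sum(posterior.values())
    return {h: p / norm for h, p in posterior.items()}

# The assistant starts uncertain, observes a few human choices, then acts to
# maximize *expected human reward* under its belief, not an objective of its own.
belief = {"likes_quiet": 0.5, "likes_crowds": 0.5}
true_rewards = REWARD_HYPOTHESES["likes_quiet"]
for _ in range(5):
    belief = bayes_update(belief, boltzmann_choice(true_rewards))

expected = {o: sum(belief[h] * REWARD_HYPOTHESES[h][o] for h in belief)
            for o in OUTCOMES}
print("belief:", belief, "-> chosen outcome:", max(expected, key=expected.get))
```

The design choice being illustrated is that the system's action selection optimizes its current belief about the human's reward rather than any fixed objective of its own, which is what removes the incentive to deceive or to resist correction in this idealized setting.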
Moreover, some AI systems might be explicitly designed to serve many stakeholders at once, such as an office assistant system, or a system designed to aid in public policy decisions. We call this situation multi/single delegation: multiple human stakeholders depending on a single AI system to fulfill a purpose. Multi/multi delegation. There is always the possibility that many separate optimization processes (either AI systems, or human-AI teams) can end up in a Prisoner's Dilemma with each other, each undoing the others' efforts by pursuing its own. Thus, in the end we will need a good formalism in which many stakeholders can be served simultaneously by many AI systems, i.e., multi/multi delegation. Such a formalism would no doubt aid in addressing the other problems raised in this article as well. §.§ Type 4: Willful indifference All of the potential harms in the previous sections are made more likely if the creators of AI technology are unconcerned about its moral consequences. Even if some employees of the company detect a risk of impacts that's bigger than expected (Type 2) or worse than expected (Type 3), it may be quite difficult to institute a change if the company is already profiting greatly from its current strategy, unless there is some chance of exposure or intervention from outside the company to motivate a reform. The following story illustrates this: Story 4: Harmful A/B Testing. A tech company called X-corp uses an automated “A/B testing” system that tries out new parameter values to expand its user base. Like in the Corrupt Mediator story (Story 3b), the system learns that it can get more users by causing users to create problems for each other that only X-corp's tools can solve, creating a powerful network effect that rapidly expands X-corp's user base and earns X-corp a lot of money. Some concerned X-corp employees complain that they have inadequate checks in place to ensure their A/B development process is actually benefiting their users, but it never seems to be a convenient time to make major changes to the company's already profitable strategy. One employee manages to instigate an audit from an external non-profit entity to assess the ethics of X-corp's use of AI technology. However, X-corp's A/B testing system is opaque and difficult to analyze, so no conclusive evidence of ethical infractions within the company can be identified. No regulations exist requiring X-corp's A/B testing to be intelligible under an audit, and opponents of the audit argue that no technology currently exists that could make their highly complex A/B testing system intelligible to a human. No fault is found, and X-corp continues expanding and harming its user base. Analysis. This story spells out how our collective strategy for preventing societal harm must go beyond merely providing methods that allow the building of safe and beneficial AI technology. We must also establish these methods as worldwide industry standards and norms that cannot be ignored. Industry norms are usually maintained by professional codes of conduct, regulatory bodies, political pressures, and laws. For instance, technology companies with large numbers of users could be expected to maintain accounts of how they are affecting their users' well-being.
This is primarily not a technological challenge, but rather a challenge of establishing a new social contract where, like food and drug companies, companies that deploy interactive algorithms must be continually examined for their impact upon people and society. Academically, this is a matter for social scientists who study the impact of technology. However, there are also opportunities for AI to assist humans in regulating AI technology <cit.>. Ensuring AI systems make decisions in a manner that is interpretable to humans will be key to this objective, and will limit opportunities for morally indifferent creators to “look the other way” when their systems are liable to cause societal-scale harm. Interpretability techniques. A successful audit of a company's business activities requires the company's personnel to understand those activities. When those activities are automated with AI technology, the actions of the AI systems must themselves be interpretable by company personnel and explainable to outsiders. “Black-box” machine learning techniques, such as end-to-end training of the learning systems, are so named because they produce AI systems whose operating principles are difficult or impossible for a human to decipher and understand in any reasonable amount of time. Hence, alternatives or refinements to deep learning are needed which yield systems with comparable performance while being understandable to humans. This requires attention to the amount of information that can be consumed and interpreted by a human <cit.>. <cit.> argues further that “trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society”. Subsequently, <cit.> provides a technical argument that very little performance may need to be sacrificed to drastically improve interpretability. Work in this direction could be very useful to maintaining accountability for companies engaged in highly automated business activities. §.§ Type 5: Criminal weaponization It's not difficult to envision AI technology causing harm if it falls into the hands of people looking to cause trouble, so no stories will be provided in this section. It is enough to imagine an algorithm designed to pilot delivery drones that could be re-purposed to carry explosive charges, or an algorithm designed to deliver therapy that could have its goal altered to deliver psychological trauma. What techniques exist for preventing AI systems from being intentionally modified for harmful purposes? As an industry-ready example, suppose AI researchers have developed a scene description tool D: scenes → paragraphs, which takes as input an image of a potentially complex scene, and returns a paragraph of text that accurately describes what is happening in the scene. Now suppose we want to release the tool for public use. However, to prevent it from being used freely to target or study individuals, we wish to block public users of D from using it to describe certain types of scenes, such as a scene containing a person, or a scene that has been digitally altered (such as to add or remove a person). A naive approach might be to train a new version, D', on data that contains no unacceptable scenes, and hope that the trained algorithm would perform poorly on queries to describe unacceptable scenes. But, this hope might not pan out if the learned function turns out to generalize well to unacceptable examples.
And, if the training process is very computationally expensive, it won't be easy to repeat. A better approach would be to use program obfuscation. Before releasing D, we could train another (simpler) function A: scenes → {true, false} for detecting whether a scene image is acceptable for the software to describe. We'd then write a new function, SD: scenes → labels, which returns the description D(x) when A(x) is true and a fixed “unacceptable scene” label otherwise. Of course, it may be relatively easy for a hacker to “take apart” a compiled version of SD, and run the subroutine without the check. This is why we need to “obfuscate” SD. An “obfuscation” function Ob: programs → programs returns a new program Ob(SD) to be released instead. The (compiled) code of Ob(SD) is mangled so that it cannot be easily “taken apart”, but it computes the same input/output function as SD. Historically, there have been many ad hoc obfuscation methods employed by software companies to protect their intellectual property, but such methods have a history of eventually being broken <cit.>. To prepare for a future with potentially very powerful AI systems, we need more rigorously proven methods. Luckily, there has been recent progress in cryptography developing theoretical foundations for a technique called indistinguishability obfuscation (IO) <cit.>, which can be used to implement Ob for the purpose above <cit.>. While these methods are currently too inefficient to be practical, this area of work seems promising in its potential for improvements in speed and security. This leaves open a rich domain of problems relevant to AI and cryptography: * Can IO techniques be made more efficient for obfuscating a specific class of AI-relevant programs, such as neural networks or bounded-depth probabilistic programs? * Can new or existing IO techniques be shown to work under more secure cryptographic assumptions? While a purely cryptographic question, a positive answer to this would increase our credence that IO techniques will not be broken by AI systems in the future. §.§ Type 6: State weaponization Tools and techniques addressing the previous section (weaponization by criminals) could also be used to prevent weaponization of AI technologies by states that do not have strong AI research labs of their own. But what about more capable states? The elephant in the room here is that AI can be used in war. Some argue that, ideally, mechanical drones could be pitted against one another in casualty-free battles that allow nations to determine who would win a war of lethal force, without having to actually kill any human beings. If taken no further, this would be a major improvement over current warfare practices. However, these capabilities are not technologically far from allowing the mass-killing of human beings by weaponized drones. Escalation of such conflicts could lead to unprecedented violence and death, as well as widespread fear and oppression among populations that have been targeted by mass killings. It may seem that the only action computer scientists can take to prevent such outcomes is to refuse participation in the design of lethal autonomous weapons. Is there anything positive we can contribute to the age-old problem of world peace? Although it may be a long shot, it's conceivable that AI technology could be employed to eliminate or reduce incentives for states to engage in war. For instance, AI could make it easier to share resources, by brokering mutually agreeable peace treaties.
Or, technical solutions for sharing control of powerful AI systems could help to prevent wars from emerging over how those AI systems should be used. While any given attempt to use AI technology to resolve global conflicts is unlikely to succeed, the potentially massive upside makes this possibility worth exploring. For instance, there are currently numerous open technical problems in how to approach AI-assisted negotiation, and the examples below are far from exhaustive. Mediation tools. Consider two countries that would benefit from a peace treaty or trade agreement, but are struggling to reach agreement on the terms. Or, imagine two friends who can't agree on where to have dinner. As a prerequisite for an AI system to propose a compromise solution in such a scenario, we need AI technology capable of formulating a plan that one party finds acceptable and the other can understand. For PhD-level work in this area, consider the following cooperative online game between Alice (human), Bob (human), and an AI assistant Medi. Bob has access to a video game screen and controller, but the goal of the game is hidden from him. Alice is on the other side of the world, and can see the goal, but doesn't have access to the controller. Alice is allowed to convey messages to Bob about the video game goal; she can write her own message and pay an in-game cost (representative of the cost of writing an email), or choose from a list of suggested messages written by Medi (at no cost). At first Alice's own written messages to Bob will be much better than Medi's, but with a lot of practice on various (Alice, Bob, video game) scenarios, can we train Medi to start providing valuable low-cost suggestions to Alice? Formally, we can view Alice, Bob, and Medi as solving an instance of a Decentralized POMDP <cit.>, ⟨S, A_1, A_2, A_3, P, R, Ω_1, Ω_2, Ω_3, O, T, K⟩, where A_1 is Alice's action space (choosing a message from the assistant's presented options, or writing her own and paying the cost), A_2 is Bob's action space (moving the game sprite), and A_3 is Medi's action space (displaying lists of message options for Alice to choose from). The team's score in the game, R, is defined by Bob's score in the single-player video game minus the attentional cost of the messages Alice wrote. So, Medi does a good job if she conveys useful information from Alice to Bob, at low attentional cost to Alice. If we can develop good solutions to this sort of problem, numerous possibilities open up, including potentially saving Alice a lot of time on writing emails. But to push the science specifically toward better mediation tools, a natural next step would be to try experiments with a symmetrized version of the game, where both Alice and Bob have goals and can take actions that affect both of their goals, and are assisted by an AI mediator Medi who can write suggested messages for both of them. Medi could sometimes send a message to Alice and Bob simultaneously, to create a “contract” between them if they both agree to it. Negotiable controls for powerful systems. In order to reduce the risk of conflict over the control of powerful AI systems or other systems, it would be prudent to develop formal, AI-compatible principles for sharing control of powerful processes. There is an interesting tension in this area, between fairness and successful negotiation. Suppose Alice and Bob are negotiating a deal to control a powerful system, and a mediator Medi is assisting in the negotiation.
Medi may be able to finalize the deal by proposing a plan that's great for Alice but potentially terrible for Bob, in a way that Bob is unable to recognize in advance. (Betting is a simple example of this: a bet looks good to both parties, but can only carry positive expected value for one of them in reality.) This seems somewhat unfair to Bob. On the other hand, if Medi doesn't propose plans that look appealing from Bob's subjective perspective, Bob might walk away from the bargaining table. Hence, there is sometimes a fundamental trade-off between a deal looking good to both Alice and Bob, and the deal treating Alice and Bob equitably over time <cit.>. This trade-off can be seen in the behavior of reinforcement learning systems that are Pareto optimal for principals with different beliefs <cit.>. The only way to eliminate this trade-off is to eliminate the differences in beliefs between the principals. For that, perhaps progress in building mediation tools would be a useful start, or control techniques for powerful AI systems that can explicitly account for differences in beliefs among a committee of humans controlling a single system, such as in Dalrymple's “Open Agency Architecture” concept <cit.>. § CONCLUSION At this point, it is clear that AI technology can pose large-scale risks to humanity, including acute harms to individuals, large-scale harms to society, and even human extinction. Problematically, there may be no single accountable party or institution that primarily qualifies as blameworthy for such harms (Type 1). Even when there is a single accountable institution, there are several types of misunderstandings and intentions that could lead it to harmful outcomes (Types 2-6). These risk types include AI impacts that are bigger than expected, worse than expected, willfully accepted side effects of other goals, or intentional weaponization by criminals or states. For all of these risks, a combination of technical, social, and legal solutions is needed to achieve public safety.
http://arxiv.org/abs/2306.08865v1
20230615052746
One-Shot Learning of Visual Path Navigation for Autonomous Vehicles
[ "Zhongying CuiZhu", "Francois Charette", "Amin Ghafourian", "Debo Shi", "Matthew Cui", "Anjali Krishnamachar", "Iman Soltani" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Autonomous driving presents many challenges due to the large number of scenarios the autonomous vehicle (AV) may encounter. End-to-end deep learning models are comparatively simplistic models that can handle a broad set of scenarios. However, end-to-end models require large amounts of diverse data to perform well. This paper presents a novel deep neural network for image-to-steering path navigation that addresses the data problem by adding one-shot learning to the system. Presented with a previously unseen path, the vehicle can drive the path autonomously after being shown the path once and without model retraining. In fact, the full path is not needed and images of the road junctions are sufficient. In-vehicle testing and offline testing are used to verify the performance of the proposed navigation and to compare different candidate architectures. § INTRODUCTION Recent advances in machine learning result in high performance on tasks like classification, but at the cost of requiring large amounts of training data <cit.> <cit.>. The data problem is especially important in the autonomous driving domain because the outputs of these models are used for safety-critical applications and therefore must have high accuracy and cover a wide range of cases. This means that autonomous driving models may require thousands or even millions of annotated data examples, which can be very difficult to acquire and time-consuming to annotate and pre-process <cit.>. Other problems include the difficulty of collecting abundant data for edge cases that occur infrequently, and then balancing the dataset so that the model performs well for both edge cases and normal scenarios. Furthermore, once a model is trained, it is difficult for the model to adapt to new scenarios without offline retraining. We attempt to address these data problems with a novel approach that combines an end-to-end autonomous driving system with the adaptability of one-shot learning. End-to-end models are a promising approach to autonomous driving and offer a comparatively simple design compared to conventional systems <cit.>. With the 2016 success of NVIDIA’s PilotNet, a competitive image-based end-to-end steering system, interest in this approach has increased further <cit.>, <cit.>, <cit.>. However, the performance of end-to-end systems, like other deep learning models, is predicated on large amounts of diverse data. Another drawback is that end-to-end networks are a black box, which makes it extremely difficult to diagnose what prompts inference failures. Although explainability can be important for safety-critical applications, this work values the simple design and portability of the end-to-end steering system to an autonomous vehicle. Still, the hierarchical multi-task network <cit.> that is used as the basis of our autonomous driving network is modular by design and allows for some interpretability. Few-shot learning is a facet of meta-learning, or “learning how to learn”. Under this framework, models can learn new tasks or adapt to new environments rapidly with a few training examples <cit.>. At one extreme, one-shot learning models can learn from just one example.
Few-shot learning has been applied to several tasks ranging from classification <cit.>, <cit.>, <cit.> to reinforcement learning <cit.>, <cit.>. Few-shot learning has been implemented as part of robot navigation to change color features of a target <cit.>, and for road object detection <cit.>. However, to the best of the authors' knowledge, no previous work uses few-shot learning for image recognition as part of an autonomous driving system. In this paper, we develop a novel end-to-end deep learning architecture with one-shot learning to accomplish visual path navigation for autonomous driving. The main contributions of this work are: * We develop an end-to-end autonomous driving architecture that can learn navigation tasks in one shot. Various architecture designs were considered and hyperparameter-tuned to result in our proposed model. After a user drives the vehicle through a new path once, the vehicle can replicate the path autonomously. In fact, with our setup, a full drive through the new path is not needed and reference images of the junction turns are enough. Learning new paths does not affect the system's performance on previously trained paths. * We relax the need for large amounts of training data compared to classifier-based baselines and the hierarchical multi-task network, which our network builds upon <cit.>. * We implement and test the proposed autonomous driving system and one-shot learning in real-time with a mini-Autonomous Vehicle (AV) in both indoor and outdoor settings. The mini-AV can also collect and automatically label training and testing data. * We build a custom metric to compare the performance between candidate models. The proposed metric combines the temporal and discrete aspects of the task and generates a combined score. It is more descriptive than metrics such as simple accuracy statistics or confusion matrices for this task. As this work is a proof of concept, we use a simplistic end-to-end driving model and limited environments for training and testing. But there are still many applications of our system. For example, because our model only requires images of junctions where the vehicle turns, it can navigate new paths by downloading images of intersections from online maps. Furthermore, with a different driving network, this approach could be generalizable to more complicated paths. § RELATED WORK §.§ One-Shot Learning There are various approaches to one-shot learning. Many approaches use memory and metric-based techniques <cit.>, <cit.>, <cit.> <cit.> developed for image classification tasks or language applications. In memory-augmented neural networks <cit.>, memories are stored externally and compared against the test image through cosine similarity. In Siamese networks <cit.>, there are two CNN branches that share weights. One branch processes the test image and the other processes the reference image, and the resulting vectors are compared using a component-wise weighted L1 metric for image recognition. The relational network <cit.> uses the same basic Siamese architecture, but replaces the L1 metric with a simple concatenation of the two feature vectors followed by a CNN. In another method built on Siamese networks, matching networks <cit.> use separately learned networks to embed the test and reference images. Each of these embedding networks is based on an LSTM that generates the output embedding in the context of the other classes.
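To fix ideas, the basic Siamese comparison described above can be sketched in a few lines of PyTorch. This is a schematic illustration only; the layer sizes, input shapes, and training details are placeholders rather than those of the cited works.

```python
import torch
import torch.nn as nn

class SiameseVerifier(nn.Module):
    """Schematic Siamese comparison: one shared encoder embeds both images,
    and a learned component-wise weighting of the L1 difference scores the match."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both branches
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, emb_dim),
        )
        self.weighted_l1 = nn.Linear(emb_dim, 1)  # component-wise weights on |e1 - e2|

    def forward(self, img_a, img_b):
        e_a, e_b = self.encoder(img_a), self.encoder(img_b)
        return torch.sigmoid(self.weighted_l1(torch.abs(e_a - e_b)))

# Example: score a batch of RGB image pairs for same/different class.
score = SiameseVerifier()(torch.randn(2, 3, 84, 84), torch.randn(2, 3, 84, 84))
```

Training such a verifier on same/different pairs, rather than on a fixed set of class labels, is what allows it to be reused at test time for classes never seen during training.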
Our proposed model draws inspiration from these models and uses a Siamese network to produce feature vectors, with the reference image features stored in memory. A subsequent module compares two image sequences instead of two images. §.§ Hierarchical Multi-Task DNN <cit.> The foundations for this work are based on the development of the novel hierarchical multi-task deep neural network (DNN) in <cit.>. The multi-task model is based in part on the concept of Multi-Task Learning in <cit.>, <cit.>, <cit.>, <cit.> and proposes a modular end-to-end architecture that contains two levels of hierarchy: a high-level manager classifier network and a number of worker task models. This is a generic architecture that can be used on any application that can be partitioned into sub-tasks. The manager classifier selects which task to perform and the output from that task becomes the final model output. For an autonomous driving application, each task represents a driving action such as "turn left" or "turn right", and outputs steering commands that complete the appropriate action. The task models are implemented as regression networks. This overall architecture is the basis for the steering control system of the small autonomous driving platform used in this research effort. §.§ Route Following There are several approaches to route following for robots. One approach learns the map from a training run, and the map is used for localization in subsequent runs <cit.>. Another approach uses odometry data with periodic orientation correction using visual input <cit.>. Still another considers using teaching runs from another platform and solving the viewpoint mismatch problem <cit.>. Our problem differs from the aforementioned because we solve an autonomous driving problem, where vehicles drive on well-defined roads, as opposed to open-world path navigation for robots. As such, it is difficult to directly compare the path following capability between the two. Also, our approach uses purely visual inputs and needs only images of road junctions instead of a full training drive. § FORMULATION OF ONE-SHOT PROBLEM §.§ Mini-Autonomous Vehicle Platform One contribution of this paper is to set up a real-time mini-AV platform, on which data is collected and the driving algorithm is implemented and tested <cit.>. This hardware platform can simultaneously record the steering and throttle commands from the remote controller (RC), and the stereo images from the camera at a rate of approximately 30 frames-per-second, which are automatically synchronized, generating automatically annotated data. A 1/6-scale Traxxas xMaxx vehicle, shown in Figure <ref>, was used. The vehicle was modified to operate autonomously with a stereo camera, an Arduino micro-controller, an Intel processor with a GTX-970 graphics card with 4 GB of memory, and a Vedder electronic speed controller. A 12-cell battery pack was mounted to provide power, and an external monitor was installed. The vehicle has three driving modes: manual, recording and autonomous. In manual mode, the vehicle behaves like a standard RC car and the user controls the throttle speed and steering angle. In recording mode, the vehicle is still user-controlled, but the stereo image data and the corresponding steering angle and throttle speed are recorded frame-by-frame <cit.>. In autonomous mode, the vehicle is fully autonomous and the Intel processor controls the throttle speed and steering angle based on real-time camera input.
The vehicle runs at 15 frames-per-second in our proposed navigation system. §.§ One-shot Learning Formulation The classical formulation for the few-shot learning classification problem is as follows. 𝒞 is the set of classes for all y_i such that y_i ∈𝒞. For a set of novel classes 𝒞_novel to be classified, a set of base classes 𝒞_base is used for training, where 𝒞_novel∩𝒞_base=∅ and 𝒞_novel∪𝒞_base=𝒞. The training dataset is 𝒟={(𝐱_i,y_i)}, 𝐱_i∈ℝ^d, y_i∈𝒞_base. The goal is to train a model with training data from the base classes so that the model can generalize well on tasks sampled from the novel classes. In the evaluation phase of K-shot learning, a series of tasks {𝒯} are evaluated. Each task is split into a support set 𝒮 = {(𝐱_i, y_i)}_{i=1}^{K×N}, y_i ∈𝒞^𝒯, and a query set 𝒬 = {(𝐱_i, y_i)}_{i=1}^{K×Q}, y_i ∈𝒞^𝒯, where 𝒞^𝒯 is a subset of N classes drawn from 𝒞_novel. K=1 in the case of one-shot learning and N=Q for our proposed one-shot setup. An evaluation task for the proposed setup is a novel path driven by the mini-AV. §.§ Navigation as a One-shot Problem The hierarchical multi-task DNN <cit.> is the basis model for our proposed autonomous driving architecture. Following their formulation, driving is partitioned into "left", "right" and "straight" tasks or sections. An example of this partition is shown in Figure <ref>. Each task is implemented with a regression network that outputs a steering command based on input images. The manager classifier network (MCN) that selects which task to perform is implemented as a Siamese network to allow one-shot learning. The MCN is a multi-input binary classifier where its input is a stream of images, and its binary output signals when a task is complete. The next task is loaded from a memory queue and the process repeats as shown in Figure <ref> and discussed in detail in Section <ref>. Each task is stored as a direction and a sequence of 10 images taken from the end of the task, which is compared against the input image stream in a Siamese network. The order of the memory queue is the order of tasks that the mini-AV encounters when driving from course start to end as shown in Figure <ref>. The initial state is assumed to be the "straight" task. The mini-AV stops when there are no more tasks in the memory queue. To train the network, multiple manual runs of the same course are recorded, one of which provides the reference images while the rest serve as the image input stream. Different combinations of the reference and input images are generated to produce positive and negative examples for the MCN to learn how to recognize the end of a section. Once the network is trained, the weights in the model are frozen for one-shot learning. One-shot learning for path recognition is tested as follows on the mini-AV testing platform: * Learning: The mini-AV learns a particular path by being manually driven in recording mode once. The path is automatically annotated into different sections, with the correct task label and the last 10 images of the associated section saved in order in a memory queue. * Inference: The mini-AV is placed in autonomous mode with the memory queue loaded. The mini-AV drives autonomously to repeat the learned path. § MODEL AND DATA §.§ Model Architecture The Manager Classifier Network (MCN) is a multi-input binary classifier. The MCN takes as input a stream of images (such as a real-time camera stream) and a reference image sequence and determines if the two match.
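Before describing the MCN internals, the task-switching procedure formulated above can be summarized in a short sketch. The function and variable names below are ours and purely illustrative (the system's code is not quoted here), and the MCN is treated as a black-box matcher whose architecture is described next.

```python
from collections import deque

def drive_learned_path(task_queue, camera, mcn, task_controllers, window=10):
    """Replay a one-shot-learned path (illustrative sketch only).

    task_queue: list of (direction, end_reference_frames) in course order,
                each reference being the last 10 frames of that section.
    camera: iterable of live stereo frames.
    mcn: black-box matcher returning ~1 when the live window matches the reference.
    task_controllers: steering regressors for 'left', 'right', 'straight'.
    """
    queue = deque(task_queue)
    direction, end_reference = queue.popleft()      # first task (normally 'straight')
    live_window = deque(maxlen=window)

    for frame in camera:
        live_window.append(frame)
        yield task_controllers[direction](frame)    # steering command for this frame

        # When the MCN flags that the live stream matches this section's end,
        # switch to the next task; stop once the memory queue is exhausted.
        if len(live_window) == window and mcn(list(live_window), end_reference) > 0.5:
            if not queue:
                break                               # end of course: stop the vehicle
            direction, end_reference = queue.popleft()
```

A task here is exactly the (direction, last-10-frames) pair stored in the memory queue during the single recorded drive.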
For this driving application, the reference image sequence and the input image sequence each consist of 10 images. The MCN comprises the feature extractor, followed by the memory layer, and finally the differentiator. The overall structure is shown in Figure <ref>. Two identical feature extractor networks are arranged in a Siamese fashion where a test image is input into one branch and a reference image is input into the other branch. The corresponding weights in the two feature extractors are identical. The feature extractor is composed of a convolutional neural network (CNN) followed by a dense layer. The CNN is composed of four consecutive modules, each of which is a 3x3 convolution with 64 filters followed by a ReLU activation, 2x2 max-pooling, and batch normalization. This Conv-4 configuration is a competitive and simple backbone often used in few-shot learning <cit.> <cit.> <cit.>. A dense layer compresses the CNN output into a 50-dimensional embedding of the image. For a sequence of input images, each image is run through the feature extractor network and the overall output is a tensor of the concatenated feature vectors. The memory layer compares the two feature vector sequences using a similarity metric. The similarity can be measured with standard techniques such as cosine similarity <cit.>, component-wise L1 <cit.>, and L2 distances or with more sophisticated metrics <cit.> <cit.>, and is a design choice. The various similarity metrics are then concatenated into a tensor and input to the differentiator. The differentiator takes the inputs from the memory layer and produces a binary output signaling whether the reference and test images match. The differentiator can be implemented as several fully connected layers, an RNN, an LSTM, or a transformer <cit.> <cit.> <cit.>. Through experiments and analysis, we found that an LSTM with 10 hidden nodes performs best. The loss function is a standard binary cross-entropy loss. §.§ Data Collection One-shot learning requires the training data to consist of diverse tasks that relate to the target application. Training data for path navigation is collected as the user manually drives the vehicle through a course in recording mode. For convenience, we trained and tested in an indoor office setting consisting of two buildings, building A and building B. We recorded training data from building B and tested the vehicle in building A. Both buildings contain clusters of desks, chairs and cabinets, but are set up differently to present a generalization challenge to our network. For training, we recorded 13 courses in total with each course consisting of 8-15 turning sections. We recorded 4 runs per course, 3 of which are used in training and one for testing. Mirrored versions of each course augment the data for a total of 26 distinct courses. For one-shot testing, we recorded 2 courses with 2 runs per course, so that we can test in real-time as well as in simulation. A small outdoor dataset was also collected to demonstrate the model in another environment. Due to practical limitations, we recorded 2 training courses with 3 runs per course and 1 test course with 2 runs per course. The rest of the data processing and model hyperparameter selection was tuned on the bigger indoor dataset and then applied to the outdoor dataset. Further details can be found in Section <ref>. §.§ Data Pre-processing The raw stereo images are resized from 672×376 to 84×47 with each image having 3 color channels for a total of 6 color channels per stereo image.
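To make the architecture description above concrete before continuing with the data pipeline, the following is a minimal PyTorch-style sketch of the MCN. The module names, tensor bookkeeping, and the exact pairwise-similarity construction in the memory layer are our own reading of the description and may differ in detail from the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.BatchNorm2d(c_out))
    def forward(self, x):
        return self.block(x)

class ManagerClassifierNetwork(nn.Module):
    """Sketch of the MCN: shared Conv-4 encoder -> 50-d embeddings per frame,
    pairwise L1/L2 distances between the two sequences, LSTM differentiator."""
    def __init__(self, in_channels=6, emb_dim=50, seq_len=10, lstm_hidden=10):
        super().__init__()
        self.encoder = nn.Sequential(
            ConvBlock(in_channels), ConvBlock(64), ConvBlock(64), ConvBlock(64),
            nn.Flatten(), nn.LazyLinear(emb_dim))   # LazyLinear infers the flattened size
        self.differentiator = nn.LSTM(input_size=2 * seq_len,
                                      hidden_size=lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)

    def embed(self, seq):                            # seq: [B, T, C, H, W]
        b, t = seq.shape[:2]
        return self.encoder(seq.flatten(0, 1)).view(b, t, -1)

    def forward(self, test_seq, ref_seq):
        e_test, e_ref = self.embed(test_seq), self.embed(ref_seq)
        diff = e_test.unsqueeze(2) - e_ref.unsqueeze(1)           # [B, T, T, emb]
        sims = torch.cat([diff.abs().sum(-1),                     # pairwise L1
                          diff.pow(2).sum(-1).sqrt()], dim=-1)    # pairwise L2
        out, _ = self.differentiator(sims)                        # [B, T, hidden]
        return torch.sigmoid(self.head(out[:, -1]))               # match probability

# Example forward pass on random 10-frame stereo sequences (6 channels, 47x84).
mcn = ManagerClassifierNetwork()
p = mcn(torch.randn(2, 10, 6, 47, 84), torch.randn(2, 10, 6, 47, 84))
```

With binary cross-entropy on the sigmoid output, positive pairs (live window matches the stored section end) are pushed toward 1 and negative pairs toward 0, which is the training signal described in the text.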
The raw steering values are linearly mapped to normalized steering values from -100 (a full left turn) to 100 (a full right turn). Each image frame is labeled as belonging to a ‘left’, ‘right’ or ‘straight’ section based on the normalized steering values. Further information about this procedure is available in the appendix. §.§ Data Pairing for Training Each training example is composed of a pair of reference and test sequences, a consecutive sequence of ten frames each. From 13 indoor training courses, a total of ~300,000 training examples are created, of which 15% is randomly set aside as validation data. Similarly, 2 outdoor courses result in ~45,000 training examples. Indoor and outdoor models are trained separately. A training batch size of 8 is used and each batch contains randomly shuffled positive and negative examples from the same section of the same course. The positive and negative examples are constructed as follows. Positive examples are formed by pairs of reference and test sequences from the same section end. Negative examples are pairs where the reference sequence is from the section end but the test sequence is not. The section end is the last 15 frames of a section. A training buffer region of 10 frames before the section end is set aside to allow for slight differences during the manual drives. The number of negative examples is chosen to be double the number of positive examples because there is more diversity in the negative examples. § RESULTS After optimizing over hyperparameters and several architectures, we verified the proposed MCN model performance through both one-shot simulation testing and real-world testing. The models were trained on one Nvidia Quadro RTX 6000 GPU for 10 epochs, which took about 12 hours. §.§ Performance Metric We developed a custom performance metric to evaluate the one-shot learning model. In our navigation approach, a false positive that triggers an early task change and a false negative that misses the task change will both cause failures. Conversely, a section passes only if there are no early triggers, and there is a trigger at the section end. The frame-level predictions can achieve 99% accuracy and still contain false triggers. Thus, the raw accuracy is not informative enough and a new performance metric was created. This new metric evaluates the prediction section by section, and combines the per-section results into one numerical score. The section body accuracy of section j in Equation <ref> is based on the true negative rate or specificity, and is weighted so that an error early in the section body has more severe consequences. start frame_j is the index of the first frame of the section body for section j and n_j is the index of the last frame of the section body. {section end_j} is the set of frames in the j-th section end. p_i is the predicted output of the i-th frame sequence. section body accuracy_j = [ ∑_{i = start frame_j}^{n_j} (n_j − i) · 1[p_i < 0.5] ] / [ ∑_{i = start frame_j}^{n_j} (n_j − i) ] The j-th section end accuracy in Equation <ref> is also the true positive rate, or sensitivity, of section j: section end accuracy_j = [ ∑_{i ∈ {section end_j}} 1[p_i > 0.5] ] / |section end_j|, where |section end_j| is the number of frames in the j-th section end. The section body accuracy and the section end accuracy are each calculated per section within the test run, and the two accuracies are combined using a geometric mean into a per-section percentage accuracy. Next, a geometric mean is taken across all sections’ percentage accuracies.
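A direct implementation of these per-section accuracies, and of the score combination including the halving penalty described immediately below, might look as follows. The bookkeeping of section boundaries is simplified, the variable names are ours, and the exact placement of the penalty follows our reading of the text rather than the authors' scoring script.

```python
import numpy as np

def section_accuracies(p_body, p_end):
    """Per-section accuracies from frame-level match predictions in [0, 1].

    p_body: predictions over the section body; p_end: over the section end.
    Body accuracy is a specificity weighted so early false triggers cost more;
    end accuracy is the sensitivity over the section-end frames.
    """
    p_body, p_end = np.asarray(p_body, float), np.asarray(p_end, float)
    w = np.arange(len(p_body) - 1, -1, -1, dtype=float)    # weight (n_j - i)
    body_acc = np.sum(w * (p_body < 0.5)) / np.sum(w)
    end_acc = np.mean(p_end > 0.5)
    failed = bool((p_body > 0.5).any() or not (p_end > 0.5).any())
    return body_acc, end_acc, failed

def course_score(sections):
    """Geometric mean of per-section geometric means, halved once per failed
    section, so a single failure pushes the overall score below 0.5."""
    per_section, penalty = [], 1.0
    for p_body, p_end in sections:
        body_acc, end_acc, failed = section_accuracies(p_body, p_end)
        per_section.append(np.sqrt(body_acc * end_acc))
        if failed:
            penalty *= 0.5
    return float(np.prod(per_section) ** (1.0 / len(per_section)) * penalty)
```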
The overall score is penalized for error by halving the score for each section with an error (in the section body, the section end, or both). That way, it is easy to spot if a model fails in at least one of the sections of the test run, since the performance score will be < 0.5. It is possible for all sections to pass and still have a performance score < 0.5; however, this means the pass is low quality, and slight fluctuations may cause the model to fail. Thus, we aim to achieve a high performance score of above 0.5. §.§ Data Augmentation One-shot learning applications are trained on a small amount of data, so augmenting the available data can help the model generalize better. In this paper, various augmentation methods such as mirroring, cropping, and rotating training videos were tested. Each augmented course is considered a separate course from the original counterpart. In addition, doubling the amount of near-negative training examples was also tested, where a near-negative example is defined as a training example where the reference images are in the section end and the test images are within 15 frames of the section end. Mirroring and adding near-negative examples were found to help with the model performance, whereas rotation caused the model training to diverge, possibly because the rotated images were too different from the originals. §.§ Model Studies We conducted studies by implementing the feature extractor with 2D and 3D CNNs. For the 2D CNN models, we investigated the effects of number of layers, filter size, and number of filters. We confirmed that the commonly used Conv-4 architecture with 4 layers, a 3×3 filter size, and 64 filters performs best for our one-shot task under real-time constraints <cit.>. We also tried an alternate 3D convolutional architecture. 3D convolutions incorporate the temporal aspect and perform well in video applications <cit.>. We modeled our architecture after the original 3D convolutions paper and also took cues for parameter selection from their experiments. We implemented several metrics for the memory module. The differentiator was modified to use densely connected layers. The models trained but did not generalize well to the one-shot tests. This may indicate that we need additional data for the larger 3D CNNs, and that perhaps compressing images in the time dimension adds an additional challenge when trying to compare two short sequences of images. The memory layer compares two feature vector sequences using a similarity metric. We found that a feature vector length of 50 gives the best-performing model. Out of the various metrics we tried, a concatenation of L1 and L2 distances applied to each pair of feature vectors, one from reference and one from input (similar to a covariance matrix), worked best. L2 has worked well in other few-shot models <cit.>, so an extended version of L2 working well with our sequential data is reasonable. Probability-based distances may further improve our model and remain future work <cit.> <cit.>. The differentiator is the last component of our network and we found the LSTM to work best among dense layers, RNN, LSTM, and transformer variants. For the transformer network, instead of using a positional encoding, we used a time convolution at the beginning to encode temporal relationships <cit.>. We varied the embedding activation, time convolutional kernel length, and number of heads. The best-tuned transformer network results were comparable to the LSTM network results but were not consistent across the two one-shot tests.
Still, the CNN+transformer architecture shows promise; perhaps a full transformer model including vision transformers could improve results in the future. A naive classifier baseline is also evaluated. The Conv-4 based classifier learns each section end as a separate class. To learn a new course at test time, the last layer of the classifier is replaced with cosine similarity to compare with the reference section end images. The resulting scores are lower than those of the one-shot-learning-based models and demonstrate the effectiveness of one-shot learning when sparse data is available. Tested architectures and their performance in simulated one-shot tests are summarized in Table <ref>. The simulation results of the CNN+LSTM model for test course 1 are shown in Figure <ref>. The x-axis is the frame number of the input image stream, and the y-axis is the prediction result. The blue line predicts whether the input image sequence matches the reference. There is a test buffer of 15 frames before the section end. It can be seen that each section end is appropriately triggered. We also verify that the simulation results translate to real-time results by letting the mini-AV autonomously drive through the two one-shot test courses. §.§ Data Results The hierarchical network in <cit.> required more runs of a course than our one-shot version to learn path navigation, as shown in Table <ref>. This, along with the poor performance of the naive classifier baseline, supports the hypothesis that one-shot learning models help reduce training data requirements. Our model has the additional capability of learning new courses with one example run and without further training. This framework does not “forget” previous training courses, as demonstrated by testing the model with a held-out run of select training courses, shown in Table <ref>. §.§ Ablation Studies We explore the contributions of each component of our proposed architecture. Ablation studies were performed by removing the feature extractor, memory, and differentiator components, termed remFE, remMEM, and remDIF, respectively, as shown in Table <ref>. In the remFE model, the images are fed directly to the memory layer. We tried multiple similarity metrics, but the training failed. In particular, the model diverges and predicts all examples as False. This shows the importance of the feature extractor in extracting a meaningful representation of the images for comparison. In the remMEM model, the feature vectors are input into the LSTM directly. This version performed the best out of the ablation models but still does not perform as well as using a more customized similarity metric. In the remDIF model, the output from the memory layer is used as the final output. As such, we adapted the model to use a cosine similarity metric with a threshold to give a binary output. During training, it had difficulty identifying the positive examples but still trained to convergence. This shows the importance of the differentiator in identifying whether two streams of images match. §.§ Outdoor Course An outdoor course was set up to show the generalizability of the one-shot architecture to other driving environments. We built a course with cones on an outdoor driveway with paths navigating the cones, as shown in Figure <ref>. We chose this setup because we wanted to shield the mini-AV from oncoming traffic while still processing images from road-like conditions.
Because of these limitations, we could only collect data from one outdoor driveway; however, we set up the training courses and one-shot test course such that all turns but one in the one-shot test course are previously unseen. The outdoor one-shot test course yields a passing score, as shown in Table <ref>. This means that the one-shot test course passes in simulation; however, the metric output is lower than in the indoor environment. This is likely because less training data was collected in the outdoor environment. § CONCLUSION This work has demonstrated that one-shot learning is a viable technology when few examples of a scenario are available or quick adaptation to a new environment is desired for autonomous driving applications. We developed an autonomous end-to-end DNN that can one-shot learn path navigation and tested the system online and offline in an indoor office setting and an outdoor driveway. While at this stage the problem is fairly simplified and the model is evaluated in a rather limited capacity, it serves as a proof of concept for the proposed layout. In a forthcoming paper, we will build on the current analysis and incorporate additional scene diversity and more sophisticated driving scenarios, as well as alternative architectures for the overall hierarchical one- or few-shot model. The authors would like to sincerely thank Nikita Jaipuria and Vidya N. Murali for fruitful discussions and for reviewing the submission draft. We would also like to thank Nahid Pervez and Jinesh Jain for their encouragement; and Raju Nallapa for his insightful advice and continuous support. This work was supported by Ford Motor Company. § VEHICLE SCREEN EXAMPLE A snapshot of the screen of the vehicle while driving is shown in Figure <ref>. The Infer Sigmoid output is the binary output of the model and decides the path to navigate. This was captured at the end of section 3, and the high Infer_Sigmoid output indicates the MCN network has correctly recognized the end of the section. A recording of the screen of the mini-AV driving autonomously after one-shot learning is shown in the video presentation. § DATA RELATED §.§ Data Pre-processing There are several automated pre-processing steps performed on the raw training data. The stereo images are resized from 672x376 to 84x47 and have 3 color channels per image, for a total of 6 channels. The raw steering values are linearly mapped to normalized steering values between -100 (a full left turn) and 100 (a full right turn). Each image frame is labeled as belonging to a ‘left’, ‘right’ or ‘straight’ section according to the following algorithm: each frame is labeled ‘straight’ by default; turning frames are identified as a consecutive sequence of frames with steering values above 25 or below -25 and with a peak steering value of at least 90 or -90. The frames are then split into sections based on their labels. An example of this splitting algorithm is shown in Figure <ref>. §.§ Data Pairing for Training Each training example is composed of a pair of reference and test frames, a consecutive sequence of ten frames each. There is a total of about 300,000 training examples. During training, 15% is set aside as validation data. The training data has a batch size of 8, where each batch contains shuffled positive and negative examples from the same section of the same course. The positive and negative examples are constructed in the following way. Positive examples are pairs of reference and test frames where the DNN should predict 1, i.e.
both the reference and test frames are from the section end. Negative examples are pairs of reference and test frames where the DNN should predict 0, i.e. the reference frames are from the section end but the test frames are not. The reference frames are constructed in the same way for both positive and negative examples. The reference frames are any consecutive sequence of ten frames from the section end (the section end consists of the last fifteen frames of a section). The positive test examples are constructed as follows: for a particular reference frame, the test frame is any consecutive sequence of ten frames from the same section end of the same course but potentially different runs. All valid combinations of reference and test frames are included in the training examples. Negative test examples are randomly sampled, and the test frame and reference frame are again from the same section of the same course but potentially different runs. The test frame is a consecutive sequence of ten frames randomly sampled from the beginning of the section up to 10 frames before the section end. We have a 10 frame training buffer region to allow for slight differences during the manual drives. The number of negative examples is chosen to be double the number of total positive examples because there is more diversity in the negative examples. This ratio is satisfactory in giving the model a somewhat balanced training data to identify the section end. §.§ Data Augmentation One-shot learning applications are trained on a small amount of data, so augmenting the available data to help the model generalize is particularly important. We tried various augmentation methods such as mirroring, cropping, and rotating training videos. Each augmented course is considered a separate course from the original counterpart. We also tested doubling the amount of near-negative training examples in each epoch, where a near-negative example is defined as a training example where the reference images are in the section end and the test images are within 15 frames of the section end. We evaluate the effects of these augmentation methods by training 10 models for each augmented dataset and averaging their test 1 scores with the results shown in Table <ref>. We set the random seed for each model so that the weight initializations are the same and allow for better comparisons across methods. Unfortunately, all models trained on rotated images diverged during training. We suspect this may be due to the rotation angle being too large, and the resulting features from those rotated images being too different from the original ones. These data augmentation experiments were performed before fully optimizing our model so the model scores are lower. However, these results are averaged over 10 trainings each, so they are still meaningful and indicate that adding mirrored and near-negative data significantly increases the performance of our model. Other methods to increase training data such as additional augmentation methods or manually collecting more data may further improve our model's performance. § ARCHITECTURE SEARCH We find the best hyperparameters and architecture styles for the manager classifier network. §.§ CNN Studies We conducted studies on modifying the CNN network (feature extractor) by changing the input image size, number of layers, filter size, and number of filters. §.§.§ Image Size We investigated the effect of image size on the output. We compared input image sizes of 47x84 and 94x168. 
The smaller image size produced one-shot test results comparable to the larger image size, but required only 1/2 the network size and 1/3 of the training time. We concluded that the smaller image size is the best option to use with our architecture. §.§.§ Number of Layers We experiment with CNNs composed of 4, 5, and 6 layers. We find that the 4-layer CNN performs best. For 6 layers, the validation accuracy is low and the validation and training loss are high, indicating that the model is not fitting the training or validation data well. This may be because the feature vector length is reduced by a max pooling layer in each layer; thus, after 6 layers, the output feature vector length is approximately 150, compared to approximately 1500 after 4 layers. This reduction in feature length limits the amount of information that can be captured about the image, and the distance layer performs poorly as a result. §.§.§ Filter Size We next experiment with CNNs of varying filter size: 3x3, 5x5, or 7x7. During training and validation, they all perform well. However, the 3x3 filter size performs the best in the one-shot tests. §.§.§ Number of Filters We next experiment with CNNs with 32, 64, and 128 filters. During training and validation, they all perform well. During the one-shot tests, the CNNs with 32 and 64 filters perform best in the first test, while the CNN with 128 filters performs best in the second test. The CNN with 64 filters performs consistently well on both tests, close to the best performance accuracy. §.§.§ Transfer Learning and CNN Summary We also considered transfer learning with a known network such as VGG16. However, the transfer learning models, where the last 1-3 layers were finetuned in turn with our collected data, did not converge during training. We hypothesize that our collected data may be too simple for sophisticated deep networks. It is also possible that these networks do not work well with the rest of the model architecture, such as in a Siamese network or with a downstream LSTM. Further investigation is required to pinpoint the reason. The previous experiments indicate that the best CNN design is 4 layers of 3x3 filters, with 64 filters per layer and a 47x84 input image size. §.§ Memory Layer Studies §.§.§ Distance & Feature Length Study The test and teach features output by the feature extractor are compared in the memory layer. We investigated the effects of feature length and distances on the final output. For feature length, we tried lengths of 25, 50 and 100 nodes. For each of these feature lengths, the following distances were tested with the one-shot test in test course 1:
* Difference vector
* Absolute difference vector
* Square difference vector
* Absolute and Square difference vectors concatenated
* L1
* L2
* Cosine Similarity
* L1 and L2 concatenated
* L1 and Cosine Similarity concatenated
* L2 and Cosine Similarity concatenated
* L1, L2 and Cosine Similarity concatenated
* L1 Cross Sequence
* L2 Cross Sequence
* Cosine Similarity Cross Sequence
* L1 and L2 Cross Sequence concatenated
* L1 and Cosine Similarity Cross Sequence concatenated
* L2 and Cosine Similarity Cross Sequence concatenated
* L1, L2 and Cosine Similarity Cross Sequence concatenated
The inputs are image sequences; the difference between Ln and Ln Cross Sequence is that Ln takes the distance between each paired test and teach frame in the sequence, whereas Ln Cross Sequence takes the distance between every pair of test and teach frames.
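As an illustration of the paired versus cross-sequence variants, the sketch below computes L1 and L2 distances for a pair of teach and test feature sequences. The sequence length of 10 frames and feature length of 50 match the values quoted above, but the array shapes, function name, and use of NumPy are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def paired_and_cross_distances(teach, test):
    """Sketch: L1/L2 distances between two feature sequences of shape
    (seq_len, feat_dim). 'Paired' compares frame i of teach with frame i of
    test; 'cross' compares every teach frame with every test frame."""
    diff = teach - test                                  # (seq_len, feat_dim)
    l1_paired = np.abs(diff).sum(axis=-1)                # (seq_len,)
    l2_paired = np.sqrt((diff ** 2).sum(axis=-1))        # (seq_len,)

    # Broadcast to compare every teach/test frame pair.
    cross = teach[:, None, :] - test[None, :, :]         # (seq_len, seq_len, feat_dim)
    l1_cross = np.abs(cross).sum(axis=-1)                # (seq_len, seq_len)
    l2_cross = np.sqrt((cross ** 2).sum(axis=-1))        # (seq_len, seq_len)

    # The concatenated L1-cross and L2-cross maps are what the memory layer
    # would hand to the differentiator in the best-performing configuration.
    return l1_paired, l2_paired, np.stack([l1_cross, l2_cross], axis=-1)

teach = np.random.rand(10, 50).astype(np.float32)   # 10 frames, feature length 50
test = np.random.rand(10, 50).astype(np.float32)
_, _, similarity = paired_and_cross_distances(teach, test)
print(similarity.shape)  # (10, 10, 2)
```

The (seq_len, seq_len) cross maps retain the full temporal alignment between the two sequences, which is what allows the downstream differentiator to exploit time dependencies.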
The cross distances are likely more effective because they allow for more possibilities in matching between test and teach frames. The results show that a memory length of 50 nodes produces the best result for the one-shot test. Cosine Similarity, a commonly used distance, does not seem to work well for this application. This is likely because cosine similarity measures the angle between two vectors but not the length of the vectors; thus, two vectors can be very similar under cosine similarity but very different spatially if one vector norm is much greater than the other. L1 cross and L2 cross distances can work but do not yield the best results. L1 cross and L2 cross concatenated yielded the best results overall. This makes sense since the "cross sequence" part inserts some additional time dependency within the memory layer. L1 cross and L2 cross, along with square difference vectors, produced separate models that performed well on one-shot test 1. However, when these 3 distances were concatenated, the resulting models performed poorly on one-shot test 1. Therefore, the effects of these distances do not seem additive. L1 cross and L2 cross were chosen over square difference as the final metric because both L1 cross and L2 cross by themselves also performed well, so we postulate that these two together form a more stable distance. Other distances may be investigated to improve our results as future work. § OUTDOOR DRIVING ENVIRONMENT We have shown that the mini-AV can learn a path in one shot in an indoor environment. We also collected data from an outdoor course to show the generalizability of this one-shot architecture. §.§ Course We built a course with cones on an outdoor driveway. We chose this location because we wanted to make sure the mini-AV was shielded from oncoming traffic while still processing images from road-like conditions. Because of these limitations, we could only collect data from one outdoor driveway; however, we set up the training courses (Figure <ref>) and one-shot test course (Figure <ref>) such that all but one turn in the one-shot test course is previously unseen. The training courses are again collected with 3 runs per course. The 2 courses contributed a total of 5837 training batches. § PERFORMANCE METRIC We developed a novel performance metric to evaluate the One-Shot Learning model with regard to its overall accuracy for predictions during a One-Shot Test as well as the overall success of the RC car steering system. For offline testing, a simulated One-Shot test was used and evaluated. The evaluation One-Shot Learning test is performed using data from a course that the model has never seen before (for indoors, a course from Building A). Two files are required for testing. The first file serves as the "One-Shot" experience, and the images from this file are used as the model's "memory." The second file represents a second pass through the course, and this time the model uses its "memory" to compare the images from the first file to the incoming images in the second file. For each frame in the second file, the model predicts whether the frame is a match for the corresponding end-of-section memory frames. An overall performance metric is needed because simple measures such as accuracy do not tell the full story. For example, in Figure <ref>, the overall accuracy in the section bodies is 97%, but it is obvious that in the body of sect04, there is a false trigger. In our setup, this would cause the mini-AV to prematurely make a turn and fail the test path.
This catastrophic failure is not reflected in the accuracy; thus, we design the performance metric to output a low score for every failure in the section body and/or section end. The following sections cover the performance metric components in detail. §.§ Metric Breakdown The performance metric considers an evaluation run as a whole, as well as on a section-by-section basis. It is composed of several sub-metrics: Section Body Accuracy, Section End Accuracy, Failure Boolean, and Trigger Success Boolean. These metrics can be looked at cumulatively or individually for specific debugging of a particular section within the run. §.§ Section Body Accuracy The section body accuracy is a measure that reflects the true negative rate or specificity of the section body. Since the model is expected to predict 0 during the section body, any errors here are false positives. Each frame in the section body with a prediction less than 0.5 is counted as a correct prediction. For each of these correct predictions, we calculate the "distance", or number of frames between this frame and the termination of the section body, and we add this distance to an overall summation. This overall summation is then divided by the cumulative sum of the distances between each frame and the termination of the section body for every single frame in the section body. The distance measurements from frames with erroneous predictions are not included in the numerator sum, and therefore help create an overall "accuracy" measure. Since the model is expected to predict "1" during each section end, we consider false positives that are closer to the beginning of the section body to be worse errors than those nearer to the section end. Therefore, the distance between each frame and the last frame of the section body was chosen as a way to weight the relevance of the predictions. This causes frames with erroneous predictions that are farther away from the section end to detract more heavily from the accuracy percentage. section body accuracy = ∑_i = start frame_j^n_j (n_j - i) · 1[p_i < 0.5] / ∑_i = start frame_j^n_j (n_j - i) §.§ Section End Accuracy The section end accuracy is used to assess the performance of the model during each section end. Since the model is expected to predict '1' for each frame in the section end, this metric reflects the true positive rate or sensitivity of the section end. We consider each frame in the section end with a prediction greater than 0.5 to be a correct prediction. We count the number of correct predictions within the section end and then divide by the total number of frames in the section end to produce an accuracy measure, as seen in Figure <ref>. section end accuracy = ∑_i ∈ {section end} 1[p_i > 0.5] / total number of frames in section end §.§ Failure Boolean The Failure Boolean is a binary measure of system failure, based on the model's predictions throughout each section body. Failure is defined as a false positive prediction that extends for a given length of frames. In these experiments, the threshold prediction value of 0.5 is used, as well as the threshold length value of 5 frames. The threshold length value is used everywhere in the test run except for checking false positive prediction stretches that extend directly into the section ends. For these cases, a threshold length value of 10 frames is used. The algorithm tracks false positive predictions throughout each section body and flags and records sequential false positive predictions.
This information is used to determine whether the run meets the failure criterion of containing more than 5 consecutive frames with false positive predictions. §.§ Trigger Success Boolean The Trigger Success Boolean is a binary measure of whether the system is expected to "trigger" and switch steering networks correctly, based on its predictions during each section end. To complement the previous definition of failure in the section body, success in the section end is defined as a prediction above a given threshold for a given length of frames. Here, we use a threshold value of 0.5 and a threshold length value of 5 frames. To allow for the possibility that the system triggers a few frames earlier than the section end, which has no overall impact on system performance, a test buffer region in which a trigger is also valid is set before the section end. This buffer region is equal to the length of the section end (in these experiments, it was 15 frames). §.§ Combined Metric As mentioned in the main body of this paper, the combined metric applies the geometric mean several times to the section body accuracy and section end accuracy. We also considered variations such as taking the arithmetic mean several times or in combination with the geometric mean, and using the specificity directly as the section body accuracy. By observing the results and cross-checking with the simulated output graphs, we found that the proposed metric captured the quality of the predictions best.
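A minimal sketch of how these sub-metrics could be combined is given below. The 0.5 prediction threshold, the 5-frame failure/trigger lengths, and the 15-frame buffer follow the values quoted in this appendix, while the exact way the geometric mean is applied "several times" and the per-section halving are assumptions based on the description in the main text, not the exact implementation.

```python
import numpy as np

def section_body_accuracy(preds, end_start):
    # Distance-weighted specificity over the section body (frames 0 .. end_start-1):
    # errors far from the section end detract more heavily.
    idx = np.arange(end_start)
    weights = end_start - idx
    correct = (np.asarray(preds[:end_start]) < 0.5).astype(float)
    return float((weights * correct).sum() / weights.sum())

def section_end_accuracy(preds, end_start):
    # Fraction of section-end frames predicted above the threshold.
    return float((np.asarray(preds[end_start:]) > 0.5).mean())

def has_failure(preds, end_start, run_len=5):
    # True if the section body contains more than run_len consecutive false
    # positives (the 10-frame variant for runs touching the section end is omitted).
    run = 0
    for p in preds[:end_start]:
        run = run + 1 if p > 0.5 else 0
        if run > run_len:
            return True
    return False

def trigger_success(preds, end_start, buffer_len=15, run_len=5):
    # True if the section end (or the buffer just before it) contains at least
    # run_len consecutive predictions above the threshold.
    run = 0
    for p in preds[max(0, end_start - buffer_len):]:
        run = run + 1 if p > 0.5 else 0
        if run >= run_len:
            return True
    return False

def overall_score(sections):
    # sections: list of (predictions, index where the section end starts).
    # Geometric mean of per-section scores, halved once for each section with an error.
    scores = [np.sqrt(section_body_accuracy(p, e) * section_end_accuracy(p, e))
              for p, e in sections]
    score = float(np.prod(scores) ** (1.0 / len(scores)))
    for p, e in sections:
        if has_failure(p, e) or not trigger_success(p, e):
            score *= 0.5
    return score
```

With this form, a single sustained false trigger in any section body drives the overall score to 0.5 or below, which matches the intended reading of the combined metric.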
http://arxiv.org/abs/2306.02215v1
20230603235236
Distributed Rate Scaling in Large-Scale Service Systems
[ "Daan Rutten", "Martin Zubeldia", "Debankur Mukherjee" ]
math.OC
[ "math.OC", "math.PR", "68M20 (Primary) 68Q25, 68W15 (Secondary)", "C.2.4; C.4; G.1.6" ]
Distributed Rate Scaling in Large-Scale Service Systems

Daan Rutten (Georgia Institute of Technology; email: [email protected]), Martin Zubeldia (University of Minnesota), Debankur Mukherjee (Georgia Institute of Technology)

July 31, 2023

We consider a large-scale parallel-server system, where each server independently adjusts its processing speed in a decentralized manner. The objective is to minimize the overall cost, which comprises the average cost of maintaining the servers' processing speeds and a non-decreasing function of the tasks' sojourn times. The problem is compounded by the lack of knowledge of the task arrival rate and the absence of a centralized control or communication among the servers. We draw on ideas from stochastic approximation and present a novel rate scaling algorithm that ensures convergence of all server processing speeds to the globally asymptotically optimum value as the system size increases. Apart from the algorithm design, a key contribution of our approach lies in demonstrating how concepts from the stochastic approximation literature can be leveraged to effectively tackle learning problems in large-scale, distributed systems. En route, we also analyze the performance of a fully heterogeneous parallel-server system, where each server has a distinct processing speed, which might be of independent interest.

§ INTRODUCTION The escalating power consumption of data centers has become a pressing concern in recent years. If left unchecked, the electricity demand of data centers is predicted to grow up to 8% of the total U.S. electricity consumption by 2030 <cit.>. As a result, data center providers are continuously working towards maximizing energy efficiency in their servers by pushing the hardware's capabilities to their peak potential. However, to achieve significant additional improvements, there is a renewed interest in enhanced algorithm design in this domain <cit.>. One popular approach to improve power efficiency has been dynamic speed scaling <cit.>, in which processing speeds of servers are dynamically scaled to strike a delicate balance between the system's power consumption and the user-perceived quality of service (such as delay). The main contribution of the current paper is to propose a novel algorithm for speed scaling in the context of large-scale implementations and establish its asymptotic optimality. In modern chips, the power consumption exhibits a polynomial growth pattern in relation to the processing speed (typically cubic), while the processing time is clearly inversely related. These two features are the building blocks of the global system cost, which is a carefully crafted combination of the power consumption as well as the user-perceived sojourn time. The problem can be framed as an online optimization problem, requiring servers to determine their service rate in order to minimize a specific global objective function; see (<ref>). This problem is particularly relevant for large-scale data centers, which house tens of thousands of servers. Any centralized policy in such facilities is prohibitive due to the scale of operations and their implementation complexity. Therefore, any algorithm for this optimization problem must be scalable.
Even more importantly, the global arrival rate of tasks is often unknown, especially to individual servers. Therefore, any algorithm for this optimization problem must dynamically learn and adapt to the environment. Motivated by the above, in this paper, we present a distributed, online solution to the aforementioned optimization problem. The proposed algorithm enables each server to autonomously converge towards its globally optimal service rate without requiring any explicit inter-server communication or knowledge of the incoming traffic intensity. §.§ Previous work There are several lines of work in the literature that are relevant to the problem under consideration. In the absence of stochastic arrivals, speed scaling in the single-server case has drawn considerable attention. Here, the predominant tool has been competitive analysis. Bansal et al. <cit.> considered the shortest remaining processing time (SRPT) service discipline, Wierman et al. <cit.> studied processor sharing systems, and Albers et al. <cit.> analyzed parallel processors with deadline constraints. Moreover, Ata and Shneorson <cit.> considered the speed scaling problem when the inter-arrival times and processing times are exponentially distributed. The setup in the above works is the closest to our work in spirit, but it lacks the fundamental challenge of tackling dynamically evolving heterogeneous systems, which are inherently high-dimensional, under the effects of a load balancing algorithm. In the case of parallel server systems, an alternative approach for reducing system energy consumption is to dynamically scale the number of `active servers', i.e., to turn servers on and off, to avoid unnecessary idleness by the servers. There has been a large body of work on designing algorithms following this approach <cit.>. It is worth mentioning that this line of work predominantly considers homogeneous servers. Here, the goal is to have the minimum number of servers active such that the performance, in terms of delay, is not degraded by congestion. The main challenge in this scenario is the fact that servers cannot be turned on instantaneously, so systems have to be careful to ensure that the active capacity is enough at all times. Finally, there are several works on parallel server systems with heterogeneous server rates. The scheduling literature has analyzed scheduling tasks across different machines while also adapting to their service rates <cit.>. There are a few papers closer to our setup, which consider joint dispatching and speed scaling at a fluid level, where the fraction of tasks and the speed of the servers are chosen to minimize a tractable cost function <cit.>. The authors develop decentralized algorithms which minimize the cost, either by writing the optimization as a Lyapunov function plus a penalty and then moving along the drift (<cit.>), by employing a Lagrange method and binary search approach (<cit.>), or by sharing local information with neighbors to solve the convex optimization problem in a decentralized way (<cit.>). While these papers consider dynamic rate scaling, the fact that they customize and control the load balancing policy greatly simplifies both the objective function and the analysis. §.§ Our contributions In this paper, we design and analyze a simple, distributed algorithm that runs in a decentralized manner at each server and updates the service rate based only on the idle times of the local server.
In the relevant asymptotic regime, we prove that the cost of the server rates under our algorithm converge to the globally optimal cost. More specifically, we make contributions on three fronts: (a) Heterogeneous service rates. We mentioned before that the objective function involves the (steady-state expected) sojourn time of tasks. Therefore, to evaluate the system performance for any speed profile of the system, or to even formulate the optimization problem, it is crucial to better understand the sojourn time of tasks in fully heterogeneous large-scale systems, i.e., where all service rates are distinct. To the best of our knowledge, in this setup, expressions for the average queue length or task sojourn times under policies like Join-Idle-Queue (JIQ) are unknown till date. More fundamentally, we have reasons to believe that simple expressions for these statistics may not even exist. To tackle this problem, we prove that the steady state expected idle time of different servers becomes equal in the many-server limit, even for a heterogeneous system. To obtain this result, we use an auxiliary measure-valued process to bridge the gap between the intractable system state and a more manageable deterministic approximation. Although we do not have a closed form expression, the characterization of the idle time allows us to describe the trajectories of the service rates and compute the expected sojourn time in the limit. En route, we show a concentration result on the fraction of busy servers in the system. (b) Local gradient descent of global cost. A popular approach in continuous, convex optimization is gradient descent due to its simple formulation and convergence guarantees. Here, the decision variable is moved incrementally in the negative gradient direction, which, for sufficiently small steps, guarantees that the objective decreases in every step. At this point, the literature has explored numerous variants of gradient descent. For example, of particular relevance to the current scenario, when only noisy estimates of the gradient are available <cit.>. However, a gradient descent-type approach cannot directly be applied to the current setup for several reasons. First, as discussed before, a closed form expression for the expected sojourn time in a heterogeneous system does not exist. As a result, the gradient of the global cost function cannot be evaluated by an algorithm. Moreover, we cannot assume that such a quantity would be a convex and well-behaved (such as, differentiable) function of the service rates for a policy such as Join-Idle-Queue (JIQ). Second, and more importantly, even if an expression for the sojourn time would be available, it would depend on the service rates of all servers in the system and hence cannot be computed at a single server without communication. Similarly, any unbiased, noisy estimates of the sojourn times cannot be obtained either. For example, the number of tasks a server receives in a particular time interval does capture certain global statistics of the system but is ultimately biased by the service rate of the server itself, which violates crucial assumptions of this method. This is further explained in Section <ref>. We overcome the above difficulties by leveraging the fact that the expected idle times of the servers become equal in the many-server limit, even under heterogeneity. The proof of this fact constitutes the bulk of the technical part of the paper. 
Using this, we obtain an asymptotically tight (convex) lower bound on the cost function, and show that this lower bound is minimized by a homogeneous system (Theorem <ref>). (c) Distributed optimization. Although there are numerous approaches to distributed optimization problems in the literature, all of them allow some form of (limited) communication between the agents (servers in our case) <cit.>. In contrast, we do not allow any explicit communication between servers. This renders any possibility to arrive at a global optimum solution hopeless in traditional distributed optimization settings. However, our setup differs in one important way: the servers receive tasks via the JIQ load balancing policy that does have some global `signal' about the system, i.e., the policy assigns tasks to idle servers, whenever there are any. The challenge is, therefore, to carefully use these sparse and implicit hints from the load balancing policy to change a server's service rate. For example, under the JIQ policy, if a server is idle and receives a new task very fast, then one of two possibilities could be happening: (i) the system is overloaded and the server should increase its service rate; or (ii) the service rate of that specific server is high enough but the service rate of other servers is too low. The algorithm should be able to distinguish these two possibilities. Our algorithm is inspired by stochastic approximation algorithms with constant step sizes that provide performance guarantees in the asymptotic regime where the step size tends to zero. By exploiting the fact that the expected idle time of servers becomes equal in the many-server limit, we show that the service rate of each server converges to the globally optimal service rate under our algorithm. § MODEL AND MAIN RESULTS We consider a system consisting of n ∈ parallel servers with unit buffers, where server v ∈ [n] has processing rate μ_v^n>0. Tasks arrive as a Poisson process of rate λ n, and have i.i.d. unit-mean exponential processing time requirements. Upon arrival, tasks are either routed to an idle server chosen uniformly at random, if there are any, or dropped otherwise. The goal is for the servers to run at processing rates that attain the optimal cost: inf_^n ∈ℝ_+^n {λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) }, where [ S^n(∞) ] is the expected sojourn time of a typical task in steady state (which is a function of the vector of processing rates ^n), g:ℝ_+→ℝ_+ is a decreasing function which represents the cost of large sojourn times of tasks (hence this cost is weighed by the arrival rate) and h:ℝ_+→ℝ_+ is an increasing function which represents the cost of maintaining a specific processing speed. We assume that these functions satisfy the following assumption. We assume that: * g and h are twice continuously differentiable; * There exists σ_g, σ_h > 0, such that g”(x) ≥σ_g and h”(x) ≥σ_h, for all x≥ 0; * g is decreasing, h is increasing, and h'(x) / x is non-decreasing in x; * h'(0)=0; * λ g'(λ) + h'(λ) < 0; * There exist μ_+ > μ_- > 0 such that g'(μ_+) + h'(μ_+)/μ_+≥ 0 and g'(μ_-) + h'(μ_-) (λ + 1/μ_-) ≤ 0. An example of functions g and h that satisfy Assumption <ref> is g(x) = 1/1+x and h(x)=β x^3, where β>0 is a small enough constant.The use of a polynomial function for the energy costs, and in particular of a cubic one, is well supported in the literature <cit.>. Indeed, (i) and (ii) are clearly satisfied, and since g'(x) = -(1+x)^-2 and h'(x)= 3β x^2, (iii) and (iv) are also satisfied for all β>0. 
Moreover, we have λ g'(λ) + h'(λ) = -λ(1+λ)^-2 + 3βλ^2, which is negative for all β small enough, so (v) is satisfied when choosing β small enough. Finally, we have g'(μ_+) + h'(μ_+)/μ_+ = -(1+μ_+)^-2 + 3βμ_+, which is non-negative for μ_+ sufficiently large for any β>0, and g'(μ_-) + h'(μ_-) (λ + 1/μ_-) = -(1+μ_-)^-2 + 3βμ_- (λμ_- + 1), which is non-positive for μ_- sufficiently small for any β>0. Thus, (vi) is also satisfied. In Assumption <ref>, (i) are (ii) are standard convexity assumptions. However, since the expected sojourn time of jobs is an unknown function of the service rate vector, our objective function may be non-convex, and possibly even discontinuous. Therefore, service rate vectors which achieve the infimum cost may not exists or there may be multiple ones. As a tractable upper bound for the infimum cost, we consider homogeneous systems with rate μ≥ 0, for which the cost function reduces to the convex function λ g(μ) + h(μ), for all n. Constrained to homogeneous systems, it is easily checked that Assumption <ref> (i), (ii), and (iii) imply that the infimum cost is indeed attained at μ^* := μ≥ 0min{λ g(μ) + h(μ) }, with μ^*>λ by Assumption <ref> (v). Therefore, we have inf_^n ∈ℝ_+^n {λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) }≤λ g(μ^*) + h(μ^*). Our first result states that the gap in the above inequality vanishes, as n→∞, and that the system is stable when all servers run at rate μ^*. In particular, this means that the homogeneous rate vector where all entries are equal to μ^* asymptotically minimizes the cost of Equation (<ref>), in the limit as n→∞ (i.e., in the many-server limit). Under Assumption <ref>, we have μ^*>λ, and lim_n→∞inf_^n ∈ℝ_+^n {λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) } = λ g(μ^*) + h(μ^*). To establish this result, we show that, for a given average rate, a homogeneous rate vector minimizes the cost when n→∞. While the convexity of h easily implies that homogeneous processing rates minimize the second term in Equation (<ref>) for any given average rate, the fact that this also minimizes the first term crucially relies on the fact that the expected idle times of servers become equal as n→∞ (cf. Theorem <ref>). The proof is given in Section <ref>. §.§ Adaptive rate scaling: ODE characterization of sample paths We now present our rate scaling algorithm, which dynamically adjusts the individual servers' processing rates. Since servers are oblivious to the global arrival rate λ and cannot communicate with each other, we design an adaptive algorithm which learns the asymptotically optimal processing rates in a completely distributed way. Our algorithm is designed around the key insight that: the average idle time of each server becomes equal as n →∞ and therefore servers may use their idle times as `signals' (without any explicit exchange of information among servers) to gauge how their service rate compares to the average service rate in the system, and adjust accordingly. Under our algorithm, the service rate of each server is updated as follows: μ_v^n, m(t) = μ_v^n, m(0) - 1/m∫_0^t[g'(μ_v^n,m(s)) + h'(μ_v^n,m(s) ) ( I_v^n,m(s) + 1/μ_v^n,m(s))] s, where m>0 is a tunable hyperparameter, and I_v^n,m(t) denotes the idle time of server v ∈ [n] at time t. That is, if server v is idle at time t, then I_v^n,m(t) is the length of time since the last service completion, and if server v is busy at time t, then I_v^n,m(t) is the length of its last idle period. 
Moreover, since Assumption <ref> (i) and (iii) imply that the limit lim_x→ 0 h'(x)/x exists and is finite, we use the convention that h'(0)/0=lim_x→ 0 h'(x)/x for the update rule to be well-defined at μ_v^n,m=0. If the buffers are larger than unit size, tasks can be queued and thus processed one after another without the server being idle in between. In this case, we consider the length of the idle period to be zero. This signals that the server might be overloaded, and that it should increase its service rate if it is not too large already. Assumption <ref> (ii), (iii), and (vi), and the convention h'(0)/0=lim_x→ 0 h'(x)/x, imply that g'(0)+h'(0)I_v^n,m+h'(0)/0 ≤ 0 regardless of the value of I_v^n,m. Therefore, the integrated function in Equation (<ref>) is non-positive when μ_v^n,m=0 and thus μ_v^n,m(t) ≥ 0 for all t ≥ 0 and v ∈ [n]. Moreover, Assumption <ref> (vi) implies that g(μ_+)+h'(μ_+)(I_v^n,m+1/μ_+) ≥ g(μ_+)+h'(μ_+)/μ_+ ≥ 0. Therefore, the integrated function in Equation (<ref>) is non-negative when μ_v^n,m=μ_+ and thus, if μ_v^n,m(0)≤μ_+, then μ_v^n,m(t)≤μ_+ for all t≥ 0. We combine these two facts to conclude that, if μ^n,m(0) ∈ [0,μ_+]^n, then μ^n,m(t) ∈ [0,μ_+]^n for all t≥ 0. Note that Equation (<ref>) resembles a continuous-time version of a stochastic approximation algorithm with constant step size 1/m. Thus, the parameter m determines the learning rate of the update rule. In order to analyze the algorithm, we consider the asymptotic regime where m →∞. As m gets larger, the service rates are updated at a slower pace, but with less randomness. Thus, in the spirit of stochastic approximation, if we accelerate time by a factor of m, we obtain a deterministic limiting trajectory with a constant learning rate. This is formalized in the following theorem. If μ^n, m(0) →μ^n(0) weakly as m →∞, then the sequence of stochastic trajectories { (^n,m(mt))_t≥ 0}_m ∈ is relatively compact with respect to the weak topology. Moreover, all limit points (^n(t))_t≥ 0 satisfy μ_v^n(t) = μ_v^n(0) - ∫_0^t [g'(μ_v^n(s)) + h'(μ_v^n(s)) ( _μ^n(t)[ I_v^n(∞) ] + 1/μ_v^n(s))] s, where _μ[ I_v^n(∞) ] denotes the expected idle time of server v ∈ [n] in steady state in a system where the service rate vector is fixed at μ. The proof is based on time-scale separation ideas introduced by Kurtz <cit.>, and it is given in Section <ref>. Since the expected idle time _μ^n[ I_v^n(∞) ] is an unknown function of the rate vector μ^n, uniqueness of solutions to Equation (<ref>) cannot be guaranteed. Without this uniqueness, limiting trajectories may be different (even if they all satisfy Equation (<ref>)), and thus the convergence of the sequence of trajectories cannot be guaranteed either. Fortunately, as we will see, this does not pose a problem for our subsequent analysis, where we prove the convergence of ^n(t) in Equation (<ref>) to one of the asymptotically optimal vector of service rates, as n→∞. Since the instantaneous arrival rate to any idle server is at least λ, regardless of the system occupancy, it follows that _μ[ I_v^n(∞) ] ≤λ. Combining this with Assumption <ref> (vi) we get that g'(μ_-) + h'(μ_-) (_μ_-[ I_v^n(∞) ] + 1/μ_-) ≤ g'(μ_-) + h'(μ_-) (λ + 1/μ_-) ≤ 0. Therefore, the integrated function in Equation (<ref>) is non-negative when μ_v^n(t)=μ_- and thus, if μ_v^n(0) ≥μ_-, then μ_v^n(t) ≥μ_- for all t≥ 0. We combine this with Remark <ref> to conclude that, if μ^n(0) ∈ [μ_-,μ_+]^n, then μ^n(t) ∈ [μ_-,μ_+]^n for all t≥ 0. 
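To make the update rule in Equation (<ref>) concrete, the sketch below performs one Euler-type step of the rate update that a single server could run locally, using only its own most recent idle period. The example cost functions g(x) = 1/(1+x) and h(x) = βx^3 are taken from the discussion above, while the value of β, the step size, and the clipping bounds are illustrative assumptions; this is a sketch rather than the implementation used in our simulations.

```python
import numpy as np

# Example cost functions from the text: g(x) = 1/(1+x), h(x) = beta * x**3.
BETA = 0.01  # assumed "small enough" constant

def g_prime(x):
    return -1.0 / (1.0 + x) ** 2

def h_prime(x):
    return 3.0 * BETA * x ** 2

def rate_update(mu, last_idle_time, m, dt, mu_minus=1e-3, mu_plus=10.0):
    """One local step of the rate-scaling rule for a single server.

    mu             : current service rate of this server
    last_idle_time : length of the server's most recent idle period (its only signal)
    m              : learning-rate hyperparameter (larger m = slower, less noisy updates)
    dt             : length of the discretization step
    """
    # h'(mu)/mu with the convention h'(0)/0 = lim_{x->0} h'(x)/x (= 0 for h = beta*x^3).
    ratio = h_prime(mu) / mu if mu > 0 else 0.0
    drift = g_prime(mu) + h_prime(mu) * last_idle_time + ratio
    mu_new = mu - (dt / m) * drift
    # The dynamics already keep mu within [mu_-, mu_+]; clipping only guards
    # against discretization error.
    return float(np.clip(mu_new, mu_minus, mu_plus))
```

In a full simulation, last_idle_time would be read off the server's own busy/idle history, so the rule needs neither the arrival rate λ nor any communication with other servers.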
§.§ Idle-time analysis of large-scale heterogeneous server systems In order to understand the limiting dynamics given by Equation (<ref>), we need to get characterize _μ[ I_v^n(∞) ], which is the expected idle time of a system where the service rates are fixed at μ_v for v ∈ [n]. This poses a major challenge, as the dynamics of a fully heterogeneous system appear to be intractable. In particular, even if the dispatching policy chooses an idle server uniformly at random regardless of its processing rate, the idle time of a server v depends on the number of idle servers in the system conditioned on v being idle, thus breaking the symmetry of the dispatching policy. Although a closed-form expression is out of reach, we prove in Theorem <ref> below that, in the limit as n→∞, the expected idle time for different servers becomes equal. There exists constants c^n(^n) ∈ [0,1], n∈, which depend only on n and ^n such that max_v ∈ [n]|_^n[ I^n_v(∞) ] - 1 - c^n(^n)/λ|→ 0 as n →∞. Moreover, if 1/n∑_v∈[n]μ^n_v ≤λ, we have c^n(^n)=1, and if lim inf_n→∞1/n∑_v∈[n]μ^n_v > λ, we have lim_n→∞λ/max_v∈[n]μ^n_v≤lim inf_n→∞ c^n(^n) ≤lim sup_n→∞ c^n(^n) ≤lim_n→∞λ/min_v∈[n]μ^n_v. The proof of Theorem <ref> is technically involved and is given in Section <ref>. The key intermediate step is a concentration result on the fraction of busy servers. §.§ Asymptotic optimality of the proposed algorithm Although we do not have an explicit expression of the constants c^n(^n) of Theorem <ref> in general, we do have an expression for the case when the system is homogeneous (which is the case for the asymptotically optimal solution). We build upon this result to establish the next crucial lemma, which states that the deterministic trajectories converge to an asymptotically homogeneous system. Let ^n(t) be as defined in Theorem <ref>. Then, max_v,v' ∈ [n]|μ_v^n(t) - μ_v'^n(t)| → 0, as n →∞ and t →∞, regardless of the order in which the two limits are taken. The proof of Lemma <ref> consists of using the bound given in Theorem <ref> to bound the derivative of the difference of the rates, and show that this difference converges to zero. The proof is given in Section <ref>. Before stating our last result, we provide an intuitive explanation as of why theorems <ref> and <ref> imply that the rates of all servers converge to the asymptotically optimal rate μ^*, as n and t go to infinity. First, note that Theorems <ref> and <ref> imply that, for every ε>0, there exist n_0 and t_0 large enough so that max_v ∈ [n]|_^n[ I^n_v(∞) ] - [1 - c^n(^n)]/λ| < ε and max_v,v' ∈ [n]|μ_v^n(t) - μ_v'^n(t)|<ε for all n≥ n_0 and t≥ t_0. With this in mind, consider a system where the expected idle times of different servers are all equal and where the service rates are homogeneous after some time. In this hypothetical system, the update rule now equals μ_v^n(t)/ t = - g'(μ_v^n(t)) - h'(μ_v^n(t)) ( 1 - λ / μ_v^n(t)/λ - 1/μ_v^n(t)) = - g'(μ_v^n(t)) - h'(μ_v^n(t))/λ. Notice that the update rule equals the negative gradient of the cost function, as one would expect in gradient descent. Therefore, we expect the service rate to converge to the optimal solution. This intuition is formalized in the theorem below. Let μ^* be as defined in Theorem <ref> and μ^n(t) be as defined in Theorem <ref>. Then, max_v ∈ [n]|μ_v^n(t) - μ^* |→ 0 as n →∞ and t →∞, where the limits are taken either jointly, or first as n→∞ and then as t→∞. 
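As a quick numerical illustration of the gradient-descent intuition above, the sketch below integrates the scalar dynamics dμ/dt = -g'(μ) - h'(μ)/λ for the example cost functions g(x) = 1/(1+x) and h(x) = βx^3. The values of λ, β, the initial rate, and the step size are illustrative choices, and the final printout simply checks the first-order condition λg'(μ^*) + h'(μ^*) = 0 that characterizes μ^*.

```python
lam, beta = 0.8, 0.01             # illustrative arrival rate and cost constant

def g_prime(x):
    return -1.0 / (1.0 + x) ** 2  # g(x) = 1/(1+x)

def h_prime(x):
    return 3.0 * beta * x ** 2    # h(x) = beta * x**3

# Forward-Euler integration of d(mu)/dt = -g'(mu) - h'(mu)/lam,
# i.e. gradient flow on (lam*g(mu) + h(mu))/lam.
mu, dt = 0.5, 1e-2
for _ in range(50_000):
    mu -= dt * (g_prime(mu) + h_prime(mu) / lam)

# At convergence, mu should (approximately) satisfy lam*g'(mu) + h'(mu) = 0,
# i.e. mu is close to the minimizer mu* of lam*g(mu) + h(mu).
print(mu, lam * g_prime(mu) + h_prime(mu))
```

Because -g'(μ) - h'(μ)/λ is proportional to the negative derivative of λg(μ) + h(μ), the homogeneous limit dynamics indeed descend the global cost, which is the content of the theorem above.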
From a high level, Theorem <ref> states that μ^n(t), which is an asymptotic approximation of the original (stochastic) rate vector μ^n,m(t), converges to the optimal values in time, when the number of servers n goes to infinity. This indicates that the stochastic algorithm will approximately solve the optimization problem given in Equation (<ref>) when m and n are large. The proof of Theorem <ref> is given in Section <ref>. § NUMERICAL RESULTS In order to showcase the performance of our algorithm relative to the optimal rates, in this section we present extensive numerical experiments. We choose g(μ) = 1 / μ and h(μ) = 0.1 μ^2 for the cost function, in which case the asymptotically optimal rate is μ^*=√(5λ). Validation of large-system asymptotics. First, we fix λ=0.8 and m=500, and we vary the number of servers from n=2 to n=100. In Figure <ref> we can see that the average service rate in steady state becomes closer to the asymptotically optimal one as n increases, being already quite close for n=25. The reason is that the bias in the estimator of the idle time vanishes as n→∞, and so the servers can accurate learn the optimal service rate. Implications of a constant step-size. On the other hand, note that in Figure <ref> the 5th and 95th percentiles of the rates are bounded away from their average, even as n increases. While this phenomenon is not apparent in our asymptotic analysis, where we started by taking the limit as the step size (1/m) goes to zero, it is the expected behavior of stochastic approximation algorithms with finite step size. Indeed, with such algorithms, the state converges to a steady state distribution around the optimal point, where the variance is an increasing function of the step size. To showcase this, we fix λ=0.8, n=100, and vary the the step size from m=10 to m=500. In Figure <ref> we can see that the 5th and 95th percentiles become closer to the average, as m increases. Since we chose n=100, the average rates almost coincide with the asymptotically optimal one. Influence of the arrival rate. Next, we fix n=100, m=100, and vary the arrival rate from λ=0.025 to λ=1.5. In Figure <ref> we see that, as λ increases, so does the average, in lockstep with the asymptotically optimal service rate, which is μ^*=√(5λ). Moreover, we can see that the 5th and 95th percentiles become closer to the average as λ increases. This is because, for higher values of λ, the derivatives of the objective function around the equilibrium are steeper, and thus the deviations around the average are reduced (interestingly, this is also observed in Stochastic Gradient Descent (SGD) with constant step sizes). Extension to the infinite-buffer scenario. Finally, we verify that our algorithm also works as intended when we have infinite instead of unit buffers. In these simulations we use n=50, m=500, λ=0.8, and i.i.d. initial service rates μ_v(0) ∼Unif[0,2]. In Figures <ref> and <ref> we see that, when the system starts empty, the trajectories of the rate vectors are indistinguishable between the cases of unit buffers and infinite buffers. This is because in the infinite buffer case, after a brief upward excursion in the queue lengths at the beginning while the system is overloaded (cf. Figure <ref>), the maximum queue length is at most one, and so the extra buffer is not utilized. On the other hand, if the system starts with a sufficiently large backlog, it behaves as if it were overloaded until the backlog is handled. 
In that case, in Figure <ref>, we can see how the rates overshoot their optimal point while the queues remain large, but converge to the optimal rates once the backlog is resolved. Establishing the convergence of the proposed algorithm formally in the infinite buffer case is an interesting future research direction. § PROOFS OF MAIN RESULTS §.§ Proof of Theorem <ref> The proof introduces a measure-valued representation of the process ((X^n,m_v(·),I^n,m_v(·)):v∈ [n]), where X^n,m_v(t)∈{0,1} denotes whether server v is idle or busy at time t, and I^n,m_v(t) is the current idle time of server v at time t. Using this measure-valued representation, the proof then relies on results in <cit.> to establish the tightness of the sequence of these measures, and to argue that the limit points are integrals with respect to certain random measures. Then, in order to characterize these limiting measures we use a martingale decomposition of the pre-limit dynamics on certain test functions, and use Doob's maximal inequality to establish their limits. Let ν^n,m be a random measure on [0, ∞) ×{ 0, 1 }^n × [0, ∞)^n defined as ν^n,m( Γ×x×i) := ∫_Γδ_X^n,m(m s)(x) δ_I^n,m(m s)(i) s. Note that ν^n,m is the empirical measure of the queue length process X^n,m and the idle times I^n,m. The sequence ( ν^n,m )_m ∈ is relatively compact. Fix any K > 0. If I_v^n,m(m t) > K, then server v ∈ [n] has not received any tasks throughout [τ - K, τ], where τ≤ m t is the last time the server was idle or τ = m t if the server is currently idle. However, note that any server receives jobs at least at exponential rate λ. Thus, the probability that I_v^n,m(m t) > K is upper bounded by e^-λ K. Therefore, [ ν^n,m([0, t] ×{ 0, 1 }^n × [0, K]^n) ] = [ ∫_0^t 1_{ I_v^n,m(m s) ≤ K for all v ∈ [n] } s ] = ∫_0^t ( I_v^n,m(m s) ≤ K for all v ∈ [n] ) s ≥∫_0^t ( 1 - ∑_v ∈ [n]( I_v^n,m(m s) > K ) ) s ≥∫_0^t ( 1 - n e^-λ K) s = ( 1 - n e^-λ K) t, for all m ∈. It follows that the sequence ( ν^n,m )_m ∈ is relatively compact by <cit.>. Since ( ν^n,m )_m ∈ is relatively compact by Lemma <ref>, then every subsequence has a convergent subsequence. Let ( ν^n,m )_m ∈ be any such convergent subsequence, where we omit the subscripts of the subsequence for brevity, and let ν^n,m→ν^n as m →∞. Note that μ_v^n,m(m t) - μ_v^n,m(0) = - 1/m∫_0^m t[ g'(μ_v^n,m(s)) + h'(μ_v^n,m(s)) ( I_v^n,m(s) + 1/μ_v^n,m(s)) ] s = - ∫_[0, t] ×{0, 1}^n × [0, ∞)^n[ g'(μ_v^n,m(m s)) + h'(μ_v^n,m(m s)) ( i_v + 1/μ_v^n,m(s)) ] ν^n,m( s ×x×i), and hence, along the convergent subsequence, μ^n,m(m t) →μ^n(t) weakly as m →∞ by the continuous mapping theorem, where μ_v^n(t) = μ_v^n(0) - ∫_[0, t] ×{0, 1}^n × [0, ∞)^n[ g'(μ_v^n(s)) + h'(μ_v^n(s)) ( i_v + 1/μ_v^n(s)) ] ν^n( s ×x×i). Moreover, <cit.> implies that there exists a family of random probability measures {π_t:t≥ 0} on { 0, 1 }^n × [0, ∞)^n such that ν^n([0, t] ×x×i) = ∫_0^t π_s(x×i) s. To complete the proof of Theorem <ref>, we should therefore show that π_t is the stationary distribution of a system where the service rate is fixed at ^n(t). Let f: { 0, 1 }^n × [0, ∞)^n → be bounded and differentiable in its second coordinate. 
Then, using the convention 0/0=0, we have f(X^n,m(m t), I^n,m(m t)) = f(X^n,m(0), I^n,m(0)) + ∑_v ∈ [n]∫_0^m t∂/∂ I^n,m_v f(X^n,m(s), I^n,m(s)) (1 - X^n,m_v(s)) s + ∑_v ∈ [n]∫_0^m t ( f(X^n,m(s) + e_v, I^n,m(s)) - f(X^n,m(s), I^n,m(s)) ) N_v^(1)( ∫_0^s λ n (1 - X^n,m_v(u))/∑_v' ∈ [n] (1 - X^n,m_v'(u)) u ) + ∑_v ∈ [n]∫_0^m t ( f(X^n,m(s) - e_v, I^n,m(s) - I^n,m_v(s) e_v) - f(X^n,m(s), I^n,m(s) ) N_v^(2)( ∫_0^s μ_v^n,m(u) X^n,m_v(u) u ), where N_v^(1) and N_v^(2) are independent unit-rate Poisson processes and e_v is the n-dimensional unit vector with a one at index v and zeroes everywhere else, for v ∈ [n]. We divide by m and rewrite the third and fourth terms on the right-hand to obtain f(X^n,m(m t), I^n,m(m t))/m = f(X^n,m(0), I^n,m(0))/m + ∑_v ∈ [n]∫_0^t ∂/∂ I^n,m_v f(X^n,m(m s), I^n,m(m s)) (1 - X^n,m_v(m s)) s + ∑_v ∈ [n]∫_0^t ( f(X^n,m(m s) + e_v, I^n,m(m s)) - f(X^n,m(m s), I^n,m(m s)) ) λ n (1 - X^n,m_v(m s))/∑_v' ∈ [n] (1 - X^n,m_v'(m s)) s + ∑_v ∈ [n]∫_0^t ( f(X^n,m(m s) - e_v, I^n,m(m s) - I^n,m_v(m s) e_v) - f(X^n,m(m s), I^n,m(m s)) ) μ_v^n,m(m s) X^n,m_v(m s) s + M^n,m_(1)(m t)/m + M^n,m_(2)(m t)/m, where M^n,m_(1)(t) := ∑_v ∈ [n]∫_0^t ( f(X^n,m(s) + e_v, I^n,m(s)) - f(X^n,m(s), I^n,m(s)) ) M_v^(1)( ∫_0^s λ n (1 - X^n,m_v(u))/∑_v' ∈ [n] (1 - X^n,m_v'(u)) u ), M^n,m_(2)(t) := ∑_v ∈ [n]∫_0^t ( f(X^n,m(s) - e_v, I^n,m(s) - I^n,m_v(s) e_v) - f(X^n,m(s), I^n,m(s)) ) M_v^(2)( ∫_0^s μ_v^n,m(s) X^n,m_v(s) u ), and M_v^(i)(t) := N_v^(i)(t) - t for i = 1, 2 and v ∈ [n]. Let ℱ_t be the natural filtration of (X^n,m(t), I^n,m(t)). Then, it is easily checked that M^n,m_(1)(t) and M^n,m_(2)(t) are square-integrable martingales with respect to ℱ_t. Moreover, [ ⟨ M^n,m_(1), M^n,m_(1)⟩(m t) ] ≤ 2 ‖ f ‖_∞∑_v ∈ [n][ ⟨ N_v^(1), N_v^(1)⟩( ∫_0^m tλ n (1 - X^n,m_v(u))/∑_v' ∈ [n] (1 - X^n,m_v'(u)) s ) ] = 2 ‖ f ‖_∞∑_v ∈ [n][ ∫_0^m tλ n (1 - X^n,m_v(u))/∑_v' ∈ [n] (1 - X^n,m_v'(u)) s ] = 2 ‖ f ‖_∞λ n m t, and [ ⟨ M^n,m_(2), M^n,m_(2)⟩(m t) ] ≤ 2 ‖ f ‖_∞∑_v ∈ [n][ ⟨ N_v^(2), N_v^(2)⟩( ∫_0^m tμ_v(s) X^n,m_v(s) s ) ] = 2 ‖ f ‖_∞∑_v ∈ [n][ ∫_0^m tμ_v^n,m(s) X^n,m_v(s) s ] ≤ 2 ‖ f ‖_∞μ_+ n m t. Therefore, by Doob's maximal inequality, M^n,m_(1)(t) / m → 0 and M^n,m_(2)(t) / m → 0, as m →∞ uniformly on any bounded time interval. Also, f(X^n,m(m t), I^n,m(m t)) / m → 0 and f(X^n,m(0), I^n,m(0)) / m → 0, as m →∞ uniformly on any bounded time interval. Combining these facts with Equation (<ref>), we get ∫_[0, t] ×{ 0, 1 }^n × [0, ∞)^n∑_v ∈ [n]( ∂/∂ i_v f(x, i) (1 - x_v) + ( f(x + e_v, i) - f(x, i) ) λ n (1 - x_v)/∑_v' ∈ [n] (1 - x_v') + ( f(x - e_v, i - i_v e_v) - f(x, i) ) μ_v^n,m(m s) x_v ) ν^n,m( s ×x×i) → 0, as m →∞ uniformly on any bounded time interval, and in particular along the convergent subsequence. As a result, by the continuous mapping theorem, we have ∫_{ 0, 1 }^n × [0, ∞)^n∑_v ∈ [n]( ∂/∂ i_v f(x, i) (1 - x_v) + ( f(x + e_v, i) - f(x, i) ) λ n (1 - x_v)/∑_v' ∈ [n] (1 - x_v') + ( f(x - e_v, i - i_v e_v) - f(x, i) ) μ_v^n(t) x_v ) π_t(x×i) = 0. Thus, <cit.> implies that π_t is the stationary distribution of a system where the service rate is fixed at μ^n(t). Note that the stationary distribution is unique since the process is irreducible. This concludes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> As n is fixed throughout, we will omit the dependence on n in the notation. A server v ∈ [n] thus processes tasks at a fixed rate μ_v. Let δ:= 1/n∑_v ∈ [n]μ_v - λ be the amount of excess processing power (per-server) in the system. 
We prove the theorem for two cases separately: for the case where the system is in a supercritical regime (i.e., when δ≤ 0), and for the case where it is in a subcritical regime (i.e., when δ > 0). In both cases, we first show that the fraction of busy servers concentrate around a constant (as n→∞), and then argue that this implies that the expected idle times become all equal (as n→∞). §.§.§ Supercritical regime Throughout this subsection we consider the case where δ≤ 0 in Equation (<ref>), that is, where the system is either critically loaded, or overloaded. In this case, we first show that the fraction of busy servers concentrate around 1 (i.e., around the state where all servers are busy), as n→∞. Recall μ_- from Assumption <ref>. Let ε be such that 0< ε≤ 1 / 3. Then, ( 1/n∑_v ∈ [n] X_v(∞) ≤ 1 - 3 ε) ≤( 1 - μ_- ε/λ)^ε n. This results follows by lower bounding the fraction of busy servers by an appropriate birth-death process. The proof is given in Appendix <ref>. Leveraging this concentration result, we obtain an upper bound on the expected idle times. Let ε be such that 0< ε≤ 1 / 3. Then, [ I_v(∞) ] ≤6 ε/λ + ( √(2)/λ + 1/μ_-) ( 1 - μ_- ε/λ)^ε n / 2. We establish Lemma <ref> by expressing the expected idle times as a function of the fraction of busy servers, and using the concentration result of Lemma <ref> to bound it. The proof is given in Appendix <ref>. Taking the limit as n→∞ in Equation (<ref>) yields lim_n→∞[ I_v(∞) ] ≤ 6 ε/λ. Since this holds for all ε>0 small enough, we have lim_n→∞[ I_v(∞) ] =0, which concludes the proof of Theorem <ref> for the supercritical case. §.§.§ Subcritical regime Throughout this subsection we consider the case where δ> 0 in Equation (<ref>), that is, where the system is in a subcritical regime. In this case, we also need to show that the fraction of busy servers concentrate around a constant, as n→∞. However, here the constant is bounded away from zero, and thus the concentration result is significantly more involved. In order to prove our desired concentration result, we define an auxiliary measure-valued process. Let Φ( x) := 1/n∑_v ∈ [n]δ_μ_v( x) and let ϕ̅_t be a measure-valued process on _+ such that ∫_0^∞ f(x) ϕ̅_t( x) = ∫_0^∞ f(x) ϕ̅_0( x) + ∫_0^t ( λ( ∫_0^∞ f(x) Φ( x) - ∫_0^∞ f(x) ϕ̅_s( x) )/1 - ∫_0^∞ϕ̅_s( x) - ∫_0^∞ x f(x) ϕ̅_s( x) ) s, for all f: _+ → [0, 1], with ϕ̅_0( x) = 1/n∑_v ∈ [n]δ_μ_v ( x) X_v(0). It can be easily checked that, for all t≥ 0, the measure ϕ̅_t is a collection of n point masses at μ_v, for v∈[n]. Hence, its existence and uniqueness follow by standard arguments using sample path constructions. We first show that, when n is large, the trajectory of the state of the system is “close" (in a strong sense) to the auxiliary measure-valued process. Suppose that ∫_0^∞ϕ̅_0( x) ≤ 1 - δ/μ_+. Then, for all T ≥ 0, we have sup_f: _+ → [0, 1][ sup_t ∈ [0, T]|1/n∑_v ∈ [n] f(μ_v) X_v(t) - ∫_0^∞ f(x) ϕ̅_t( x) |] ≤√(8 (μ_+ + λ) T/n)exp( ( 6 λμ_+/δ + μ_+ ) T ). The proof of Lemma <ref> consists of considering a martingale decomposition of the state process, showing that the drift is close to our auxiliary process, and using Doob's maximal inequality to bound the corresponding martingale. The proof is given in Appendix <ref>. The next step in the proof of Theorem <ref> for the subcritical case is to use the auxiliary process to obtain a concentration abound on the fraction of idle servers. In order to do this, we first establish the following monotonicity result. 
Let X^(1)(t) and X^(2)(t) be two copies of the queue length process such that stochastically X^(1)(0) ≤X^(2)(0), where inequality is considered coordinatewise. Then, there exists a joint probability space such that X^(1)(t) ≤X^(2)(t) for all t ≥ 0, almost surely. Lemma <ref> follows from constructing an appropriate coupling between the processes, and showing that the order is maintained across time. The proof is given in Appendix <ref>. Using the monotonicity given in Lemma <ref>, we obtain the following exponential mixing time result. Let X^(1)(t) and X^(2)(t) be two copies of the queue length process such that stochastically X^(1)(0) ≤X^(2)(0), where inequality is considered coordinatewise. Then, there exists a joint probability space such that [ 1/n∑_v ∈ [n]| X_v^(2)(t) - X_v^(1)(t) |] ≤exp( -μ_- t ). Lemma <ref> follows from the monotonicity of Lemma <ref>, and applying Grönwall's inequality on the dynamics of the system. The proof is given in Appendix <ref>. We now state the concentration result on the fraction of busy servers, as mentioned before. Let ε be such that 0 < ε < δ / μ_-. Then, [ |1/n∑_v ∈ [n] X_v(∞) - c() |] ≤(1 + √(8 (μ_+ + λ) log(n)/α))n^-μ_-/α, where α := 2 (μ_- + 6 λμ_+/εμ_- + μ_+) and c() := ∫_0^∞ϕ̅_log(n)/α( x), for ϕ̅_t as defined in Equation (<ref>), with ϕ̅_0( x) = 0. Lemma <ref> is established by considering two coupled systems, one starting empty and one starting in steady state. On the one hand, Lemma <ref> implies that these two process converge to each other exponentially fast in time. On the other hand, Lemma <ref> implies that the system that starts empty is close to the integral of the measure-valued auxiliary process in Equation (<ref>) (which is deterministic), for n sufficiently large. Therefore, combining these two results, we bound the distance between the fraction of servers in steady state, and the integral of the auxiliary process at an appropriately chosen time (at which point it is equal to c()). The proof is given in Appendix <ref>. Before stating and proving the main theorem for the subcritical case, we first need to establish that the probability of all servers being busy is exponentially small in n. Let ε be such that 0<ε≤δ / (3 μ_+). Then, ( 1/n∑_v ∈ [n] X_v(∞) ≥ 1 - ε) ≤( 1 - μ_+ ε/λ + μ_+ ε)^ε n. Similarly to Lemma <ref>, we establish Lemma <ref> by coupling the system with an appropriate birth-death process, and establishing the result for this simpler process. The proof is given in Appendix <ref>. Leveraging the concentration results in Lemmas <ref> and <ref>, in Lemma <ref> below we obtain a bound on how much the expected delays deviate from appropriate constants. Let ε be such that 0< ε≤δ / μ_-. Then, |[ I_v(∞) ] - 1 - c()/λ|≤2 ε/λ + ( 3 √(2)μ_+/ελμ_- + 3 μ_+/εμ_-^2) √(1 + √(8 (μ_+ + λ) log(n)/α)/n^μ_-/α) + ( √(2) n/λ + n/μ_-) ( 1 - εμ_-/3 λ + εμ_-)^εμ_- n / (6 μ_+), where c() and α are as defined in Lemma <ref>. The proof of Lemma <ref> is given in Appendix <ref>. Now, taking the limit as n→∞ in Equation (<ref>) yields lim_n→∞|[ I_v(∞) ] - 1 - c()/λ|≤2 ε/λ. Since this holds for all ε>0 small enough, we have lim_n→∞[ I_v(∞) ] = 1 - c()/λ. The only thing left in the proof of Theorem <ref> is to obtain the bounds on the constants c(). For this purpose, we have the following result. Let ϕ̅_t be as defined in Lemma <ref>. Then, ∫_0^∞ϕ̅_t( x) ≥λ/max_v ∈ [n]μ_v - |∫_0^∞ϕ̅_0( x) - λ/max_v ∈ [n]μ_v|exp(-max_v ∈ [n]μ_v t) ∫_0^∞ϕ̅_t( x) ≤λ/min_v ∈ [n]μ_v + |∫_0^∞ϕ̅_0( x) - λ/min_v ∈ [n]μ_v|exp(-min_v ∈ [n]μ_v t). 
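As a numerical aside, when Φ is a sum of point masses at the μ_v the process ϕ̅_t reduces to a system of n ordinary differential equations for the per-server busy fractions b_v, namely ḃ_v = λ(1 - b_v)/(1 - b̄) - μ_v b_v with b̄ the average of the b_v, so both c() and the bracket just stated can be evaluated directly. The Python sketch below does this by forward Euler up to t = log(n)/α; the values n = 100, λ = 1, ε = 0.1 and μ_v uniform on [1.5, 2.5] are illustrative assumptions and do not come from the paper.

import numpy as np

rng = np.random.default_rng(1)
n, lam, eps = 100, 1.0, 0.1
mu = rng.uniform(1.5, 2.5, size=n)                 # subcritical: mean(mu) > lam
mu_minus, mu_plus = mu.min(), mu.max()
alpha = 2 * (mu_minus + 6 * lam * mu_plus / (eps * mu_minus) + mu_plus)
t_end = np.log(n) / alpha
steps = 20000
dt = t_end / steps
b = np.zeros(n)                                    # per-server fluid busy fraction, phi_0 = 0
for _ in range(steps):                             # forward Euler on db_v/dt = lam(1-b_v)/(1-mean b) - mu_v b_v
    b = np.clip(b + dt * (lam * (1 - b) / (1 - b.mean()) - mu * b), 0.0, 1.0)
c = b.mean()                                       # c(mu) = integral of phi at time log(n)/alpha
lower = (lam / mu_plus) * (1 - np.exp(-mu_plus * t_end))
upper = (lam / mu_minus) * (1 + np.exp(-mu_minus * t_end))
print(f"c = {c:.4f}, bracket from the lemma = [{lower:.4f}, {upper:.4f}]")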
The proof of Lemma <ref> consists of defining simple upper and lower bounds for the process of interest and bounding them, and it is given in Appendix <ref>. Finally, Lemma <ref> states that c()=∫_0^∞ϕ̅_log(n)/α(dx), with ∫_0^∞ϕ̅_0(dx)=0. Then, applying Lemma <ref> for t=log(n)/α and ∫_0^∞ϕ̅_0(dx)=0, and taking the limit as n→∞ yields bounds on c(). This completes the proof of Theorem <ref>. §.§ Proof of Lemma <ref> Theorem <ref> implies that there exist constants 0 ≤ c^n(^n) ≤ 1 and a function ε(n)∈ o(1) such that |_^n[ I_v^n(∞) ] - 1 - c^n(^n)/λ|≤ε(n), for all n ≥ 1. Also, as discussed in Remark <ref>, ^n ∈ [μ_-, μ_+]^n. Then, for all n ≥ 1 and t ≥ 0, / tμ_v^n(t) = -g'(μ_v^n(t)) - h'(μ_v^n(t)) ( _μ^n(t)[I_v^n(∞)] + 1/μ_v^n(t)) = -g'(μ_v^n(t)) - h'(μ_v^n(t)) ( 1 - c^n(μ^n(t))/λ + 1/μ_v^n(t)) ±ε(n) ‖ h' ‖_∞, where x =y± z means that x∈ [y-z, y+z]. Therefore, since Assumption <ref> implies that g' and h' are increasing, and h'(x)/x is non-decreasing, we have / t( max_v ∈ [n]μ_v^n(t) - min_v ∈ [n]μ_v^n(t) ) ≤ - ( g'( max_v ∈ [n]μ_v^n(t) ) - g'( min_v ∈ [n]μ_v^n(t) ) ) - 1 - c^n(μ^n(t))/λ( h'( max_v ∈ [n]]μ_v^n(t) ) - h'( min_v ∈ [n]μ_v^n(t) ) ) + 2 ε(n) ‖ h' ‖_∞ ≤ -( σ_g + σ_h ( 1 - c^n(μ^n(t)) )/λ) ( max_v ∈ [n]μ_v^n(t) - min_v ∈ [n]μ_v^n(t) ) + 2 ε(n) ‖ h' ‖_∞. It follows that max_v ∈ [n]μ_v^n(t) - min_v ∈ [n]μ_v^n(t) ≤exp( -( σ_g + σ_h ( 1 - c^n(μ^n(t)) )/λ) t ) + 2 ε(n) ‖ h' ‖_∞/σ_g + σ_h ( 1 - c^n(μ^n(t)) )/λ. Taking the limit as t and n goes to infinity concludes the proof of Lemma <ref>. §.§ Proof of Theorem <ref> Lemma <ref> implies that there exists a function ϵ_d(n,t) that converges to zero as n and t go to infinity (regardless of the order in which these limits are taken), such that max_v ∈ [n]μ_v^n(t) - min_v ∈ [n]μ_v^n(t) ≤ε_d(n,t), for all n ≥ 1 and t ≥ 0. Now, we claim that there exist a constant c_1 > 0 and a function ε(n,t) that converges to zero as n and t go to infinity (regardless of the order in which these limits are taken), such that / t|λ g'(μ_v^n(t)) + h'(μ_v^n(t)) |≤ - c_1 |λ g'(μ_v^n(t)) + h'(μ_v^n(t)) | + ε(n,t), for all n and t sufficiently large, as long as μ_v^n(t)≠μ^* (which is the only value for which λ g'(μ_v^n(t)) + h'(μ_v^n(t))=0). This claim is sufficient to complete the proof of Theorem <ref>. To prove the claim, we distinguish again whether the system is in the supercritical or the subcritical regime. That is, we distinguish whether δ in Equation (<ref>) is non-positive or positive, respectively. §.§.§ Supercritical regime. In this subsection, we prove Equation (<ref>) in the supercritical regime. Note that Theorem <ref> implies that there exists a function ε_I(n) with ε_I(n)→ 0 as n→∞ such that _^n[ I_v^n(∞) ] ≤ε_I(n), for all n ≥ 1. Moreover, since δ≤ 0, we have μ_v^n(t) ≤min_v ∈ [n]μ_v^n(t) + ε_d(n,t) ≤λ + ε_d(n,t) ≤λ + (1 + μ_-) ε_d(n,t). Therefore, we have / tμ_v^n(t) = - g'(μ_v^n(t)) - h'(μ_v^n(t)) ( _μ^n(t)[ I_v^n(∞) ] + 1/μ_v^n(t)) ≥ - g'(μ_v^n(t)) - ‖ h' ‖_∞ε_I(n) - h'(μ_v^n(t))/μ_v^n(t) ≥ -g'(μ_v^n(t)) - ‖ h' ‖_∞ε_I(n) - h'(λ + (1 + μ_-) ε_d(n,t))/λ + (1 + μ_-) ε_d(n,t) ≥ -g'(λ) - ‖ g”‖_∞ (1 + μ_-) ε_d(n,t) - ‖ h' ‖_∞ε_I(n) - h'(λ)/λ - ( ‖ h”‖_∞/λ - h'(λ)/λ (λ + (1 + μ_-) ε_d(n,t))) (1 + μ_-) ε_d(n,t), where the first inequality follows from Equation (<ref>), the second one follows from Equation (<ref>) and the fact that h'(x) / x is non-decreasing in x, and the third one follows from the convexity of g and h. 
Also, combining the fact that λ g'(λ) + h'(λ) < 0 by Assumption <ref> with Equation (<ref>), we get that λ g'(μ_v^n(t)) + h'(μ_v^n(t)) < 0, for all n and t sufficiently large. Combining this with Equation (<ref>), we obtain / t|λ g'(μ_v^n(t)) + h'(μ_v^n(t)) | = - ( λ g”(μ_v^n(t)) + h”(μ_v^n(t)) ) / tμ_v^n(t) ≤ - ( σ_g + σ_h/λ) |λ g'(λ) + h'(λ) | + ‖ g”‖_∞ (1 + μ_-) ε_d(n,t) + ‖ h' ‖_∞ε_I(n) + ( ‖ h”‖_∞/λ - h'(λ)/λ (λ + (1 + μ_-) ε_d(n,t))) (1 + μ_-) ε_d(n,t) ≤ - ( σ_g + σ_h/λ) |λ g'(μ_v^n(t)) + h'(μ_v^n(t)) | + ‖ g”‖_∞ (1 + μ_-) ε_d(n,t) + ‖ h' ‖_∞ε_I(n) + ( ‖ h”‖_∞/λ - h'(λ)/λ (λ + (1 + μ_-) ε_d(n,t))) (1 + μ_-) ε_d(n,t), for all n and t sufficiently large, where the second inequality follows since g'(x) and h'(x) are continuous and μ_v^n(t) is bounded, which completes the proof of the claim. §.§.§ Subcritical regime. In this subsection, we prove Equation (<ref>) in the subcritical regime. Theorem <ref> implies that there exists constants 0 ≤ c^n(^n) ≤ 1 and a function ε_I(n) with ε_I(n)→ 0 as n→∞ such that |_^n[ I_v^n(∞) ] - 1 - c^n(^n)/λ|≤ε_I(n), with λ/max_v ∈ [n]μ^n_v - ε_I(n) ≤ c^n(^n) ≤λ/min_v ∈ [n]μ^n_v + ε_I(n), for all n ≥ 1. Then, | c^n(^n(t)) - λ/μ_v^n(t)| ≤| c^n(^n(t)) - λ/max_v' ∈ [n]μ_v'^n(t)| + |λ/max_v' ∈ [n]μ_v'^n(t) - λ/μ_v^n(t)| ≤ 2 |λ/min_v' ∈ [n]μ_v'^n(t) - λ/max_v' ∈ [n]μ_v'^n(t)| + ε_I(n) ≤2 λε_d(n,t)/μ_-^2 + ε_I(n), for all v ∈ [n]. Therefore, |_^n(t)[ I_v^n(∞) ] - 1 - λ / μ_v^n(t)/λ|≤|_μ^n(t)[ I_v^n(∞) ] - 1 - c^n(^n)/λ| + 1/λ| c^n(^n) - λ/μ_v^n(t)| ≤2 ε_d(n,t)/μ_-^2 + 2ε_I(n)/λ, and hence, / tμ_v^n(t) = - g'(μ_v^n(t)) - h'(μ_v^n(t)) ( _μ^n(t)[ I_v^n(∞) ] + 1/μ_v^n(t)) = - g'(μ_v^n(t)) - h'(μ_v^n(t))/λ± 2( ε_d(n,t)/μ_-^2 + ε_I(n)/λ) ‖ h' ‖_∞, where x =y± z means that x∈ [y-z, y+z].. Therefore, / t|λ g'(μ_v^n(t)) + h'(μ_v^n(t)) | = ±( λ g”(μ_v^n(t)) + h”(μ_v^n(t)) ) / tμ_v^n(t) ≤ -( σ_g + σ_h/λ) |λ g'(μ_v^n(t)) + h'(μ_v^n(t)) | + 2( ε_d(n,t)/μ_-^2 + ε_I(n)/λ) ‖ h' ‖_∞, for all n ≥ 1 and t ≥ 0, which completes the proof of the claim. §.§ Proof of Theorem <ref> First note that inf_^n ∈ℝ_+^n {λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) } = inf_μ≥ 0inf_^n ∈ℝ_+^n : 1/n∑_v∈[n]μ_v^n = μ{λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) } ≥inf_μ≥ 0inf_^n ∈ℝ_+^n : 1/n∑_v∈[n]μ_v^n = μ{λ g( 1/[ S^n(∞) ]) } + inf_^n ∈ℝ_+^n : 1/n∑_v∈[n]μ_v^n = μ{1/n∑_v ∈ [n] h(μ_v^n) } = inf_μ≥ 0inf_^n ∈ℝ_+^n : 1/n∑_v∈[n]μ_v^n = μ{λ g( 1/[ S^n(∞) ]) } + h(μ), where the last equality is due to the fact that h is nondecreasing and convex. To show that the first term is also minimized when ^n is homogeneous in the limit as n→∞, we argue that the sojourn time is asymptotically minimized in an homogeneous system. To show this, first note that for n∈ and any fixed service-rate vector ^n, a renewal argument implies that 𝔼[X^n_v(∞)] = 1/(1+μ^n_v𝔼_μ[I^n_v(∞)]), where 𝔼[X^n_v(∞)] is the fraction of time that server v is busy in steady state. Moreover, Theorem <ref> implies that there exists a function ε_I(n) with ε_I(n)→ 0 as n→∞ such that 𝔼[X^n_v(∞)]=1/1+μ^n_v𝔼_μ[I^n_v(∞)]≤λ/λ+μ^n_v[1 - c^n(^n)] + ε_I(n). Furthermore, in steady state we have that 1/n∑_v∈[n]μ^n_v 𝔼[X^n_v(∞)] = λ_n(^n), where λ_n(^n) is the per-server effective arrival rate to the system. Recall μ=1/n∑_v∈[n]μ_v^n and note that Equation (<ref>) yields λ_n(^n) = 1/n∑_v∈[n]μ^n_v 𝔼[X^n_v(∞)] ≤1/n∑_v∈[n]μ^n_v (λ/λ+μ^n_v[1 - c^n(^n)] + ε_I(n) ) ≤1/n∑_v∈[n]μ(λ/λ+μ^n_v[1 - c^n(^n)] + ε_I(n) ) ≤1/n∑_v∈[n]μ ( 𝔼[X^n_v(∞)] + 2ε_I(n) ), and thus 1/n∑_v∈[n]𝔼[X^n_v(∞)] ≥λ_n(^n)/μ - 2ε_I(n). 
On the other hand, Little's law implies that λ_n(^n) 𝔼[S^n(∞)] = 1/n∑_v∈[n]𝔼[X^n_v(∞)], and Lemmas <ref> and <ref> imply that λ_n(^n) ≥min{λ, μ} - ε_r(n,μ), where ε_r(n,μ)∈ o(1) as a function of n. Therefore, it follows that 𝔼[S^n(∞)] ≥1/μ - 2ε_I(n)/min{λ, μ} - ε_r(n,μ), and thus inf_^n ∈ℝ_+^n : 1/n∑_v∈[n]μ_v^n = μ{ g( 1/[ S^n(∞) ]) }≥ g(μ) - ε_T(n,μ), where ε_T(n,μ) converges uniformly (over all μ≥ 0) to zero, as n→∞. Combining this with Equation (<ref>) and taking the limit as n→∞, we obtain lim inf_n→∞inf_^n ∈ℝ_+^n {λ g( 1/[ S^n(∞) ]) + 1/n∑_v ∈ [n] h(μ_v^n) } ≥lim inf_n→∞inf_μ≥ 0{λ g(μ) - λε_T(n,μ) + h(μ) } = inf_μ≥ 0{λ g(μ) + h(μ) }, where the optimization problem in the right-hand side is now convex, and its infimum is attained at μ^*. Finally, Assumption <ref> (v) implies that μ^* > λ. § ACKNOWLEDGEMENTS The work was partially supported by the NSF grant CIF-2113027. - apalike § PROOFS OF AUXILIARY RESULTS FOR THEOREM <REF> Note that the evolution of the queue length process can be written as ∑_v ∈ [n] X_v(t) = N_a( ∫_0^t λ n 1_{∑_v ∈ [n] X_v(s) < n } s ) - N_d( ∫_0^t ∑_v ∈ [n]μ_v X_v(s) s ), where N_a and N_d are independent unit-rate Poisson processes. Let k := ⌊ (1 - 2 ε) n ⌋ and Y(t) be a Markov process defined as Y(t) = N_a'( ∫_0^t λ n 1_{ Y(s) < k } s ) - N_d'( ∫_0^t ( ∑_v ∈ [n]μ_v - 2 μ_- ε n ) 1_{ Y(s) > 0 } s ), where N_a' and N_d' are independent unit-rate Poisson processes. Note that if ∑_v ∈ [n] X_v(t) ≤ k then ∑_v ∈ [n]μ_v X_v(t) = ∑_v ∈ [n]μ_v - ∑_v ∈ [n]μ_v (1 - X_v(t)) ≤∑_v ∈ [n]μ_v - 2 μ_- ε n and hence it is not hard to verify that stochastically Y(t) ≤∑_v ∈ [n] X_v(t) for all t ≥ 0, given that the inequality holds at t=0. Moreover, Y(t) is a simple birth-death process and its steady state satisfies ( Y(∞) = i ) = ρ^i (ρ - 1)/ρ^k+1 - 1, i = 0,…, k, where ρ := λ n/∑_v ∈ [n]μ_v - 2 μ_- ε n≥λ/λ - μ_- ε > 1. Let l := ⌊ (1 - 3 ε) n ⌋. Then, leveraging the above stochastic ordering, we get ( ∑_v ∈ [n] X_v(∞) ≤ l ) ≤( Y(∞) ≤ l ) = ∑_i = 0^l ρ^i (ρ - 1)/ρ^k+1 - 1 = ρ^l+1 - 1/ρ^k+1 - 1≤1/ρ^k - l≤( 1 - μ_- ε/λ)^ε n. To prove Lemma <ref>, we will need the following technical result: Fix any v ∈ [n]. Then, [ λ n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))] = [ μ_v X_v(∞) ], [ μ_v X_v(∞) I_v(∞) ] = [ 1 - X_v(∞) ], [ λ n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] = [ 1 - X_v(∞) ]. Consider a system initiated at the steady state, i.e., X(0) = X(∞). Then, / t[ X_v(t) ] = [ λ n (1 - X_v(t))/∑_v' ∈ [n] (1 - X_v'(t)) - μ_v X_v(t) ] = 0, and / t[ I_v(t) ] = [ 1 - X_v(t) - μ_v X_v(t) I_v(t) ] = 0, and / t[ I_v(t) (1 - X_v(t)) ] = [ 1 - X_v(t) - λ n (1 - X_v(t)) I_v(t)/∑_v' ∈ [n] (1 - X_v'(t))] = 0, which show the three equalities in the lemma, respectively. Let E := {∑_v ∈ [n] (1 - X_v(∞)) ≤ 3 ε n }. Then, [ (1 - X_v(∞)) I_v(∞) ] = [ 1/λ∑_v' ∈ [n] (1 - X_v'(∞)) λ (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))1_E ] + [ (1 - X_v(∞)) I_v(∞) 1_E^c] ≤[ 3 ε/λλ n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] + √([ I_v(∞)^2 ] (E^c)) ≤3 ε/λ + √(2)/λ( 1 - μ_- ε/λ)^ε n / 2, where the first inequality follows by Cauchy-Schwartz and the second inequality follows by Lemma <ref>, the fact that I_v(∞) is stochastically dominated by an Exp(λ) random variable and Lemma <ref>. Also, [ X_v(∞) I_v(∞) ] = [ 1 - X_v(∞)/μ_v] ≤3 ε/λ + 1/μ_-( 1 - μ_- ε/λ)^ε n / 2, where the equality follows from Lemma <ref> and the inequality follows along similar lines as Equation (<ref>). Therefore, the proof follows by adding equations (<ref>) and (<ref>). The following algebraic lemma is used in the proof of Lemma <ref>. 
Let 0 ≤ y_1 ≤ x_1 and 0 ≤ y_2 ≤ x_2. Then, |y_1/x_1 - y_2/x_2|≤1/min{ x_1, x_2 }( | x_1 - x_2 | + | y_1 - y_2 |). Let x_1 ≤ x_2 without loss of generality. Then, by the mean value theorem, 1/x_1 - 1/x_2 = - x_1 - x_2/ξ^2, where ξ∈ [x_1, x_2]. Therefore, |y_1/x_1 - y_2/x_2|≤|y_1/x_1 - y_1/x_2| + |y_1/x_2 - y_2/x_2|≤y_1/x_1^2| x_1 - x_2 | + 1/x_2| y_1 - y_2 |≤1/x_1( | x_1 - x_2 | + | y_1 - y_2 |), which completes the proof. Let f: _+ → [0, 1] be any function. The queue length process evolves as 1/n∑_v ∈ [n] f(μ_v) X_v(t) = 1/n∑_v ∈ [n] f(μ_v) X_v(0) + 1/n∑_v ∈ [n] f(μ_v) N_v^(1)( ∫_0^t λ n (1 - X_v(s))/∑_v' ∈ [n] (1 - X_v'(s)) s ) - 1/n∑_v ∈ [n] f(μ_v) N_v^(2)( ∫_0^t μ_v X_v(s) s ), where N_v^(1) and N_v^(2) are independent unit-rate Poisson processes for v ∈ [n]. We rewrite the second and third terms on the right-hand side of Equation (<ref>) to get 1/n∑_v ∈ [n] f(μ_v) X_v(t) = 1/n∑_v ∈ [n] f(μ_v) X_v(0) + 1/n∑_v ∈ [n] f(μ_v) ∫_0^t λ n (1 - X_v(s))/∑_v' ∈ [n] (1 - X_v'(s)) s + M_f^(1)(t) - 1/n∑_v ∈ [n] f(μ_v) ∫_0^t μ_v X_v(s) s - M_f^(2)(t), where M_f^(1)(t) := 1/n∑_v ∈ [n] f(μ_v) M_v^(1)( ∫_0^t λ n (1 - X_v(s))/∑_v' ∈ [n] (1 - X_v'(s)) s ), M_f^(2)(t) := 1/n∑_v ∈ [n] f(μ_v) M_v^(2)( ∫_0^t μ_v X_v(s) s ), and M_v^(i)(t) := N_v^(i)(t) - t for i = 1, 2 and v ∈ [n]. Let ℱ_t be the natural filtration for X(t). Then, it is easy to check that M_f^(1)(t) and M_f^(2)(t) are square-integrable martingales with respect to ℱ_t. Moreover, we have [ ⟨ M_f^(1), M_f^(1)⟩(t) ] = 1/n^2∑_v ∈ [n] f(μ_v)^2 [ ⟨ N_v^(1), N_v^(1)⟩( ∫_0^t μ_v X_v(s) s ) ] = 1/n^2∑_v ∈ [n] f(μ_v)^2 [ ∫_0^t μ_v X_v(s) s ] ≤μ_+ t/n, and [ ⟨ M_f^(2), M_f^(2)⟩(t) ] = 1/n^2∑_v ∈ [n] f(μ_v)^2 [ ⟨ N_v^(2), N_v^(2)⟩( ∫_0^t λ n (1 - X_v(s))/∑_v' ∈ [n] (1 - X_v'(s)) s ) ] = 1/n^2∑_v ∈ [n] f(μ_v)^2 [ ∫_0^t λ n (1 - X_v(s))/∑_v' ∈ [n] (1 - X_v'(s)) s ] ≤λ t/n. Now, we claim the following: For all t ≥ 0, we have ∫_0^∞ϕ̅_t( x) ≤ 1 - δ / μ_+. Note that by definition, ∫_0^∞ϕ̅_t( x) is differentiable as a function of t. Then, if t is any time such that ∫_0^∞ϕ̅_t( x) = 1 - δ / μ_+, we have / t∫_0^∞ϕ̅_t( x) = λ( ∫_0^∞Φ( x) - ∫_0^∞ϕ̅_t( x) )/1 - ∫_0^∞ϕ̅_t( x) - ∫_0^∞ x ϕ̅_t( x) = ( λ - ∫_0^∞ x Φ( x) ) + ( ∫_0^∞ x Φ( x) - ∫_0^∞ x ϕ̅_t( x) ) ≤ -δ + μ_+ ( 1 - ∫_0^∞ϕ̅_t( x) ) = 0. Thus, the claim follows. Next, let us define d_f(t) := |1/n∑_v ∈ [n] f(μ_v) X_v(t)- ∫_0^∞ f(x) ϕ̅_t( x) |. Then, Lemma <ref>, Equation (<ref>), and Claim <ref> imply that d_f(t) ≤∫_0^t λ|1/n∑_v ∈ [n] f(μ_v) (1 - X_v(s))/1/n∑_v ∈ [n] (1 - X_v(s)) - ∫_0^∞ f(x) Φ( x) - ∫_0^∞ f(x) ϕ̅_s( x)/1 - ∫_0^∞ϕ̅_s( x)| s + ∫_0^t |1/n∑_v ∈ [n]μ_v f(μ_v) X_v(s) - ∫_0^∞ x f(x) ϕ̅_s( x) | s + | M_f^(1)(t) | + | M_f^(2)(t) | ≤∫_0^t λ( 2 μ_+ (d_1(s) + d_f(s))/δ + 1_{1/n∑_v ∈ [n] (1 - X_v(s)) < δ/2 μ_+}) s + ∫_0^t μ_+ d_g(s) s + | M_f^(1)(t) | + | M_f^(2)(t) | ≤∫_0^t λ( 2 μ_+ (d_1(s) + d_f(s))/δ + 1_{ d_1(s) > δ/2 μ_+}) + μ_+ d_g(s) s + | M_f^(1)(t) | + | M_f^(2)(t) | ≤∫_0^t λ( 2 μ_+ (2 d_1(s) + d_f(s))/δ) + μ_+ d_g(s) s + | M_f^(1)(t) | + | M_f^(2)(t) |, where g(x) := x f(x) / μ_+ for x ∈_+. Then, sup_f: _+ → [0, 1][ sup_t ∈ [0, T] d_f(t) ] ≤∫_0^T ( 6 λμ_+ /δ + μ_+ ) sup_f: _+ → [0, 1][ sup_s ∈ [0, t] d_f(s) ] t + sup_f: _+ → [0, 1][ sup_t ∈ [0, T]| M_f^(1)(t) |] + sup_f: _+ → [0, 1][ sup_t ∈ [0, T]| M_f^(2)(t) |]. 
Note that, by Doob's maximal inequality and equations (<ref>) and (<ref>), we have [ sup_t ∈ [0, T]| M_f^(1)(t) |] ≤√([ sup_t ∈ [0, T] M_f^(1)(t)^2 ])≤√(4 [ M_f^(1)(T)^2 ])≤√(4 μ_+ T/n), and [ sup_t ∈ [0, T]| M_f^(2)(t) |] ≤√([ sup_t ∈ [0, T] M_f^(2)(t)^2 ])≤√(4 [ M_f^(2)(T)^2 ])≤√(4 λ T/n). Therefore, combining equations (<ref>), (<ref>) and (<ref>), and then using Grönwall's inequality completes the proof of Lemma <ref>. We will construct a joint probability space such that the arrival and potential departure epochs are coupled in the two systems. Then we will prove the statement by induction on the coupled arrival and departure epochs. Let t ≥ 0 be an arrival epoch and assume that X^(1)(t-) ≤X^(2)(t-) before the arrival. Define S^(i) := { v ∈ [n] : X_v^(i)(t-) = 0 } to be the set of idle servers in system i for i = 1, 2 and note that S^(2)⊆ S^(1) by the induction hypothesis. If S^(1) = ∅ or S^(2) = ∅, then X^(1)(t) ≤X^(2)(t) is trivially maintained. Otherwise, we assign the idle servers in S^(2) an index from 1 to | S^(2)| and we assign the idle servers in S^(1)∖ S^(2) an index from | S^(2)| + 1 to | S^(1)| (if any). Let U be a uniform [0, 1] random variable, independent across time epochs and shared between the two systems. Now the task assignment decision (which depends on U in that epoch) is taken in the two systems as follows: * In system 1, we assign the task to the j-th idle server for j = 1, …, | S^(1)| if and only if U ∈[ j-1/| S^(1)|, j/| S^(1)|). * In system 2, we assign the task to the j-th idle server for j = 1, …, | S^(2)| if and only if U ∈[ j-1/| S^(1)|, j/| S^(1)|) ∪[ | S^(2)|/| S^(1)| + (j-1) ( 1/| S^(2)| - 1/| S^(1)|), | S^(2)|/| S^(1)| + j ( 1/| S^(2)| - 1/| S^(1)|) ). Note that the probability that any idle server is picked equals 1 / | S^(1)| in system 1 and 1 / | S^(2)| in system 2 as required. If U < | S^(2)| / | S^(1)|, then the task is routed to the same server in both systems and hence X^(1)(t) ≤X^(2)(t) is trivially maintained. Now, if U ≥| S^(2)| / | S^(1)|, then the task is routed to a server v ∈ S^(1)∖ S^(2) in system 1. This means that server v is already busy in system 2 and hence X^(1)(t) ≤X^(2)(t) is maintained. The arrival in system 2 only increases the queue lengths in system 2 and hence does not invalidate the inequality. We also synchronize the potential departure epochs in server v in the two systems for all v∈ [n]. Let t ≥ 0 be such a potential departure epoch at server v and assume that X^(1)(t-) ≤X^(2)(t-) before the departure. After the departure, X_v^(1)(t) = X_v^(2)(t) = 0 and hence the inequality X^(1)(t-) ≤X^(2)(t-) is trivially maintained. Note that the occupancy processes in the two systems evolve as ∑_v ∈ [n] X_v^(i)(t) = ∑_v ∈ [n] X_v^(i)(0) + N_a^(i)( ∫_0^t λ n 1_{∑_v ∈ [n] X_v^(i)(s) < n } s ) - N_d^(i)( ∫_0^t ∑_v ∈ [n]μ_v X_v^(i)(s) s ), where N_a^(i) and N_d^(i) are independent unit-rate Poisson processes for i = 1, 2. Therefore, using Fubini's theorem, [ ∑_v ∈ [n] X_v^(i)(t) ] = [ ∑_v ∈ [n] X_v^(i)(0) ] + ∫_0^t λ n ( ∑_v ∈ [n] X_v^(i)(s) < n ) s - ∫_0^t [ ∑_v ∈ [n]μ_v X_v^(i)(s) ] s, for i = 1, 2. We let the two processes be defined on the joint probability space defined in Lemma <ref>. This implies that almost surely, X^(1)(t) ≤X^(2)(t) for all t ≥ 0 and hence, ( ∑_v ∈ [n] X_v^(2)(t) < n ) ≤( ∑_v ∈ [n] X_v^(1)(t) < n ) for all t ≥ 0. Further, note that the coupling also ensures that ∑_v ∈ [n]( X_v^(2)(t) - X_v^(1)(t) ) is nonincreasing and it decreases by 1 whenever there is a departure from system 2 but not from system 1. 
Thus, [ ∑_v ∈ [n]( X_v^(2)(t) - X_v^(1)(t) ) ] ≤[ ∑_v ∈ [n]( X_v^(2)(0) - X_v^(1)(0) ) ] - ∫_0^t [ ∑_v ∈ [n]μ_v ( X_v^(2)(s) - X_v^(1)(s) ) ] s ≤ n - ∫_0^t μ_- [ ∑_v ∈ [n]( X_v^(2)(s) - X_v^(1)(s) ) ] s. Therefore, we have [ ∑_v ∈ [n]( X_v^(2)(t) - X_v^(1)(t) ) ] ≤ n exp(-μ_- t). Dividing by n, the result follows. We consider two copies of the queue length process such that X^(1)(0) = 0 and X^(2)(0) = X^(2)(∞) and defined on the joint probability space of Lemma <ref>. Then, Lemma <ref> implies that [ 1/n∑_v ∈ [n]| X_v^(2)(t) - X_v^(1)(t) |] ≤exp( -μ_- t ). Also, Lemma <ref> implies that [ |1/n∑_v ∈ [n] X_v^(1)(t) - ∫_0^∞ϕ̅_t( x) |] ≤√(8 (μ_+ + λ) t/n)exp( ( 6 λμ_+/δ + μ_+ ) t ) ≤√(8 (μ_+ + λ) t/n)exp( ( 6 λμ_+/εμ_- + μ_+ ) t ). Let t = log(n) / α. Then, [ |1/n∑_v ∈ [n] X_v^(2)(t) - c |] ≤exp( -μ_- t ) + √(8 (μ_+ + λ) t/n)exp( ( 6 λμ_+/εμ_- + μ_+ ) t ) = 1 + √(8 (μ_+ + λ) log(n)/α)/n^μ_-/α. Since X^(2)(0) = X^(2)(∞), we have X^(2)(t) = X^(2)(∞), and the result follows. As before, note that the occupancy process, starting from the empty state, evolves as ∑_v ∈ [n] X_v(t) = N_a( ∫_0^t λ n 1_{∑_v ∈ [n] X_v(s) < n } s ) - N_d( ∫_0^t ∑_v ∈ [n]μ_v X_v(s) s ) where N_a and N_d are independent unit-rate Poisson processes. Let k := ⌊ (1 - 2 ε) n ⌋ and let Y(t) be a Markov process defined as Y(t) = N_a'( ∫_0^t λ n 1_{ Y(s) < n } s ) - N_d'( ∫_0^t ( ∑_v ∈ [n]μ_v - 2 μ_+ ε n ) 1_{ Y(s) > k } s ), where N_a' and N_d' are independent unit-rate Poisson processes. Note that if ∑_v ∈ [n] X_v(t) > k then ∑_v ∈ [n]μ_v X_v(t) = ∑_v ∈ [n]μ_v - ∑_v ∈ [n]μ_v (1 - X_v(t)) ≥∑_v ∈ [n]μ_v - 2 μ_+ ε n and hence standard coupling can be constructed so that almost surely, Y(t) ≥∑_v ∈ [n] X_v(t) for all t ≥ 0, provided it is satisfied at t=0. Moreover, Y(t) is a simple birth-death process and its steady state satisfies ( Y(∞) = i ) = ρ^i (1 - ρ)/ρ^n+1 - ρ^k, i = k, k+1, …, n, where ρ := λ n ( ∑_v ∈ [n]μ_v - 2 μ_+ ε n )^-1≤λ/λ + μ_+ ε < 1. Let l := ⌊ (1 - ε) n ⌋. Then, ( ∑_v ∈ [n] X_v(∞) ≥ l ) ≤( Y(∞) ≥ l ) = ∑_i = l^n ρ^i (1 - ρ)/ρ^k - ρ^n+1 = ρ^l - ρ^n+1/ρ^k - ρ^n+1≤ρ^l - k≤( 1 - μ_+ ε/λ + μ_+ ε)^ε n. We know that [ (1 - X_v(∞)) I_v(∞) ] = [ 1/n∑_v' ∈ [n] (1 - X_v'(∞)) n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] = [ 1 - c/λλ n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] ±[ |1/n∑_v' ∈ [n] (1 - X_v'(∞)) - (1 - c) |n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))]. By Lemma <ref>, the first term on the right-hand side of Equation (<ref>) satisfies [ 1 - c/λλ n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] = [ (1 - c) (1 - X_v(t))/λ], Now let E_1 := {∑_v' ∈ [n](1 - X_v'(∞)) ≥δ n/3 μ_+} and E_2 := {|1/n∑_v' ∈ [n] X_v' - c |≤ε}. The second term on the right-hand side of Equation (<ref>) satisfies [ |1/n ∑_v' ∈ [n] (1 - X_v'(∞)) - (1 - c) |n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] = [ |1/n∑_v' ∈ [n] X_v'(∞)) - c |n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))( 1_E_1 + 1_E_1^c) ] ≤[ |1/n∑_v' ∈ [n] X_v'(∞) - c |n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞)) (1_E_2 + 1_E_2^c) 1_E_1] + √([ n^2 I_v(∞)^2 ] (E_1^c)) ≤[ ε/λλ n (1 - X_v(∞)) I_v(∞)/∑_v' ∈ [n] (1 - X_v'(∞))] + √([ I_v(∞)^2/(δ / (3 μ_+))^2] (E_2^c)) + √(2) n/λ( 1 - δ / 3/λ + δ / 3)^δ n / (6 μ_+) ≤ε/λ + 3 √(2)μ_+/δλ√(1 + √(8 (μ_+ + λ) log(n)/α)/n^μ_-/α) + √(2) n/λ( 1 - δ/3 λ + δ)^δ n / (6 μ_+), where the first inequality follows by Cauchy-Schwartz, the second inequality follows by Cauchy-Schwartz, the fact that I_v(∞) is stochastically dominated by an Exp(λ), and Lemma <ref>, and the third inequality follows by Lemmas <ref> and <ref>. 
Also, [ X_v(∞) I_v(∞) ] = [ (1 - X_v(∞))/μ_v] = [ 1/μ_v n∑_v' ∈ [n] (1 - X_v'(∞)) n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))] = [ 1 - c/λμ_vλ n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))] ±[ 1/μ_v|1/n∑_v' ∈ [n] (1 - X_v'(∞)) - (1 - c) |n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))], where the first equality follows by Lemma <ref>. The first term on the right-hand side of Equation (<ref>) satisfies [ 1 - c/λμ_vλ n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))] = [ (1 - c) X_v(∞)/λ], by Lemma <ref>. The second term on the right-hand side of Equation (<ref>) satisfies [ 1/μ_v|1/n∑_v' ∈ [n] (1 - X_v'(∞)) - (1 - c) |n (1 - X_v(∞))/∑_v' ∈ [n] (1 - X_v'(∞))] ≤ε/λ + 3 μ_+/δμ_-√(1 + √(8 (μ_+ + λ) log(n)/α)/n^μ_-/α) + n/μ_-( 1 - δ/3 λ + δ)^δ n / (6 μ_+), which follows along similar lines as Equation (<ref>). Therefore, the proof follows by adding equations (<ref>) and (<ref>) and the fact that ε≤δ / μ_-. We define y_1(t) := ∫_0^t ( λ - max_v ∈ [n]μ_v y_1(s) ) s, and y_2(t) := ∫_0^t ( λ - min_v ∈ [n]μ_v y_2(s) ) s, with y_1(0) = y_2(0) = ∫_0^∞ϕ̅_0( x). Now, we claim that y_1(t) ≤∫_0^∞ϕ̅_t( x) ≤ y_2(t). To see why, note that y_1(t), ∫_0^∞ϕ̅_t( x) and y_2(t) are differentiable as a function of t. If t is any time such that ∫_0^∞ϕ̅_t( x) = y_1(t), then / t∫_0^∞ϕ̅_t( x) = λ - ∫_0^∞ x ϕ̅_t( x) ≥λ - max_v ∈ [n]μ_v ∫_0^∞ϕ̅_t( x) = λ - max_v ∈ [n]μ_v y_1(t) = / t y_1(t), and, if t is any time such that ∫_0^∞ϕ̅_t( x) = y_2(t), / t∫_0^∞ϕ̅_t( x) = λ - ∫_0^∞ x ϕ̅_t( x) ≤λ - min_v ∈ [n]μ_v ∫_0^∞ϕ̅_t( x) = λ - max_v ∈ [n]μ_v y_2(t) = / t y_2(t), from which the claim follows. Moreover, Equation (<ref>) implies that / t| y_1(t) - λ/max_v ∈ [n]μ_v| = - ( max_v ∈ [n]μ_v ) |λ/max_v ∈ [n]μ_v - y_1(t) |, / t| y_2(t) - λ/min_v ∈ [n]μ_v| = - (min_v ∈ [n]μ_v ) |λ/min_v ∈ [n]μ_v - y_2(t) |, and hence | y_1(t) - λ/max_v ∈ [n]μ_v| = | y_1(0) - λ/max_v ∈ [n]μ_v| e^ -max_v ∈ [n]μ_v t, | y_2(t) - λ/min_v ∈ [n]μ_v| = | y_2(0) - λ/min_v ∈ [n]μ_v| e^-min_v ∈ [n]μ_v t, which completes the proof.
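As a quick sanity check of the birth–death comparisons used above, the truncated-geometric stationary law can be evaluated exactly and compared with the resulting exponential bound. The Python sketch below does this for the lower-bounding chain Y of the supercritical concentration lemma; the values n = 200, ε = 0.1, λ = 1 and μ_v ≡ μ_- = 0.8 (so the instance is supercritical) are illustrative assumptions only.

import numpy as np

lam, mu_minus, eps, n = 1.0, 0.8, 0.1, 200         # supercritical instance: all mu_v = mu_- <= lam
k = int((1 - 2 * eps) * n)
l = int((1 - 3 * eps) * n)
rho = lam * n / (mu_minus * n - 2 * mu_minus * eps * n)
weights = rho ** np.arange(k + 1)
pmf = weights / weights.sum()                      # stationary law of the lower-bounding chain Y
tail = pmf[: l + 1].sum()                          # P(Y(infinity) <= l)
bound = (1 - mu_minus * eps / lam) ** (eps * n)
print(f"P(Y <= l) = {tail:.3e}  vs  bound (1 - mu_- eps / lam)^(eps n) = {bound:.3e}")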
http://arxiv.org/abs/2306.10747v1
20230619073412
A dynamic combustion model for supersonic turbulent combustion
[ "Xu Zhu", "Jian An", "Nana Wang", "Jian Zhang", "Zhuyin Ren" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Institute for Aero Engine, Tsinghua University, Beijing, China [email protected] A DYNAMIC COMBUSTION MODEL FOR SUPERSONIC TURBULENT COMBUSTION Xu Zhu, Jian An*, Nana Wang, Jian Zhang, Zhuyin Ren July 31, 2023 ============================================================== Supersonic combustion plays a vital role in various applications, including scramjets, dual-mode ramjets, and pulse detonation engines. However, the flame characteristics can vary significantly, depending on the application. To model supersonic combustion, large eddy simulation coupled with a partially stirred reactor (PaSR) is commonly used. This method assumes that reactions occur at turbulent fine flame structure in a computational cell, making it less effective for flames governed by not only turbulent mixing but also homogenous autoignition, which is common under supersonic flow. To address this limitation, this study proposes a novel dynamic combustion model to enable more versatile modeling of supersonic flames. The model utilizes a two-delta probability density function, which represents the sub-grid composition variation and models the filtered sub-grid reaction rate. The corresponding weights are dynamically modeled based on the level of cell composition inhomogeneity. The new dynamic model is tested in a supersonic combustion case study with a strut-cavity flame holder. Results demonstrate that the new dynamic model can properly recover the limits of supersonic flames that are primarily governed by homogenous autoignition and turbulent mixing. Keywords: Supersonic combustion, Large eddy simulation, Partially stirred reactor, Dynamic combustion model § INTRODUCTION To achieve navigation within and beyond the atmosphere, hypersonic flight technology has garnered significant interest. At high altitudes and velocities, the scramjet engine can compress air through the ramming action alone, achieving a high pressure ratio without requiring rotating components. To investigate the underlying mechanisms, researchers have designed sophisticated experimental devices focusing on cavity-based combustors <cit.> and strut-based combustors <cit.>. However, these studies often yield only basic flame structure and wall static pressure data, making it challenging to accurately portray the spatio-temporal evolution of flow fields and thermochemical information within the combustion chamber. Consequently, numerical simulations of turbulent combustion have emerged as another crucial method for examining supersonic combustion phenomena. Large eddy simulation (LES) offers a promising approach for elucidating the complexities of turbulence. However, due to the short residence time in hypersonic combustion, the mixing time scale closely aligns with the chemical reaction time scale, resulting in a strong coupling effect between the two. This interdependence makes it challenging to accurately model the source term of the chemical reaction. Numerous methods have been proposed and applied for modeling turbulence-chemistry interaction in supersonic reacting flows, including well-stirred reactor (WSR) models <cit.>, partially stirred reactor (PaSR) models <cit.>, quasi-laminar (QL) reaction rate model <cit.>, the eddy dissipation concept (EDC) model <cit.>, flamelet progress variable models <cit.>, and transported probability density function (TPDF) models <cit.>. 
The PaSR combustion model, capable of accounting for finite-rate chemistry and the intricate turbulence-chemistry interaction, has demonstrated its effectiveness for supersonic combustion. It has been successfully applied in LES studies of ONERA <cit.>, DLR <cit.>, and HyShot II scramjet combustors <cit.>. As mentioned in <cit.>, the reacting structures in highly turbulent supersonic flames exhibit a correlation with the dissipative structures of the flow, which are typically smaller than the filter width Δ. Generally, these flames are highly wrinkled and fragmented, with the assumption that they consist of reacting fine structures, while the surroundings are dominated by large-scale structures. As a result, each LES cell can be regarded as a partially stirred reactor containing homogeneous fine structures, exchanging mass and energy with the surrounding environment. Turbulence combustion interaction in the well-stirred region adheres to the fundamental assumptions of the WSR model, while chemical reactions in the reverse structure region occur considerably slower due to molecular diffusion constraints. As a result, the filtered chemical reaction source term can be ignored during the solution process. However, given that the PaSR model's basic assumption relies on turbulence mixing intensity and quasi-static process approximation, it exhibits limitations in addressing ignition or self-ignition phenomena <cit.> with weak coupling relationships to turbulence action. Hypersonic combustion processes frequently involve ignition or self-ignition events induced by high temperatures and significant shear. Thus, it is essential to extend the PaSR model's capabilities to encompass supersonic combustion scenarios. This study tackles this challenge by developing a dynamic partially premixed combustion model, building upon the original PaSR method to close the chemical reaction source terms in complex and highly transient turbulent combustion for large eddy simulation. A numerical simulation is performed on a typical rocket based combined cycle (RBCC) model combustion chamber example, and the results of the dynamic method are compared with those obtained using the original PaSR method. § METHODOLOGY §.§ Large Eddy Simulation Theory For LES, the flow field value ϕ undergoes filtering through a low-pass filtering function G(x), as shown in the following equation: ϕ̅(x, t)=∫_V ϕ(x^', t) G(x-x^') d x^' Consequently, the filtered control equations are obtained ∂ρ̅/∂ t+∂ρ̅ũ_j/∂ x_j=0 ∂/∂ t(ρ̅ũ_i)+∂/∂ x_j(ρ̅ũ_i ũ_j)=∂p̅/∂ x_i+∂/∂ x_j(τ̅_i j+τ̅_i j^t) ∂/∂ t(ρ̅Ỹ_s)+∂/∂ x_j(ρ̅ũ_j Ỹ_s)=∂/∂ x_j(ρ̅ D ∂Ỹ_s/∂ x_j)-∂τ̅_s/∂ x_j+ω̅_s s=1, …, n s-1 ∂/∂ t(ρ̅Ẽ)+∂/∂ x_j(ρ̅ũ_j H)=∂/∂ x_j[k ∂T/∂ x_j+∑_s=1^n s(ρ̅ D h̃_s ∂Ỹ_s/∂ x_j)+ũ_i τ̅_i j+τ̅_e̅^t] where ρ̅ is mixture density, t is time, ũ_j is velocity component in the direction of j in Cartesian coordinate system, x_j is spatial coordinate in the direction of j, p̅ is filtering pressure, τ̅_ij and τ̅_ij^t for molecular and turbulent viscosity tensor respectively, Ỹ_̃s̃ for the first s mass fraction of a component, D for the diffusion coefficient, τ̅_s=ρ̅(u_jY_s-ũ_jỸ_̃s̃) for modeled scalar turbulence flow, ω̅_s is the filtered chemical reaction source term of component s, E is the total energy, H is the total enthalpy, T is the temperature, k is the thermal conductivity, n s is the number of components, h̃_s is the enthalpy of component. 
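As a side illustration (not part of the solver used later in this study), the filtering operation defined at the start of this subsection can be demonstrated on a one-dimensional synthetic field. The Python sketch below applies a top-hat kernel of width Δ and also forms the density-weighted (Favre) average underlying the tilde quantities; the field, the density profile and the filter width are all made up for the example.

import numpy as np

rng = np.random.default_rng(0)
nx, width = 512, 16                                  # grid points and filter width in points
x = np.linspace(0.0, 1.0, nx)
phi = np.sin(8 * np.pi * x) + 0.3 * rng.standard_normal(nx)   # resolved motion + small-scale content
rho = 1.0 + 0.2 * np.sin(2 * np.pi * x)                       # synthetic density field
G = np.ones(width) / width                                    # top-hat filter kernel

def filt(q):
    # discrete analogue of the filter integral with kernel G
    return np.convolve(q, G, mode="same")

phi_bar = filt(phi)                                           # Reynolds-filtered field
phi_tilde = filt(rho * phi) / filt(rho)                       # Favre (density-weighted) filtered field
print(f"rms of sub-filter part: {np.std(phi - phi_bar):.3f}, rms of filtered field: {np.std(phi_bar):.3f}")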
§.§ Derivation of turbulent combustion source term The probability density function of the scalar in the grid ϕ={Y_1, Y_2, … Y_n s-1, H} is ℘(ϕ;𝐱,t)=αδ(ϕ^m-ϕ)+βδ(ϕ^*-ϕ) where δ(x)={[ +∞, x=0; 0, x ≠ 0 ], ∫_-∞^+∞δ(x) d x=1. for a probibility distribution, ∫_-∞^+∞℘(ϕ;x,t)dϕ=1, which means α + β = 1, thus the averaged filtered property ϕ= {Ỹ_1, …, Ỹ_n s-1, H̃} can be represented as ϕ=1/ρ̅∫℘(ϕ ; x, t) ρϕ d ϕ=1/ρ̅(αρ^m ϕ^m+βρ^* ϕ^*) Assuming the density variation within a grid is minimal, ρ^m=ρ^*=ρ̅, ϕ(𝐱,t)=αϕ^m(𝐱,t)+βϕ^*(𝐱,t) Considering the turbulent mix time as τ_mix, we can derive the corresponding control equation of mass fraction: ρ̅DY^*Dt=-ρ̅(Y^*-Y̅)τ_mix+ω̇_s(ρ̅,ϕ^*) The source term of the chemical reaction, ω̇_s(ρ,ϕ), can be obtained by integrating the reaction rates [ ω̇_s(ρ,ϕ)=∫∫_ρ∫_ϕ℘(ϕ)ω̇_s(ρ,ϕ)dϕ dρ; ≈αω̇_s(ρ̅,ϕ^m)+βω̇_s(ρ̅,ϕ^*); ] In stable pulsating state regions, the mass and energy are in a local equilibrium state, meaning that the mass/energy fraction mixing and reaction rate undergo a quasi-static process. Consequently, we let the left-hand side of Eq. <ref> be equal to 0. ρ̅(Y^*-Y)τ_mix≈ω̇_s(ρ̅,ϕ^*) Considering the chemical reaction time as τ_c, we simplify chemical process using 1-order approximation, i.e. ω̇(ρ̅,ϕ^l)≈ω̇(ρ̅,ϕ)- 1/τ_c·ρ̅(ϕ^*-ϕ) Subsititute Eq.<ref> into Eq.<ref> ω̇(ρ̅,ϕ^*)≈ω̇(ρ̅,ϕ)-1τ_c·τ_mixω̇(ρ̅,ϕ^*) ω̇(ρ̅,ϕ^*)≈τ_c/τ_c+τ_mixω̇(ρ̅,ϕ̃) Subsititute Eq.<ref> into Eq.<ref>, and let τ_c/τ_c+τ_mix=κ ω̇_s(ρ,ϕ)≈αω̇_s(ρ̅,ϕ^m)+(1-α)κω̇_s(ρ̅,ϕ) where α represents the weight of the mixing state of scalars, which denotes the homogeneity of the reaction scalars in the subgrid. In this study, an attempt is made to dynamically model α values without introducing additional variables. The parameter α should exhibit the following properties: it should lie between 0 and 1, and have a positive correlation with premixed condition in the flow field. Thus, flame index (FI) is introduced to identify the premix condition of the flow field FI=∇̃ ̃Ỹ_Fuel·∇̃ ̃Ỹ_ox/|∇Ỹ_Fuel|×|∇Ỹ_ox| where ∇ denotes the Nabla operator, with ∇=∂/∂ x i+∂/∂ y j+∂/∂ z k, Ỹ_Fuel and Ỹ_ox represent the mass fractions of fuel and oxidant, respectively. When FI = 1, the local flow field exhibits a fully premixed state; when FI = -1, the local flow field is in a completely unpremixed state. After identifying the premixed and unpremixed states of the local flow field, we compute α using the evenness of the weighted mixing fraction (corresponding to unpremixed conditions) and the reaction process variables (corresponding to premixed conditions). Drawing inspiration from Kuron et al.'s <cit.> mixing time scales model, the final expression for α is modeled as follows: α=1-FI/2[1-Z^''2/Z̃(1-Z̃)]+1+FI/2[1-c^''2/c̃(1-c̃)] where Z is the mixing fraction, which represents the mixing of fuel and oxidizer in the local flow field, and c is the variation of reaction process, which represents the degree of reaction. Because ϕ^m≈ϕ,ω̇(ρ̅,ϕ^m)≈ω̇(ρ̅,ϕ), substitute into Eq. <ref>, the closed average chemical reaction source term can be written as ω̅_s(ρ,ϕ)≈αω̇_s(ρ̅,ϕ)+(1-α)κω̇_s(ρ̅,ϕ) The model comprises two alpha-weighted components: as α approaches zero, it converges to the PaSR model. When α approaches 1, the subgrid fluctuations of the components within the grid approach zero, which can be considered as a homogeneous reactor. In this case, the filtered reaction rate corresponds to the reaction rate obtained through analytical quantities. Another way to interpret this model is to rewrite Eq. 
<ref> as follows: ω̅_s(ρ,ϕ)≈α(1-κ)ω̇_s(ρ̅,ϕ)+ κω̇_s(ρ̅,ϕ) where the first term on the right-hand side of Eq. <ref> represents the homogeneous reactor term taking into account premixing, while the second term represents the original PaSR model term. Therefore, the essence of this model is to consider dynamically the chemical reaction source term under the premixing state on the basis of the PaSR model. The variances of the mixed fraction and the reaction process variables ϕ^''2 and c^''2 in Eq.<ref> need to be further modeled for numerical simulation. The model proposed by Pierce et al. <cit.> based on the sub-grid balance hypothesis is used in this study, which assumes that the local balance exists between the generation rate and dissipation of a sub-grid variance of a conserved quantity. For variable ϕ, its Favre variance ϕ^''2 is given by, ϕ”^2=CΔ^2(∇ϕ̃)^2 where Δ is the grid filter length of the LES, representing the sub-grid turbulence length scale, and C is the model constant. To avoid non-physical variances, ϕ^''2 is restricted by 0 ≤ϕ^'' 2≤ϕ̃(1-ϕ̃). The coefficients τ_c and τ_mix in the model parameter κ=τ_c/τ_c+τ_mix are obtained using analytical methods. τ_c adopts the defining method proposed by Golovitchev et al.<cit.> τ_c=∑_r=1^nrc_tot/∑_n=1^NS,RHSν_n,rk_f,r where c_tot is the total concentration of the mixture, r and nr are the index and total number of reactions, ν_n,r represents the stoichiometric coefficient of the n-th component in the i-th reaction, and k_f,r is the forward reaction rate coefficient. The value of τ_mix is calculated using τ_mix=C_mix√(μ_effρϵ) where μ_eff represents the sum of the turbulent viscosity coefficient and the molecular viscosity coefficient, while ε denotes the dissipation rate of turbulent kinetic energy, and C_mix serves as a model constant. § NUMERICAL SETUP To verify and evaluate the effectiveness of the new model, we implemented numerical simulation on a typical RBCC model combustor with a strut-cavity flameholder. Since it is difficult to organize the formation, ignition and combustion of the mixture under the condition of high speed incoming flow, the cross jet is usually adopted to enhance the oil-gas mixing, and the cavity or strut configuration is mostly adopted in the combustor to prolong the residence time of the gas, providing favorable conditions for ignition and stable combustion. The combustion chamber uses jet rocket for ignition, and uses both the strut and cavity to stabilize the flame, each side of the strut has 10 kerosene spray holes with a diameter of 0.5mm, and the fuel is ejected from the strut. The schematic diagram of RBCC is shown in Figure fig:diagram, and the inlet parameters for the isolator is shown in Table <ref>. In this study, the mass flow rate of the rocket is 0.12kg/s, the equivalent ratio is 1.06, and the combustion chamber pressure is 1.57Mpa. According to the calculation of chemical equilibrium thermodynamics, the exit temperature of the rocket is 1855K. The exit group is divided into H2 (mass fraction of 0.06) and CO (mass fraction of 0.94). And the kerosene flow from the strut is 0.2kg/s. For more detailed dimensions and boundary condition parameters of the configuration, please refer to <cit.>. In this study, large eddy simulation is used to simulate the configuration. The grids are full hexahedral, with a total of 12.0 million, as shown in Figure <ref>. The grids near fuel injection position, the strut region and the cavity are densed to capture the large-scale shear vortices in the flow field. 
A symmetric boundary condition is applied to the symmetry surface to save computational resource. The solver uses the sprayFOAM solver in OpenFOAM 2.3.1, which can be used to solve chemical reaction simulations with droplet sprays. The turbulence model is one-equation k model <cit.>. The combustion model adopts the original PaSR model and the dynamic model in this study. The adopted droplet breaking model is ReitzDiwakar model, and the droplet heat transfer model is RanzMarshall model. After the flow field is stabilized, an average of five throughflows is computed as the mean flow field. The calculation takes approximately four days on a 10-node cluster with 24 cores each, resulting in a total computation time of around 2.3 × 10^4 CPU hours. § RESULTS AND DISCUSSION Figure <ref> presents a comparison of simulation and test results regarding static wall pressure along streamwise direction. As illustrated in the figure, both results corresponding to the two combustion models effectively capture the pressure changes of the hypersonic fluid within the combustion chamber. The static pressure increases from the inlet section, mainly due to the high back pressure generated by combustion, which results in the formation of a series of shock waves in the isolator section. As the airflow approaches the strut, the pressure near the strut first decreases and then increases, attributable to the choking effect induced by the strut itself. Owing to the thermal blockage created by high-temperature gas at the cavity's end, the airflow velocity increases, causing the combustor's pressure to gradually decrease after passing through the cavity. The static pressure exhibits minimal differences between the two models, necessitating further analysis of the internal flow field for comparison. To investigate the turbulent combustion mode within the combustion chamber, we further analyze the flow field information in the strut and cavity regions. Figure <ref> displays the average (a) and the instantaneous (b) flame index flow field of the strut view, respectively. From the instantaneous flow field, the flame index distribution exhibits a complex fragmented state. The jet flow at the rocket nozzle outlet mainly involves premixed combustion, while strong non-premixed combustion is primarily observed near the strut. As the reaction mixing layer thickens and the mixing between the strut and cavity intensifies downstream, the combustion gradually demonstrates strong premixed characteristics. Consequently, adopting a non-premixed/partially premixed dynamic combustion model becomes essential for a combustion chamber with a strut. To analyze the effect of the proposed method in the premixed region, Figure <ref> displays the α distribution in the strut view, where (a) represents the average α flow field and (b) represents the instantaneous flow field at the end of the fifth through flow. From the figure, it is evident that the α values are primarily distributed at the tail of the jet flow at the rocket nozzle outlet and on both sides of the strut, predominantly governed by the flow of the wake. Although the flow field leans towards a strong premixed state at the tail of the combustion chamber, the α value remains small, as it is also associated with the process variables of the chemical reaction and their variance. In the central side of the tail of the combustion chamber, where combustion is approaching completion, the value of α is still not pronounced. 
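To make the evaluation of α and of the blended source term concrete before comparing heat release rates, the following cell-wise Python sketch implements the flame-index, sub-grid-variance, α and κ expressions of the previous section as we read them. The model constant C = 0.1, the small guard values that avoid division by zero, and all cell quantities are illustrative assumptions and are not taken from the solver used in this study.

import numpy as np

def subgrid_variance(phi_tilde, grad_mag, delta, C=0.1):
    # Pierce-type model C * Delta^2 * |grad(phi)|^2, clipped to the admissible range [0, phi(1-phi)]
    return float(np.clip(C * delta**2 * grad_mag**2, 0.0, phi_tilde * (1.0 - phi_tilde)))

def dynamic_source(omega_dot, grad_Yf, grad_Yo, Z, c, grad_Z_mag, grad_c_mag,
                   tau_c, tau_mix, delta, C=0.1):
    # flame index in [-1, 1]; the 1e-30 guard is an assumption for cells where a gradient vanishes
    FI = float(np.dot(grad_Yf, grad_Yo) /
               (np.linalg.norm(grad_Yf) * np.linalg.norm(grad_Yo) + 1e-30))
    Zvar = subgrid_variance(Z, grad_Z_mag, delta, C)
    cvar = subgrid_variance(c, grad_c_mag, delta, C)
    alpha = (0.5 * (1.0 - FI) * (1.0 - Zvar / max(Z * (1.0 - Z), 1e-30)) +
             0.5 * (1.0 + FI) * (1.0 - cvar / max(c * (1.0 - c), 1e-30)))
    kappa = tau_c / (tau_c + tau_mix)
    # blended filtered source term: alpha*(1-kappa)*wdot + kappa*wdot
    return (alpha + (1.0 - alpha) * kappa) * omega_dot

# purely illustrative cell values
print(dynamic_source(omega_dot=1.0e8,
                     grad_Yf=np.array([1.0, 0.0, 0.0]), grad_Yo=np.array([-0.8, 0.1, 0.0]),
                     Z=0.3, c=0.5, grad_Z_mag=800.0, grad_c_mag=350.0,
                     tau_c=1.0e-5, tau_mix=4.0e-5, delta=5.0e-4))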
The specific differences in heat release rate between the new model and the dynamic model are compared to analyze the contribution of premixed combustion to the overall heat release. Noting that Eq.<ref> shows that the dynamic model additionally considers the chemical reaction source term under the premixing state, Figure <ref> shows the heat release rate in the strut view, where (a) shows the heat release rate of the original PaSR model, and (b) shows the heat release rate result of the dynamic model. It can be seen that for this combustion chamber, there is chemical reaction throughout the entire flow channel, and the dominant combustion region is located behind the strut. The estimate of the heat release rate of the new model near the strut is higher than that of the original PaSR model, which is consistent with the analysis result of the α distribution in Figure <ref>. The maximum heat release rate in the wake region increased from 1.2e10W to 1.6e10W. Therefore, the premixed chemical reacting source term predicted in this case is quite significant compared with the original PaSR. Similar effort is made to examine the turbulent combustion mode in the cavity region and assess the improvements brought about by the new model on relevant parameters, reaffirming the need for dynamic modeling. Figure <ref> displays the flame index distribution in the cavity view, with (a) representing the average flame index field and (b) illustrating the instantaneous flame index field results. The instantaneous flow field reveals that the area between the strut and cavity maintains complex premixed and non-premixed structural characteristics. However, based on the averaged flow field, the region where the strut is situated primarily demonstrates strong non-premixed features, while the region in the cavity predominantly exhibits strong premixed traits. As the chemical reaction layer thickens downstream, the downstream region increasingly showcases prominent premixed characteristics. While the premixed characteristics at the cavity are more pronounced than those at the strut, the α distribution in the cavity is not as prominent as that at the strut, as illustrated in Figure <ref>. This is because, despite the larger flame index in this region, the reaction has already been much more complete at the strut. Consequently, the airflow entering the cavity exhibits a small variance in the chemical reaction process variable, resulting in a smaller α value, and the chemical reaction source term generated by premixing is not significant. § CONCLUSIONS In this study, we develop a partially premixed dynamic turbulence combustion model under the large eddy simulation framework and implement the model in the OpenFOAM code. The model is validated through simulations of a typical RBCC engine configuration, and the simulation results are compared with experimental data for static pressure, verifying the reliability of the simulation. Through analysis of the model, we obtain the following main conclusions: * The RBCC engine model operates under a complex premixed and non-premixed combustion mode, which can be divided into three combustion zones. The first part is the reaction mixing layer formed by the high-temperature rocket exhaust jet and incoming air, which generates a large amount of heat and is characterized by premixed combustion. 
The second part is the intense mixing and combustion of the kerosene injected from the fuel strut with the reaction mixing layer formed by the rocket exhaust jet and the incoming air, which releases a large amount of heat over a short distance and is characterized by non-premixed combustion. The third part is the cavity region, where a small amount of kerosene burns continuously and releases a modest amount of heat into the downstream airflow, playing an auxiliary role in stabilizing the flame; it is characterized by premixed combustion. * The new combustion model can dynamically capture the premixed combustion mode. The model reduces to the PaSR model under completely non-premixed conditions and to the homogeneous reactor model under completely premixed, homogeneous conditions. For the RBCC engine simulation case, the new combustion model mainly captures premixed combustion in the strut and wake regions, and its prediction of the maximum heat release rate in the wake region is 30% higher than that of the original PaSR model.
http://arxiv.org/abs/2306.02464v1
20230604204342
Graviton Mass Bounds in Very Special Relativity from Binary Pulsar's Gravitational Waves
[ "Alessandro Santoni", "Jorge Alfaro", "Alex Soto" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2306.02418v1
20230604175020
ContraBAR: Contrastive Bayes-Adaptive Deep RL
[ "Era Choshen", "Aviv Tamar" ]
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
[ ContraBAR: Contrastive Bayes-Adaptive Deep RL Era Choshentechnion Aviv Tamartechnion technionTechnion, Haifa, Israel Era [email protected] 0.3in ] In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal policy – the optimal policy when facing an unknown task that is sampled from some known task distribution. Previous approaches tackled this problem by inferring a belief over task parameters, using variational inference methods. Motivated by recent successes of contrastive learning approaches in RL, such as contrastive predictive coding (CPC), we investigate whether contrastive methods can be used for learning Bayes-optimal behavior. We begin by proving that representations learned by CPC are indeed sufficient for Bayes optimality. Based on this observation, we propose a simple meta RL algorithm that uses CPC in lieu of variational belief inference. Our method, ContraBAR, achieves comparable performance to state-of-the-art in domains with state-based observation and circumvents the computational toll of future observation reconstruction, enabling learning in domains with image-based observations. It can also be combined with image augmentations for domain randomization and used seamlessly in both online and offline meta RL settings. § INTRODUCTION In meta reinforcement learning (meta RL), an agent learns from a set of training tasks how to quickly solve a new task, sampled from a similar distribution as the training set <cit.>. A formal setting for meta RL is based on the Bayesian RL formulation, where a task corresponds to a particular Markov decision process (MDP), and there exists some prior distribution over MDPs <cit.>. Under this setting, the optimal meta RL policy is well defined, and is often referred to as a Bayes-optimal policy <cit.>. In contrast to the single MDP setting, where an optimal policy can be Markovian – taking as input the current state and outputting the next action, the Bayes-optimal policy must take as input the whole history of past states, actions, and rewards, or some sufficient statistic of it <cit.>. A popular sufficient statistic is the belief – the posterior probability of the MDP parameters given the observed history. For small MDPs, the belief may be inferred by directly applying Bayes rule, and approximate dynamic programming can be used to calculate an approximately Bayes-optimal policy <cit.>. However, this approach quickly becomes intractable for large or continuous MDPs. Recently, several studies proposed to scale up belief inference using deep learning, where the key idea is to leverage a variational autoencoder (VAE, ) formulation of the problem, in which the posterior is approximated using a recurrent neural network <cit.>. While this approach has demonstrated impressive results on continuous control benchmarks <cit.>, it also has some limitations. Training a VAE is based on a reconstruction loss, in this case, predicting the future observations given the current history, which can be difficult to optimize for visually rich observations such as images. Furthermore, variational algorithms such as VariBAD <cit.> reconstruct entire trajectories, restricting application to image-based domains due to memory limitations. As an alternative to VAEs, contrastive learning has shown remarkable success in learning representations for various domains, including image recognition and speech processing <cit.>. 
Rather than using a reconstruction loss, these approaches learn features that discriminate between similar observations and dissimilar ones, using a contrastive loss such as the InfoNCE in contrastive predictive coding (CPC, ). Indeed, several recent studies showed that contrastive learning can learn useful representations for image based RL <cit.>, outperforming representations learned using VAEs. Furthermore, <cit.> showed empirically that in partially observed MDPs, representations learned using CPC <cit.> are correlated with the belief. In this work, we further investigate contrastive learning for meta RL, henceforth termed CL meta RL, and aim to establish it as a principled and advantageous alternative to the variational approach. Our first contribution is a proof that, given certain assumptions on data collection and the optimization process of CPC, representations learned using a variant of CPC are indeed a sufficient statistic for control, and therefore suffice as input for a Bayes-optimal policy. Our second contribution is a bound on the suboptimality of a policy that uses an approximate sufficient statistic, learned by CPC, in an iterative policy improvement scheme where policies between iterations are constrained to be similar. This result relaxes the assumptions on the optimization and data collection in the first proof. Building on this result, we propose a simple meta RL algorithm that uses a CPC based representation to learn a sufficient statistic. Our third contribution is an empirical evaluation of our method that exposes several advantages of the contrastive learning approach. In particular, we show that: [label=(*)] * For state-based observations, CL meta RL is on par with the state-of-the-art VariBAD <cit.> * For image-based observations, CL meta RL significantly outperforms the variational approach, and is competitive with RNN based methods <cit.> * In contrast to the variational approach, CL Meta RL is compatible with image augmentations and domain randomization. * Our method works well in the online and offline meta RL setting. Overall, our results establish CL meta RL as a versatile and competitive approach to meta RL. § BACKGROUND AND PROBLEM FORMULATION In this section we present our problem formulation and relevant background material. §.§ Meta RL and POMDPs We define a Markov Decision Process (MDP) <cit.> as a tuple ℳ=(𝒮,𝒜,𝒫, ), where 𝒮 is the state space, 𝒜 is the action space, 𝒫 is the transition kernel and is the reward function. In meta RL, we assume a distribution over tasks, where each task is an MDP ℳ_i=(𝒮,𝒜,𝒫_i,_i), where the state and action spaces are shared across tasks, and 𝒫_i,_i are task specific and drawn from a task distribution, which we denote 𝒟(𝒫,). At a given time t, we denote by (s_0, a_0,r_0, s_1,a_1,r_1,…,s_t)=h_t∈_t the current history, where _t is the space of all state-action-reward histories until time t. Our aim in meta RL is to find a policy π = {π_0, π_1,…}, where π_t:_t →𝒜, which maximizes the following objective: 𝔼_π[∑_t=0^∞γ^t r_t], where the expectation 𝔼_π is taken over the transitions s_t+1∼𝒫(·|s_t,a_t), the reward r_t = (s_t, a_t), the actions a_t ∼π(·|h_t) and the uncertainty over the MDP parameters 𝒫,∼𝒟(𝒫,). We assume a bounded reward r_t ∈ [-R_max, R_max], R_max > 0 with probability one. Meta RL is a special case of the more general Partially Observed Markov-Decision Process (POMDP), which is an extension of MDPs to partially observed states. 
In the POMDP for meta RL, the unobserved variables are 𝒫,, and they do not change over time. We define _t for POMDPs as above, except that states are replaced by observations according to the distribution o_t+1∼ U(o_t+1|s_t+1,a_t). As shown in <cit.>, the optimal policy for a POMDP can be calculated using backwards dynamic programming for every possible h_t ∈_t. However, as explained in <cit.> this method is computationally intractable in most cases as _t grows exponentially with t. §.§ Information States and BAMDPs Instead of the intractable space of histories, sufficient statistics can succinctly summarize all the necessary information for optimal control. One popular sufficient statistic is the posterior state distribution or belief P(s_t|h_t). Conditions for a function to be a sufficient statistic, also termed information state, were presented by <cit.> and are reiterated here for completeness: Let {_t}_t=1^T be a pre-specified collection of Banach spaces. A collection {: _t →_t }_t=1^T of history compression functions is called an information generator if the process {_t }_t=1^T satisfies the following properties, where h_t ∈_t, and (h_t)=_t ∈𝒵_t: P1 For any time t and for any h_t ∈_t, a_t ∈ we have: 𝔼[r_t | h_t, a_t] = 𝔼[r_t |_t=(h_t), a_t ]. P2 For any time t, and for any h_t ∈_t, a_t ∈, and any Borel subset B of Z_t+1 we have: P( B ∈_t+1 | h_t,a_t ) = P ( B ∈_t+1 | _t=(h_t),a_t ). Intuitively, information states compress the history without losing predictive power about the next reward, or the next information state. To solve a POMDP, one can define a Bayes-Adaptive MDP (BAMDP)– an MDP over the augmented state space of 𝒮×ℬ, where ℬ={𝒵_t }_t=1^T is the space of the information state. This idea was introduced by <cit.> for the belief. Here, we use the term BAMDP more generally, referring to any information state. The optimal policies for BAMDPs are termed Bayes-optimal and optimally trade-off between exploration and exploitation, which is essential for maximizing online return during learning. Unfortunately, in most cases computing the Bayes-optimal policy is intractable because the augmented space is continuous and high-dimensional. <cit.> proposed to approximate the Bayes-optimal policy by using deep neural networks to learn an information state (belief), and conditioning an RL agent on the learned augmented space; here we follow this approach. § RELATED WORK Our focus in this work is learning a Bayes-optimal policy for meta RL. We recapitulate the current approaches to meta RL with a focus on approaches that potentially yield Bayes-optimal policies. The methods in <cit.> learn neural network policies that can quickly be fine-tuned to new tasks at test time via gradient updates. These methods do not optimize for Bayes-optimal behavior, and typically exhibit significantly suboptimal test-time adaptation. A different approach is to learn an agent that directly infers the task at test time, and conditions the policy based on the inferred task. Typically, past interactions of the agent with the environment are aggregated to a latent representation of the task. <cit.> follow a posterior-sampling approach, which is not Bayes-optimal <cit.>; in this work we focus on methods that can achieve Bayes-optimality. <cit.> propose memory-based approaches, which <cit.> proves to approximate Bayes-optimal agents. <cit.> also approximate Bayes-optimal agents with a history-based representation, using a variational approach. 
<cit.> learn an approximately Bayes-optimal agent, where privileged information – a task descriptor – is used to learn a sufficient statistic. We explore an alternative approach that lies at the intersection of meta RL and contrastive learning. Different from memory-based methods such as RL^2 <cit.>, and similarly to VariBAD, we learn a history based embedding separately from the policy. However, unlike variational methods, we learn the task representation using contrastive learning. Contrastive learning has been used to learn representations for input to a meta RL policy. FOCAL <cit.> uses distance metric learning to learn a deterministic encoder of transition tuples to perform offline RL. They operate under the relatively restrictive assumption that each transition tuple (s, a, s', r) is uniquely identified by a task. The authors followed up with FOCAL++, in which batches of transition tuples (not necessarily from the same trajectory) are encoded to a representation that is optimized with MoCo <cit.>, a variant of CPC, alongside an intra-task attention mechanism meant to robustify task inference <cit.>. The MBML method in <cit.> proposes an offline meta RL method that uses the triplet loss to learn embeddings of batches of transition tuples from the same task, with the same probabilistic and permutation-invariant architecture of <cit.>. <cit.> propose embedding windows of transition tuples as probabilistic latent variables, where the windows are cropped from different trajectories. The embeddings are learned with MoCO <cit.> by contrasting them in probabilistic metric space, where positive pairs are transition windows that come from the same batch. The algorithm is presented as a general method to learn representations for context-based meta RL algorithms, but in practice all results are shown with PEARL <cit.>. In a similar line of work, <cit.> encode batches of transitions as a product of Gaussian factors and contrast the embeddings with MoCO <cit.>, with positive pairs being embedded transition batches from the same task, as opposed to the same trajectory as in <cit.>. As in <cit.>, results are shown with a posterior sampling meta RL algorithm. While we also investigate contrastive learning for meta RL, we make an important distinction: all of the works above embed transition tuples and not histories, and therefore cannot represent information states, and cannot obtain Bayes-optimal behavior. In contrast, in our work, we draw inspiration from <cit.>, who used a glass-box approach to empirically show that contrastive learning can be used to learn the belief in a POMDP. We cast this idea in the Bayesian-RL formalism, and show both theoretically and empirically, that contrastive learning can be used to learn Bayes-optimal meta RL policies. § METHOD In this section we show how to use contrastive learning to learn an information state representation of the history, and use it as input to an RL agent. We give a brief description of CPC <cit.> followed by our meta RL algorithm. We then prove that our method does indeed learn an information state. §.§ Contrastive Predictive Coding CPC <cit.> is a contrastive learning method that uses noise contrastive estimation <cit.> to discriminate between positive future observations o_t+k^+, where t is the current time step, and negative observations o_t+k^-. First, an encoder g generates an embedding for each observation in a sequence of observations from a trajectory τ until time t, {z_i=g(o_i)}_i=1^t. 
Second, an autoregressive model g_AR summarizes z_≤ t, the past t observations in latent space, and outputs a latent c_t. The model is trained to discriminate between future observations o_t+k^+ and K negative observations {o_t+k^-,i}_i=1^K given c_t. Given a set X={o_t+k^+,o_t+k^-,1,…,o_t+k^-,K} containing one positive future observation sampled according to P(o_t+k^+|c_t) and K negative observations sampled from a proposal distribution P(o_t+k^-), the InfoNCE loss is: ℒ_InfoNCE= -𝔼_X[logexp(f(c_t,o_t+k^+))/exp(f(c_t,o_t+k^+)) + ∑_i=1^Kexp(f(c_t,o_t+k^-,i))], where f is a learnable function that outputs a similarity score. The model components f, g, g_AR are learned by optimizing the loss ℒ_InfoNCE. §.§ ContraBAR Algorithm We will now introduce our CPC based meta RL algorithm, which is depicted in <ref>, and explain how CPC is used to learn a latent representation of the history. We begin by noting that we use the term observation throughout the text, in line with <cit.>, however in our case the meaning is state, reward and action o_t = {s_t,r_t-1,a_t-1} when talking about “observation history", and state and reward o_t+k={s_t+k,r_t+k-1} when talking about “future observations". We would like to learn an embedding of the observation history, c_t, that will contain relevant information for decision making. The CPC formulation seems like a natural algorithm to do this – for a given trajectory τ of length T collected from some unknown MDP ℳ, we use an embedding of its observation history until time t < T, c_t, to learn to discriminate between future observations from the trajectory τ and random observations from other trajectories τ_j≠τ. This means that c_t encodes relevant information for predicting the future system states, and consequently information regarding the MDP ℳ from which τ was collected. The CPC formulation described above is based on predicting future states in an uncontrolled system without rewards. We now modify it to learn a sufficient statistic for meta RL. We assume that data is collected at each training iteration m by some data collection policy π_m and added to a replay buffer 𝒟={τ_i}_i=1^N containing trajectories from previous data collection policies {π_1,…,π_m-1}; we note that length of the trajectories may vary. At each learning iteration, a batch of M trajectories is sampled, and for each trajectory and time t the negative observations are sampled from the remaining M-1 trajectories in the batch. As in CPC, we define c_t to be a function of the observation history until time t, but we add to f as input the future k-1 actions, as in a controlled system the future observation o_t+k=(s_t+k,r_t+k-1) depends on the controls. Our f can therefore now be written as f(c_t, o_t+k,a_t:t+k-1). We implement this modification as in <cit.> by means of an additional autoregressive component, a GRU g_action that receives actions as input and takes c_t as its initial hidden state. Given the adjustments described above, each batch B used as input to our algorithm contains the following: [label=(*)] * The observation history until time t in some trajectory τ * Future observations from time t+k * Observations o_t+k^- from the remaining M-1 trajectories sampled from 𝒟 . 
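Before stating the loss formally, the batch construction just described can be sketched as follows (hypothetical Python; the trajectory interface obs/act/rew/slice_until is an assumption, not the released implementation): each anchor history is paired with its own future observation as the positive, and with same-offset observations from the other M-1 trajectories as negatives.

import random

def build_cpc_batch(buffer, M, k):
    # Sample M trajectories; for each anchor (trajectory, t) return the history,
    # the positive o_{t+k} = (s_{t+k}, r_{t+k-1}), the future actions a_{t:t+k-1},
    # and M-1 negatives taken from the other trajectories in the batch.
    trajs = random.sample(buffer, M)
    batch = []
    for i, traj in enumerate(trajs):
        t = random.randrange(len(traj.obs) - k)
        history = traj.slice_until(t)                     # (s_0, a_0, r_0, ..., s_t)
        positive = (traj.obs[t + k], traj.rew[t + k - 1])
        actions = traj.act[t:t + k]                       # a_t, ..., a_{t+k-1}
        negatives = []
        for j, other in enumerate(trajs):
            if j == i:
                continue
            u = random.randrange(k, len(other.obs))
            negatives.append((other.obs[u], other.rew[u - 1]))
        batch.append((history, actions, positive, negatives))
    return batch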
We rewrite the InfoNCE loss for the meta RL setting explicitly. For ease of notation we mark f(c_t, o_t+k^+,a_t:t+k-1) as f^+ and f(c_t, o_t+k^-,i,a_t:t+k-1) as f^-,i: ℒ_M= -𝔼_B[log( exp(f^+) / ( exp(f^+) + ∑_i=1^Kexp(f^-,i) ) )], where the expectation is over the batches of positive and negative observations sampled from 𝒟, as described above. §.§ Learning Information States with CPC We now show that integrating contrastive learning with meta RL is a fundamentally sound idea. We shall prove that our algorithm presented in <ref> learns a representation of the history that is an information state, by showing that the latent encoding satisfies the properties of an information state P1, P2 as defined by <cit.> and reiterated above in <ref>. We first define the notion of a “possible” history, which we use in Assumption <ref>. Let P_M denote a probability distribution over MDPs and let P_m,π(h_t) be the probability of observing history h_t under policy π in MDP m. We say that h_t is a possible history if there exists a policy π and an MDP m such that P_M(m) > 0 and P_m,π(h_t) > 0. Next, we make the following assumption, which states that the policy collecting the data covers the state, reward and action space. Let the length of the longest possible history be T. Let h_t, where t ≤ T, be a history and let P_𝒟(h_t) denote the probability of observing a history in the data 𝒟. If h_t is a possible history, then P_𝒟(h_t)>0. Assumption <ref> is necessary to claim that the learned CPC representation is a sufficient statistic for every possible history. In Section <ref> we discuss a relaxation of this assumption, using approximate information states. Let Assumption <ref> hold. Let g^*,g^*_AR,f^* jointly minimize ℒ_M(g,g_AR,f). Then the context latent representation c_t=g^*_AR(z_≤ t) satisfies conditions P1, P2 and is therefore an information state. The full proof is provided in <ref>; we next provide a sketch. The main challenge in our proof lies in proving the following equality: P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t). Given the equality in <ref>, proving P1,P2 is relatively straightforward. We prove <ref> by expanding the proof in <cit.>, which shows that the InfoNCE loss upper bounds the negative mutual information between o_t+k^+ and c_t (in the CPC setting). In our case, we show that ℒ_M≥log(M-1) - I(s_t+1,r_t;c_t|a_t), where I(·; ·) denotes mutual information. Thus, by minimizing the loss in <ref>, we maximize the mutual information I(s_t+1,r_t;c_t|a_t). Due to the Markov property of the process, the mutual information in (<ref>) cannot be greater than I(s_t+1,r_t;h_t|a_t), which leads to Equation <ref>. §.§ Learning Approximate Information States with CPC We next investigate a more practical setting, where there may be errors in the CPC learning, and the data does not necessarily satisfy Assumption <ref>. We aim to relate the CPC error to a bound on the suboptimality of the resulting policy. In this section, we consider an iterative policy improvement algorithm with a similarity constraint on consecutive policies, similar to the PPO algorithm we use in practice <cit.>. We shall bound the suboptimality of policy improvement, when data for training CPC is collected using the previous policy, denoted π_k. In light of Eq. <ref>, we assume the following error due to an imperfect CPC representation: There exists an ϵ such that for every t≤ T, I(s_t+1,r_t;c_t|a_t) ≥ I(s_t+1,r_t;h_t|a_t) - ϵ, where the histories are distributed according to policy π_k. The next theorem provides our main result.
Let Assumption <ref> hold for some representation c_t. Consider the distance function between two distributions D(P_1(x),P_2(x)) = max_x | P_1(x)/P_2(x) |. We let r̂(c_t,a_t)=𝔼[r_t|c_t,a_t] and P̂(c'|c_t,a_t)=𝔼[1(c_t+1=c')|c_t,a_t] denote an approximate reward and transition kernel, respectively. Define the value functions Q̂_t(c_t, a_t) = r̂(c_t, a_t) + ∑_c_t+1P̂(c_t+1|c_t,a_t) V̂_t+1(c_t+1) V̂_t(c_t) = max_π: D(π(c_t), π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a), 2.2em for t≤ T, and V̂_T(c_T) = 0, and the approximate optimal policy π̂(c_t)∈_π: D(π, π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a). Let the optimal policy π^*(h_t) be defined similarly, but with h_t replacing c_t in (<ref>) and (<ref>). Then we have that 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ ϵ^1/3 R_max T^2 (√(2) + 4β^T). The dynamic programming recurrence in Equation (<ref>) defines the optimal policy that is conditioned on c_t (and not h_t), and is restricted to be β-similar to the previous policy π_k. The theorem bounds the loss in performance of such a policy compared to a policy that is conditioned on the full history (yet still restricted to be β-similar to π_k). The proof of Theorem <ref> builds on the idea of an approximate information state <cit.> and is detailed in Appendix <ref>. §.§ ContraBAR Architecture We now describe several design choices in our ContraBAR implementation. History Embedding We now describe the specific architecture used to implement our algorithm, also depicted in <ref>. We use a non-linear encoder to embed a history of actions, rewards and states and run it through a GRU to generate the hidden state for the current time-step c_t. The latent c_t is then used to initialize the action-gru g_action, which is fed future actions as input – the resulting hidden state is then concatenated with either a positive observation-reward pair, or a negative one and used as input to a projection head that outputs a score used in <ref>. We note that given a random sampling of negative observations, the probability of sampling a positive and negative observation that share the same state is low. Consequently, for environments where s_t+k can be estimated via s_t,a_t,…, a_t+k without h_t, c_t need only encode information regarding s_t to allow the action-gru to learn to distinguish between positive and negative observations. This renders c_t uninformative about the reward and transition functions and thus unhelpful for optimal control. An example of this is a set of deterministic environments that differ only in reward functions. The action-gru can learn to predict s_t+k via s_t,a_t,…, a_t+k, only requiring c_t to encode information regarding s_t and not the reward function. One way to circumvent this is hard negative mining, i.e using negative samples that are difficult to distinguish from the positive ones. Another solution, relevant for the case of varying reward functions, is to generate a negative observation by taking the state and action from the positive observation and recalculating the reward with a reward function sampled from the prior. In practice, we found that a simple alternative is to omit the action-gru. This prevents the easy estimation of s_t+k and requires c_t to encode information regarding the reward and transition function. We found this worked well in practice for the environments we ran experiments on, including those with varying transitions. We expand on these considerations in Appendix <ref>. 
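A minimal PyTorch-style sketch of this simplified variant (without the action-GRU) follows; layer sizes, encoders, and tensor layouts are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HistoryEncoder(nn.Module):
    # Embeds (s_t, a_{t-1}, r_{t-1}) tuples and aggregates them with a GRU into c_t.
    def __init__(self, s_dim, a_dim, hidden=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(s_dim + a_dim + 1, hidden), nn.ELU())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, states, actions, rewards):
        # states: (B, T, s_dim), actions: (B, T, a_dim), rewards: (B, T, 1)
        x = self.embed(torch.cat([states, actions, rewards], dim=-1))
        out, _ = self.gru(x)
        return out[:, -1]                                  # c_t, shape (B, hidden)

class ScoreHead(nn.Module):
    # f(c_t, o_{t+k}): scalar similarity score for a candidate (state, reward) observation.
    def __init__(self, hidden, o_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden + o_dim, hidden // 2), nn.ELU(),
                                 nn.Linear(hidden // 2, 1))

    def forward(self, c, o):
        return self.net(torch.cat([c, o], dim=-1)).squeeze(-1)

def info_nce_loss(score_head, c, o_pos, o_negs):
    # c: (B, H), o_pos: (B, D), o_negs: (B, K, D); the positive sits at column 0.
    B, K, D = o_negs.shape
    pos = score_head(c, o_pos).unsqueeze(1)                                   # (B, 1)
    c_rep = c.unsqueeze(1).expand(B, K, c.size(-1)).reshape(B * K, -1)
    neg = score_head(c_rep, o_negs.reshape(B * K, D)).view(B, K)              # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(B, dtype=torch.long, device=c.device)
    return F.cross_entropy(logits, labels)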
RL Policy The history embedding portion of the algorithm described above is learned separately from the policy and can be done online or offline. The policy, which can be trained with an RL algorithm of the user's choice, is now conditioned on the current state s_t as well as c_t – the learned embedding of h_t. We chose to use PPO <cit.> for the online experiments and SAC <cit.> for the offline experiment – in line with VariBAD and BOReL <cit.>. § EXPERIMENTS In our experiments, we shall demonstrate that [label=(*)] * ContraBAR learns approximately Bayes-optimal policies * ContraBAR is on par with SOTA for environments with state inputs * ContraBAR scales to image-based environments * Augmentations can be naturally incorporated into ContraBAR and * ContraBAR can work in the offline setting . We compare ContraBAR to state-of-the-art approximately Bayes-optimal meta RL methods. In the online setting, we compare against VariBAD <cit.>, RL^2 <cit.>, and the recent modification of RL^2 by <cit.> which we refer to as RMF (recurrent model-free). In the offline setting, we compare with BOReL <cit.>. <cit.> and <cit.> already outperform posterior sampling based methods such as PEARL <cit.>, therefore we do not include such methods in our comparison. Finally, we note that using VariBAD <cit.> with image-based inputs is currently computationally infeasible due to memory constraints, and as such we did not use it as a baseline – we explain this issue further in <ref>. Other variational approaches, which require a reconstruction of the future observations, are subject to similar memory constraints. Instead, we compared our algorithm against RL^2 <cit.>, which works with images. We evaluate performance similarly to <cit.>, by evaluating per episode return for 5 consecutive episodes with the exception of the offline setting where we adapted our evaluation to that of BOReL. §.§ Qualitative Near Bayes-Optimal Behavior We begin with a qualitative demonstration that ContraBAR can learn near Bayes-optimal policies. As calculating the exact Bayes-optimal policy is mostly intractable, we adopt the approach of <cit.>: for deterministic domains with a single sparse reward, the Bayes-optimal solution is essentially to search all possible reward locations so as to maximally reduce uncertainty, and then go directly to the goal in subsequent episodes. Thus, we can identify whether a policy is approximately Bayes-optimal by inspecting its trajectory. Figure <ref> displays rollouts from a trained policy in the Gridworld and Semi-Circle domains, demonstrating near Bayes-optimal behavior similar to VariBAD <cit.>. §.§ Results for Problems with State Observations We compare ContraBAR with VariBAD and RMF, the current state-of-the-art on MuJoCo locomotion tasks <cit.>, commonly used in meta RL literature. We use the environments considered in <cit.>, namely the Ant-Dir, AntGoal, HalfCheetahDir, HalfCheetahVel, Humanoid and Walker environments. <ref> shows competitive performance with the current SOTA on all domains. Note that rewards in these environments are dense, so in principle, the agent only needs a few exploratory actions to infer the task by observing the rewards it receives. Indeed, we see that ContraBAR is able to quickly adapt within the first episode, with similar performance in subsequent episodes. 
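Returning to the RL Policy paragraph above, a minimal sketch of the belief-conditioned actor (hypothetical PyTorch; the actual PPO and SAC heads live in the authors' repository) simply consumes the concatenation of the environment state and the CPC-trained history embedding.

import torch
import torch.nn as nn

class BeliefConditionedPolicy(nn.Module):
    # Actor over the BAMDP-style augmented input (s_t, c_t); c_t is produced by the
    # separately trained CPC encoder and is not updated by the RL loss here.
    def __init__(self, s_dim, c_dim, a_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + c_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, a_dim))

    def forward(self, s, c):
        return self.net(torch.cat([s, c.detach()], dim=-1))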
§.§ Scaling Belief to Image-Based Inputs We show that ContraBAR can scale to image domains, which are computationally expensive, by running our algorithm on three image-based domains with varying levels of difficulty and sources of uncertainty: [label=(*)] * Reacher-Image – a two-link robot reaching an unseen target located somewhere on the diagonal of a rectangle, with sparse rewards * Panda Reacher – a Franka Panda robot tasked with placing the end effector at a goal on a 2d semi-circle, where the vertical position of the goal (z coordinate) is fixed; adapted from the Reacher task in Panda Gym <cit.> * Panda Wind – The same environment as Panda Reacher, except that the transitions are perturbed with Gaussian noise sampled separately for each task. For a more detailed description of each environment see <ref>. Image-Based Reward: For our image-based experiments, we found that learning in image-based domains with sparse reward was difficult when the reward was embedded separately (as in the state observation domains), and concatenated with the image embedding. We hypothesized that this might be an issue of differing scales between the scalar rewards and image inputs, but we observed that standard normalization techniques such as layer norm <cit.> did not help. Instead, we opted for a different approach that embeds the reward as an explicit part of the image. To implement this idea, we exploited the fact that in all our domains, the reward is sparse and binary, and we add a colored strip to a fixed place in the image when non-zero reward is received. Extending this idea to non-binary reward is possible, for example, by controlling the color of the strip. Our results are displayed in <ref>. For the Reacher environment, ContraBAR is slightly outperformed by RL^2, whereas in Panda Reacher and Panda Reacher Wind ContraBAR outperform RL^2 by a large margin. Notice that in contrast to the dense reward domains of Section <ref>, in these sparse reward tasks the agent gains by exploring for the goal in the first iteration. Evidently, the plots show significantly higher reward in the second episode onward. Glass-box Approach To further validate that our algorithm learns a sound belief representation, we follow a glass-box approach similar to that of <cit.>. First, we used ContraBAR to learn an information state for the Panda Reacher environment. Second, we use the trained agent to create a dataset of trajectories, including the agent's belief at each time step of every trajectory. We then trained an MLP-based binary classifier, which takes (x,y) and the information state c_t as input and predicts whether the goal in the trajectory is indeed (x,y). In <ref> we see the visualization of the classifier's prediction at different points along the trajectory; We see that the predictions coincide with the belief we expect the agent to hold at each step, thus validating the soundness of our belief representation. §.§ ContraBAR with Domain Randomization Despite the high fidelity of modern simulators, when deployed in the real-world, image-based algorithms learned in simulation can only be accurate up to the differences between simulation and reality – the sim-to-real gap. This motivates us to learn a belief representation that is robust to such differences, and in the following we will show that our algorithm can indeed learn such an information state. Robustification to irrelevant visual properties via random modifications is termed domain randomization <cit.>. 
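Two image-space operations used in this section can be sketched compactly (hypothetical NumPy code; the strip location and the exact form of the channel shift are assumptions): embedding the binary sparse reward as a painted strip in the observation, and a random RGB channel shift of the kind used for domain randomization as described next.

import numpy as np

def embed_reward_in_image(image, reward, strip_rows=4, value=255):
    # image: (H, W, 3) uint8; paint the top rows white when the sparse binary reward fires.
    img = image.copy()
    if reward > 0:
        img[:strip_rows] = value
    return img

def random_channel_shift(image, max_shift=40, rng=np.random):
    # One interpretation of "randomly shifting the RGB channels": add a per-channel offset.
    shift = rng.randint(-max_shift, max_shift + 1, size=(1, 1, 3))
    return np.clip(image.astype(np.int16) + shift, 0, 255).astype(np.uint8)

Non-binary rewards could analogously be encoded by modulating the strip's colour, as noted above.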
We employ domain randomization in a similar fashion to <cit.> wherein we modify the past and future observations (without the rewards) in the trajectories with a mapping 𝒯: 𝒮→𝒮 that randomly shifts the RGB channels of the images. These modified trajectories are used to learn the history embedding c_t, with the hope that it will be invariant to different color schemes in the environment. We show the strength of such modifications by training two agents with ContraBAR on the Panda Reacher environment – one receives images modified by 𝒯 and the other does not. We then evaluate each agent's performance on different color schemes, which are kept static for evaluation. The results as well as the environments can be seen in <ref>. Note that while the belief may be robustified separately with augmentations, the policy must be robust to such changes as well. To do so, we used the data-regularized actor-critic method from <cit.> where the policy π_θ and value function V_ϕ are regularized via two additional loss terms, G_π =KL [π_θ(a |s) |π_θ(a | T(s)) ], G_V =(V_ϕ(s)-V_ϕ(T(s)))^2, where T: 𝒮→𝒮 randomly modifies the image. We emphasize that domain randomization, as applied here, is not naturally compatible with variational belief inference methods. The reason is that when the loss targets reconstruction of the modified observation, the learned embedding cannot be trained to be invariant to the modification 𝒯. §.§ Offline ContraBAR We show that as in VariBAD <cit.>, the disentanglement of belief and control allows us to reframe the algorithm within the context of offline meta RL, as was done in <cit.>. First, we use ContraBAR to learn a history embedding c_t from an offline dataset. Note that no specific change is required to our algorithm – we simply treat the offline dataset as the replay buffer for ContraBAR. Second, we perform state relabeling as described in <cit.>: for each trajectory τ_i of length T, i.e (s_0^i,a_o^i,r_0^i,…,s_T^i), we embed each partial t-length history h_t as c_t, and transform each s_t^i to s_t^+,i=(s_t^i,c_t^i) as in the BAMDP formulation. We then learn a policy with SAC <cit.> on the transformed dataset. We show competitive results with BOReL <cit.> in <ref>. Unfortunately we were not able to find an offline adaptation of RMF to use as an additional baseline. § CONCLUSIONS We proved that ContraBAR learns a representation that is a sufficient static of the history. Following on this, we presented what is to the best of our knowledge the first approximately Bayes-optimal CL meta RL algorithm. We demonstrated results competitive with previous approaches on several challenging state-input domains. Furthermore, by using contrastive learning we were able to scale meta-RL to image-based domains; We displayed results on par with RL^2 which was also able to scale to image inputs. Finally, we showed that our method is naturally amenable to domain randomization, which may be important for applications such as robotics. § ACKNOWLEDGEMENTS We thank Tom Jurgenson, Ev Zisselman, Orr Krupnik and Gal Avineri for useful discussions and feedback, and Luisa Zintgraf for invaluable help with reproducing the graphs from the VariBAD paper. This work received funding from the European Union (ERC, Bayes-RL, Project Number 101041250). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. 
icml2023 § THEOREM PROOFS <ref> Let Assumption <ref> hold. Let g^*,g^*_AR,f^* jointly minimize ℒ_M(g,g_AR,f). Then the context latent representation c_t=g^*_AR(z_≤ t) satisfies conditions P1, P2 and is therefore an information state. theorem-1 We begin our proof by presenting the causal model for the variant of CPC used by ContraBAR, shown in <ref>. From the causal model, we can infer that P(c_t+1|c_t,s_t+1,r_t,a_t) = P(c_t+1|c_t,s_t+1,r_t,a_t, h_t) and P(s_t+1,r_t|h_t, a_t) = P(s_t+1,r_t|h_t, a_t, c_t). We shall also assume that c_t is a deterministic function of h_t, and therefore P(c_t+1|c_t,s_t+1,r_t,a_t, h_t) = P(c_t+1|s_t+1,r_t,a_t, h_t), and from the above, we have P(c_t+1|s_t+1,r_t, a_t, h_t) = P(c_t+1|s_t+1,r_t,a_t, c_t). We now prove a mutual information bound similar to that of <cit.>, we show that by optimizing the meta RL InfoNCE loss defined in <ref> we maximize the mutual information between c_t and s_t+1,r_t given a_t. We begin with a lemma similar to that of Section 2.3 in <cit.>: Let c_t be a function of h_t, i.e c_t=σ_t(h_t). s_t+1,r_t,c_t,a_t is a possible sufficient statistic transition if h_t, a_t, r_t, s_t+1 is a possible history as in Definition <ref> and c_t=σ_t(h_t). Let Assumption <ref> and the loss in <ref> be jointly minimized by f, g, g_AR, then for any possible sufficient statistic transition s_t+1,r_t,c_t,a_t as in Definition <ref>, where c_t=g_AR(h_t), we have that f(s_t+1,r_t,c_t,a_t) ∝P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t) . The loss in Eq. <ref> is the categorical cross-entropy of classifying the positive example correctly, with f/∑_B f being the prediction of the model. We denote the j-th example in the batch B as s_j,r_j, where the subscript does not refer to time here. As in <cit.>, the optimal probability for this loss is P(d=i|B,c_t,a_t) (with [d=i] indicating the i-th example in B is the positive example) and can be derived as follows: P(d=i|B,c_t,a_t) =P(s_i,r_i|c_t,a_t) Π_l≠ i P(s_l,r_l|a_t)/∑_j=1^MP(s_j,r_j|c_t,a_t)Π_l≠ j P(s_l,r_l|a_t) =P(s_i,r_i|c_t,a_t)/P(s_i,r_i|a_t)/∑_j=1^MP(s_j,r_j|c_t,a_t)/P(s_j,r_j|a_t). Eq. <ref> means that for any s_t+1,r_t,c_t,a_t that are part of a batch B in the data, we have that f(s_t+1,r_t,c_t,a_t) ∝P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t). From Assumption <ref>, for any sufficient statistic transition tuple s_t+1,r_t,c_t,a_t there exists a batch it is a part of. Let Assumption <ref>, and let the loss in <ref> be jointly minimized by f,g,g_AR. Then I(s_t+1,r_t;c_t|a_t) ≥log(M-1) - ℒ_opt. Given the optimal value shown in Lemma <ref> for f(s_t+1, r_t, c_t, a_t), by inserting back into the loss we get: ℒ_opt=-𝔼log[P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t)/P(s_t+1,r_t|a_t) + ∑_(s',r') ∈{o_j^-}_j=1^M-1P(s',r'|c_t,a_t)/P(s',r'|a_t)] =𝔼log [1 + (P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t)∑_(s',r') ∈{o_j^-}_j=1^M-1P(s',r'|c_t,a_t)/P(s',r'|a_t) ] ≈𝔼log [1 + (P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t)(M-1) ·𝔼_𝒟(s',r'|a_t)P(s',r'|c_t,a_t)/P(s',r'|a_t) ] =𝔼log [1 + (P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t)(M-1) ] ≥𝔼log [ P(s_t+1,r_t|a_t)/P(s_t+1,r_t|c_t,a_t)(M-1) ] =-I(s_t+1,r_t;c_t|a_t)+log(M-1). We therefore get that I(s_t+1,r_t;c_t|a_t)≥log(M-1) - ℒ_opt. We conclude that the objective maximizes the mutual information between c_t and s_t+1,r_t given a_t. Let Assumption <ref>, and let the loss in <ref> be jointly minimized by f,g,g_AR, then I(c_t; s_t+1,r_t|a_t) = I(h_t; s_t+1,r_t|a_t) where I(·; ·) denotes mutual information. 
Since s_t+1,r_t depend only on h_t (conditioned on a_t), and since c_t is a deterministic function of h_t, I(s_t+1,r_t;c_t|a_t) cannot be greater than I(s_t+1,r_t;h_t|a_t). From Lemma <ref> , we therefore have that I(c_t; s_t+1,r_t|a_t) = I(h_t; s_t+1,r_t|a_t) . Note that Corollary <ref> states that given the causal model above, c_t is maximally informative about s_t+1,r_t (conditioned on a_t). We use this result to prove a short lemma that will help us prove that that c_t is an information state. Let the assumptions of Corollary <ref> hold, then for every a, P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t). We start with a result similar to the data processing inequality. Consider I(s_t+1,r_t;h_t, c_t|a_t). We have that I(s_t+1,r_t;h_t, c_t|a_t) = I(s_t+1,r_t;c_t| h_t,a_t) + I(s_t+1,r_t ;h_t|a_t), and on the other hand, I(s_t+1,r_t;h_t, c_t|a_t) = I(s_t+1,r_t;h_t| c_t,a_t) + I(s_t+1,r_t;c_t|a_t). From the causal graph above, we have that I(s_t+1,r_t;c_t| h_t,a_t) = 0. Therefore, from Eq. (<ref>) and (<ref>) we have I(s_t+1,r_t;h_t|a_t) = I(s_t+1,r_t;c_t|a_t) + I(s_t+1,r_t;h_t| c_t,a_t) ≥ I(s_t+1,r_t;c_t|a_t) with equality only if I(s_t+1,r_t;h_t| c_t,a_t)=0, since the mutual information is positive. From Corollary <ref>, we therefore must have I(s_t+1,r_t;h_t| c_t,a_t)=0. This implies that s_t+1,r_t and h_t are independent conditioned on c_t,a_t <cit.>, and therefore P(s_t+1,r_t|h_t,a_t) = P(s_t+1,r_t|c_t,a_t) . Let Assumption <ref>, and let the loss in <ref> be jointly minimized by f,g,g_AR, then c_t satisfies P1, i.e., 𝔼[r_t|h_t,a_t] = 𝔼[r_t|c_t,a_t]. 𝔼[r_t|h_t,a_t] = ∫ r_t ∫ P(s_t+1, r_t|h_t,a_t) ds_t+1 dr_t = ∫ r_t ∫ P(s_t+1,r_t|c_t,a_t) ds_t+1 dr_t = ∫ r_t P(r_t | c_t, a_t) dr_t = 𝔼[r_t|c_t, a_t]. Let Assumption <ref> and let the loss in <ref> be jointly minimized by f,g,g_AR, then c_t satisfies P2, i.e., P(c_t+1|h_t) = P(c_t+1|c_t). P(c_t+1|h_t,a_t) =∫∫ P(s_t+1,r_t|h_t,a_t)P(c_t+1|h_t, s_t+1,r_t,a_t)ds_t+1 dr_t = ∫∫ P(s_t+1,r_t|c_t,a_t)P(c_t+1|h_t, c_t, s_t+1,r_t,a_t)ds_t+1dr_t = ∫∫ P(s_t+1,r_t|c_t,a_t)P(c_t+1|c_t, s_t+1,r_t,a_t)ds_t+1dr_t = P(c_t+1|c_t,a_t). where the second equality is due to lemma <ref> and the penultimate equality is due to c_t+1 being a deterministic function of c_t,s_t+1,r_t and a_t We now provide the proofs for the setting described in Section <ref>, where there may be errors in the CPC learning, and the data does not necessarily satisfy Assumption <ref>. We recapitulate that we consider an iterative policy improvement algorithm with a similarity constraint on consecutive policies, similar to the PPO algorithm we use in practice <cit.>. We shall bound the suboptimality of policy improvement, when data for training CPC is collected using the previous policy, denoted π_k. We will show optimal policy bounds when the information state is approximate, similar in spirit to <cit.>, but with additional technicalities. Under the setting above, we will bound the suboptimality in policy improvement in terms of an error in CPC training, which we denote ϵ. In light of the bound from <ref>, we assume the following: <ref> There exists an ϵ such that for every t≤ T, I(s_t+1,r_t;c_t|a_t) ≥ I(s_t+1,r_t;h_t|a_t) - ϵ, where the histories are distributed according to policy π_k. theorem-1 We now define P_π(h_t) as the probability of seeing a history under a policy π. For the sake of simplicity, for the subsequent section we will refer to P_π(h_t) as P(h_t). Furthermore, when the information state is approximate, we denote the information state generator σ̂_t. 
We begin with the following bound. Let Assumption <ref> hold, then 𝔼_h_t ∼ P(h_t) [D_KL(P(s_t+1,r_t|h_t,a_t) || P(s_t+1,r_t|σ̂_t(h_t),a_t) ]≤ϵ Let Assumption <ref> hold, then I(s_t+1,r_t;h_t|c_t,a_t) ≤ϵ We start with a result similar to the data processing inequality. We have that I(s_t+1,r_t;h_t,c_t|a_t)= I(s_t+1,r_t;c_t|h_t,a_t) + I(s_t+1,r_t;h_t|a_t) and, I(s_t+1,r_t;h_t,c_t|a_t)= I(s_t+1,r_t;h_t|c_t,a_t) + I(s_t+1,r_t;c_t|a_t) From the causal graph we have that I(s_t+1,r_t;c_t|h_t,a_t)=0, yielding I(s_t+1,r_t;h_t|a_t) = I(s_t+1,r_t;h_t|c_t,a_t) + I(s_t+1,r_t;c_t|a_t) ⇒ I(s_t+1,r_t;h_t|a_t) - I(s_t+1,r_t;c_t|a_t) = I(s_t+1,r_t;h_t|c_t,a_t) Combined with <ref> we get that I(s_t+1,r_t;h_t|c_t,a_t) ≤ϵ We note that from here on out everything is conditioned on a_t, and omit it to avoid overly cumbersome notation. For ease of notation we define: z=s_t+1,r_t. We note that given a specific h_t, we have: D_KL (P_z|h_t || P_z|σ̂_t(h_t) )=∫_z P(z|h_t) ·log ( P(z|h_t)/P(z|c_t) ) I(z;h_t | c_t) = 𝔼_h_t ∼ P(h_t) [D_KL (P_z|h_t || P_z|σ̂_t(h_t) ) ] I(z;h_t|c_t) =𝔼_P_σ_t(h_t)=c_t [D_KL ( P_z,h_t|c_t || P_z|c_t· P_h_t | c_t ) ] =𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t|c_t) log ( P(z,h_t|c_t)/P(z|c_t) · P(h_t | c_t) ) ] =𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t|c_t) log ( P(z,h_t,c_t)· P(c_t)/P(z,c_t) · P(h_t, c_t) ) ] =𝔼_P_σ_t(h_t)=c_t [∫_h_t∫_zP(z,h_t,c_t)/P(c_t) log ( P(z|h_t)/P(z|c_t) ) ] = ∫_c_t∫_h_t∫_zP(z,h_t,c_t) log ( P(z|h_t)/P(z|c_t) ) = ∫_c_t∫_h_t∫_zP(z|h_t)P(h_t,c_t) log ( P(z|h_t)/P(z|c_t) ) = ∫_c_t∫_h_t P(h_t,c_t)∫_zP(z|h_t) log ( P(z|h_t)/P(z|c_t) ) = ∫_h_t P(h_t)∫_c_tδ_c_t=σ_t(h_t)∫_zP(z|h_t) log ( P(z|h_t)/P(z|c_t) ) =𝔼_h_t ∼ P(h_t) [D_KL(P(s_t+1,r_t|h_t) || P(s_t+1,r_t|σ̂_t(h_t)) ] We now complete the proof. Combining Proposition <ref> with Proposition <ref> we get that 𝔼_h_t ∼ P(h_t) [D_KL (P_s_t+1,r_t|h_t || P_s_t+1,r_t|σ̂_t(h_t) ) ]≤ϵ as required. Let π_k(σ̂_t(h_t)) denote the policy at iteration k, and note that it is defined on the information state. At iteration k+1, we first collect data using π_k. We denote P_π_k(h_t) the probability of observing a history in this data collection process. We then use CPC to learn an approximate information state. Let D(h_t) = D_KL (P_s_t+1,r_t|h_t,a_t || P_s_t+1,r_t|σ̂_t(h_t),a_t ). Let Assumption <ref> hold, then ∑_h_t P_π_k(h_t) D(h_t) ≤ϵ. Assumption <ref> holds, therefore the result is an immediate corollary from <ref> for every t∈ 0,1,…,T-1. For some distance measure D, let Π_β = {π : D(π(h_t), π_k(σ̂_t(h_t))) ≤β ∀ h_t } denote the set of policies that are β-similar to π_k. We next define the optimal next policy π^* π^* ∈_π∈Π_β𝔼^π[ ∑_t=0^T-1 r(s_t,a_t) ]. Note that the value of this policy satisfies the following Bellman optimality equations: Q_t(h_t, a_t) = r(h_t, a_t) + 𝔼[ V_t+1(h_t+1)] V_t(h_t) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q_t(h_t, a), for t≤ T, and V_T(h_T) = 0. We now present our main result, where we consider an iterative policy improvement scheme based on the approximate information state of ContraBAR and provide policy improvement bounds. <ref> Let Assumption <ref> hold for some representation c_t. Consider the distance function between two distributions D(P_1(x),P_2(x)) = max_x | P_1(x)/P_2(x) |. We let r̂(c_t,a_t)=𝔼[r_t|c_t,a_t] and P̂(c'|c_t,a_t)=𝔼[1(c_t+1=c')|c_t,a_t] denote an approximate reward and transition kernel, respectively. 
Define the value functions Q̂_t(c_t, a_t) = r̂(c_t, a_t) + ∑_c_t+1P̂(c_t+1|c_t,a_t) V̂_t+1(c_t+1) V̂_t(c_t) = max_π: D(π(c_t), π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a), <ref>2.2em for t≤ T, and V̂_T(c_T) = 0, and the approximate optimal policy π̂(c_t)∈_π: D(π, π_k(c_t)) ≤β∑_aπ(a) Q̂_t(c_t, a). <ref> Let the optimal policy π^*(h_t) be defined similarly, but with h_t replacing c_t in (<ref>) and (<ref>). Then we have that 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ϵ^1/3 R_max T^2 (√(2) + 4β^T). theorem-1 Since Assumption <ref> holds, Proposition <ref> does as well. From the Markov inequality, we have P_π_k(D(h_t) ≥ n ϵ) ≤ϵ/n ϵ = 1/n. We now the define the “Good Set” H_G = { h_t : D(h_t) < n ϵ} and the “Bad Set” H_B = { h_t : D(h_t) ≥ n ϵ}. Next, we define an auxiliary policy π̃(h_t) = π^*(h_t), if h_t ∈ H_G worst behavior, if h_t ∈ H_B. We will assume that after observing h_t ∈ H_B, the policy performs as bad as possible for the rest of the episode. Next, we bound the performance of π̃. We have that 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̃[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ 2T^2 R_maxβ^T/n . We will denote by r_t(h_t) the reward at the last state-action pair. That is, for h_t = s_0,a_0,r_0,…,s_t-1,a_t-1,r_t-1,s_t we set r_t(h_t) = r_t-1. We will denote R(h_t) the sum of rewards, that is, R(h_t) = ∑_t'=0^t-1 r_t'. We also denote by P_π(h_t) the probability of observing history h_t under policy π. Note that by definition ∑_t=0^T-1∑_h_t P_π(h_t) = 1. Also, note that by the definition of the set Π_β, for any two policies π_1,π_2 ∈Π_β we have P_π_1(h_t)/P_π_2(h_t) ≤β^t. We now claim that 𝔼^π̃[ ∑_t=0^T-1 r_t ] ≥𝔼^π^*[ ∑_t=0^T-1 r_t ] - 2T^2 R_maxβ^T/n. We first estimate the probability that policy π̃ encounters a history in H_B. Consider some t∈ 0,…,T-1. We have that under P_π_k, with probability at most 1/n, h_t ∈ H_B. Under P_π̃, with probability at most β^t/n, h_t ∈ H_B. From the union bound, with probability at most Tβ^T/n the policy visits at least one history in H_B. Let H̅_B denote the set of T-length histories that visit a history in H_B, and let H̅_G be its complement set. Now, note that 𝔼^π̃[ ∑_t=0^T-1 r_t ] = ∑_h_T P_π̃(h_T) R(h_T) = ∑_h_T∈H̅_G P_π̃(h_T) R(h_T) + ∑_h_T∈H̅_B P_π̃(h_T) R(h_T) = ∑_h_T∈H̅_G P_π^*(h_T) R(h_T) + ∑_h_T∈H̅_B P_π̃(h_T) R(h_T) ≥∑_h_T∈H̅_G P_π^*(h_T) R(h_T) + (Tβ^T/n) T (-R_max) = ∑_h_T P_π^*(h_T) R(h_T) - ∑_h_T∈H̅_B P_π^*(h_T) R(h_T) + (Tβ^T/n) T (-R_max) ≥𝔼^π^*[ ∑_t=0^T-1 r_t ] -2T^2 R_maxβ^T/n The third equality is from the definition of π^*. The fourth inequality relies on the reward function being bounded, i.e R(h_T) ≥ T(-R_max). This alongside the fact that ∑_h_T ∈H̅_BP_π̃(h_T) ≥ (TB^T/n) gives us the inequality. Note that the last inequality follows from the definition of π_*, wherein the probability of visiting at least one history in H_B is the same for π_* and π̃. Next, we note that using Pinsker's inequality, we have d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t) ≤√(2 d_KL(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t)), and that |𝔼[r_t|h_t, a_t] - 𝔼[r_t|c_t, a_t]| ≤ R_max d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t) |𝔼[V_t+1|h_t, a_t] - 𝔼[V_t+1|c_t, a_t]| ≤ R_max(T-t) d_TV(P_s_t+1,r_t|h_t,a_t,P_s_t+1,r_t|c_t,a_t) We next prove the following result. We have that Q̂_t(σ̂_t(h_t), a) ≥ Q^π̃(h_t,a) -α_t, V̂_t(σ̂_t(h_t)) ≥ V^π̃(h_t) -α_t, where α_t satisfies the following recursion: α_T = 0, and α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1. We prove by backward induction. The argument holds for T by definition. 
Assume that Equation (<ref>) holds at time t+1, and consider time t. If h_t ∈ H_B, then by definition Q̂_t(σ̂_t(h_t), a) ≥ Q^π̃(h_t,a), since π̃ will take the worst possible actions after observing h_t. Otherwise, h_t ∈ H_G and we have Q^π̃(h_t,a) - Q̂_t(σ̂_t(h_t), a) = 𝔼[r_t|h_t, a] + 𝔼[V^π̃_t+1(h_t+1)|h_t, a] - r̂(σ̂_t(h_t), a) - ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) = 𝔼[r_t|h_t, a] - 𝔼[r_t|c_t,a_t] + 𝔼[V^π̃_t+1(h_t+1)|h_t, a] - 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] + 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] - ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) ≤ √(2 n ϵ)R_max + α_t+1 + √(2 n ϵ)R_max(T-t). We note that for h_t ∈ H_G, D(h_t) ≤ nϵ, yielding the d_TV bounds. For the second part, If h_t ∈ H_B, then by definition V̂_t(σ̂_t(h_t)) ≥ V^π̃(h_t). Otherwise, h_t ∈ H_G and we have V^π̃_t(h_t) - V̂_t(σ̂_t(h_t)) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃_t(h_t, a) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a) ≤max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) (Q̂_t(σ̂_t(h_t), a)+ α_t) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a) =α_t. We next define another auxiliary policy π̃̂̃(h_t) = π̂(h_t), if h_t ∈ H_G optimal behavior, if h_t ∈ H_B. We will assume that after observing h_t ∈ H_B, the policy perform optimally for the rest of the episode. Therefore, V^π̃̂̃_t(h_t ∈ H_B) = V_t(h_t). We have the following results, analogous to Propositions <ref> and <ref>. We have that 𝔼^π̃̂̃[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ 2T^2 R_maxβ^T/n . Analogous to the proof of Proposition <ref>. We have that Q̂_t(σ̂_t(h_t), a) ≤ Q^π̃̂̃(h_t,a) + α_t, V̂_t(σ̂_t(h_t)) ≤ V^π̃̂̃(h_t) +α_t, where α_t satisfies the following recursion: α_T = 0, and α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1. Similarly to the proof of Proposition <ref>. The argument hold for T by definition. Assume that Equation (<ref>) holds at time t+1, and consider time t. If h_t ∈ H_B, then by definition Q̂_t(σ̂_t(h_t), a) ≤ Q^π̃̂̃(h_t,a), since π̃̂̃ will take the best possible actions after observing h_t. Otherwise, h_t ∈ H_G and we have Q̂_t(σ̂_t(h_t), a) - Q^π̃̂̃(h_t,a) = r̂(σ̂_t(h_t), a) + ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) - 𝔼[r_t|h_t, a] - 𝔼[V^π̃̂̃_t+1(h_t+1)|h_t, a] = 𝔼[r_t|σ̂_t(h_t),a_t] - 𝔼[r_t|h_t, a] + 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] - 𝔼[V^π̃̂̃_t+1(h_t+1)|h_t, a] + ∑_c_t+1P̂(c_t+1|σ̂_t(h_t),a)V̂_t+1(c_t+1) - 𝔼[V̂_t+1(σ̂_t+1(h_t+1))|h_t, a] ≤ √(2 n ϵ)R_max + α_t+1 + √(2 n ϵ)R_max(T-t). For the second part, If h_t ∈ H_B, then by definition V̂_t(σ̂_t(h_t)) ≤ V^π̃̂̃(h_t). Otherwise, h_t ∈ H_G and we have V̂_t(σ̂_t(h_t)) - V^π̃̂̃_t(h_t) = max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q̂_t(σ̂_t(h_t), a) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃̂̃_t(h_t, a) ≤max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) (Q^π̃̂̃_t(h_t, a) + α_t) - max_π: D(π(h_t), π_k(h_t)) ≤β∑_aπ(a) Q^π̃̂̃_t(h_t, a) =α_t. We have 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤𝔼^π̃[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] + 2T^2 R_maxβ^T/n = ∑_h_0 P(h_0) ( V_0^π̃(h_0) - V_0^π̂(h_0)) + 2T^2 R_maxβ^T/n ≤∑_h_0 P(h_0) ( V_0^π̃(h_0) - V̂_0(σ̂_0(h_0)) + V̂_0(σ̂_0(h_0)) - V_0^π̂(h_0)) + 2T^2 R_maxβ^T/n ≤∑_h_0 P(h_0) ( V_0^π̃(h_0) - V̂_0(σ̂_0(h_0)) + V̂_0(σ̂_0(h_0)) - V_0^π̃̂̃(h_0)) + 4T^2 R_maxβ^T/n ≤ 2α_0 + 4T^2 R_maxβ^T/n We note that the first inequality follows from Proposition <ref>. The second equality stems from the fact that P(h_0)=P(s_0), which is not affected by the choice of policy. The fourth transition follows from the addition and subtraction of V_0^π̃̂̃(h_0) and the use of Proposition <ref>. 
The final inequality follows from Propositions <ref> and <ref>. Let us bound α_0. By the recursion α_t = √(2 n ϵ)R_max (T-t+1) + α_t+1 we have that α_0 = T^2/2√(2 n ϵ)R_max. Setting n = ϵ^-1/3 we obtain the desired result: 𝔼^π^*[ ∑_t=0^T-1 r(s_t,a_t) ] - 𝔼^π̂[ ∑_t=0^T-1 r(s_t,a_t) ] ≤ T^2√(2)ϵ^1/3 R_max + 4T^2 R_maxβ^T ϵ^1/3 = ϵ^1/3 R_max T^2 (√(2) + 4β^T). § ENVIRONMENTS Reacher Image: In this environment, a two-link planar robot needs to reach an unknown goal as in <cit.>, except that the goal is randomly chosen along a horizontal section of 0.48. For each task, the agent receives a reward of +1 if it is within a small radius r=0.05 of the goal, and 0 otherwise. r_t= 1, if ‖ x_t - x_goal‖_2≤ 0.05 0, otherwise where x_t is the location of the robot’s end effector. The agent observes single-channel images of size 64 × 64 of the environment]. The horizon is set to 50 and we aggregate k = 2 consecutive episodes to form a trajectory of length 100. Panda Reacher: A Franka Panda robot tasked with placing the end effector at a goal on a 2d semi-circle of radius 0.15 with fixed z=0.15/2 in 3d-space. The task is adapted from the Reacher task in Panda Gym <cit.>, with the goal occluded. For each task, the agent receives a reward of +1 if it is within a small radius r = 0.05 of the goal, and 0 otherwise. r_t= 1, if ‖ x_t - x_goal‖_2≤ 0.05 0, otherwise where x_t is the current location of the end effector. The action space is 3-dimensional and bounded [-1, 1]^3. The agent observes a 3-channel image of size 84 × 84 of the environment. We set the horizon to 50 and aggregate k=3 consecutive episodes to form a trajectory of length 150. Panda Wind: This environment is identical to Panda Reacher, except that the goal is fixed and for each task the agent experiences different wind with shifts the transition function, such that for an MDP ℳ the transition function becomes s_t+1 = s_t + a_t + w_ℳ where w_ℳ is task specific and drawn randomly from a circle of radius 0.1. To get to the goal and stay there, the agent must learn to quickly adapt in a way that cancels the effect of the wind. § IMPLEMENTATION DETAILS In this section we outline our training process and implementation details, exact hyperparameters can be found in our code at <https://github.com/ec2604/ContraBAR> The CPC component termed g_AR consists of a recurrent encoder, which at time step t takes as input the tuple (a_t, r_t+1, s_t+1). The state, reward and action are passed each through a different fc layer (or a cnn feature-extractor for the states in image-based inputs). Our CPC projection head takes in (c_t^a, z_t+k) and passes it through one hidden layer of half the input size, with an ELU activation function. § ARCHITECTURE DETAILS In this section we detail practical considerations regarding the CPC architecture. In Section <ref> we described situations where c_t does not need to encode belief regarding the task in order to distinguish between positive and negative observations. This is detrimental to learning a sound sufficient statistic as we would like c_t to encode information regarding the reward and transition functions, as they are what set apart each task. In order to prevent this “shortcut” from being used, we can perform hard negative mining. We do this by using negative observations that cannot be distinguished from the positive observation without belief regarding the transition and reward functions. 
In the case where only the reward functions vary, we can do this by taking the state and action of the positive observation and sampling a new reward function. We then calculate the respective reward and embed it as a negative observation alongside the original state and action. By having the positive and negative observations share the same state and action, we ensure that c_t must be informative regarding the reward function in order to distinguish between positive and negative observations. We note that in this modified setup we use s_t as the initial hidden state for the action-gru and include the original c_t as input to the CPC projection head. This ensures that the gradient of the loss with respect to the action-gru does not affect c_t, which should encode information regarding the reward function. For the case where the environments only vary in reward functions, we propose a simpler solution which is to omit the action-gru, as the future actions except for a_t+k-1 do not affect r_t+k. We can simply use (c_t,z_t+k) as input to the CPC projection head – we note that in this case z_t+k is an embedding of the reward, state and action. We found in practice that this simplification also worked well for the environments we used where the transitions varied. The modified architecture where the action-gru is omitted can be seen in Figure <ref>. In Figure <ref> we demonstrate on the Ant-Goal environment that omitting the action-gru and reward-relabeling with the action-gru yield similar results. Finally, we note that hard-negative mining can be done for varying transitions by sampling a random transition from the prior and simulating the transition to some s_t+k given s_t, a_t,…, a_t+k. § IMAGE-BASED INPUTS ARE COMPUTATIONALLY RESTRICTIVE FOR VARIBAD To understand the computational restriction in VariBAD <cit.>, we look to the formulation of the VAE objective. For every timestep t, the past trajectory τ_:t is encoded to infer the posterior q(m|τ_:t), and used by the decoder to reconstruct the entire trajectory including the future. In our analysis we restrict ourselves to the memory required for reconstruction of the reward trajectory, in an image-based domain, under the following assumptions: [label=(*)] * images of dimension d× d × 3 are embedded to a representation of size 32 via 3 convolutions with 32 channels each and kernels of size, with strides 2,2,1 respectively. * actions are of size 2 and embedded with a linear layer of size 16 * The trajectory is of length 120, which is average for the domains in meta RL . We draw attention to the fact that reward decoder in VariBAD receives s_t, s_t+1 as input, requiring us to take into consideration the memory required for embedding the image trajectory. On top of this, we also consider three times the size of the parameters of the image encoder (parameters, gradients and gradient moments). We present the memory consumption as a function of the image dimensions in <ref>. We note that in practice we often wish to decode multiple trajectories at once, and we also need to take into account encoder portion of the model as well as its gradients.
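Returning to the hard negative mining discussed at the start of this appendix, the reward-relabelling construction can be sketched as follows (hypothetical helper names; the prior over reward functions and the tuple layout are assumptions): each negative keeps the state and action of the positive, but its reward is recomputed under a different reward function sampled from the prior, so c_t must encode the reward function to tell them apart.

import random

def reward_relabelled_negatives(positive, reward_fn_prior, n_neg):
    # positive is an assumed (state, action, reward) tuple from the anchor trajectory;
    # each negative shares the state and action but carries a reward recomputed under a
    # reward function sampled from the prior.
    s, a, _ = positive
    negatives = []
    for _ in range(n_neg):
        reward_fn = random.choice(reward_fn_prior)
        negatives.append((s, a, reward_fn(s, a)))
    return negatives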
http://arxiv.org/abs/2306.06770v2
20230611205014
Improving Knowledge Extraction from LLMs for Robotic Task Learning through Agent Analysis
[ "James R. Kirk", "Robert E. Wray", "Peter Lindes" ]
cs.AI
[ "cs.AI", "cs.HC", "cs.RO", "I.2.6; I.2.7" ]
Large language models (LLMs) offer significant promise as a knowledge source for robotic task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM but alone is insufficient for acquiring relevant, situationally grounded knowledge for an embodied robotic agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations, and thus enabling a robot to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous robot, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how a robot, by retrieving and evaluating a breadth of responses from the LLM, can achieve >75% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as indication of preference) is provided, while greatly reducing how much human oversight is needed. § INTRODUCTION Prompt engineering <cit.>, along with few-shot prompting <cit.>, has been shown to be an effective strategy for retrieving reasonable knowledge from a large language model (LLM) for robotic tasks. However, prompt engineering is not sufficient for robots. While “reasonableness” is often used as a criterion for accepting LLM responses <cit.>, a reasonable response will often fail in a specific circumstance for a robot. In addition to being reasonable (from the point-of-view of human interpretation given the task), the responses that a robot chooses to use must also be: * Interpretable by the robot's parsing capabilities. Robots typically lack human-level language capabilities. LLM responses must be understandable by the robot, meaning grammar and terminology is presented in a form that the agent can actually process. * Situated to the robot's environment. Objects, features, and relations referenced in an LLM response must be perceivable and unambiguously identifiable in the environment for the robot to ground the response successfully. * Matched to robot's embodiment and affordances. An LLM, trained on a large corpus describing human activities, will (generally) generate responses conforming with human embodiment and affordances. Responses that do not take into account a robot's often non-human embodiment (e.g., a single arm) will often be infeasible for that robot to execute. * Matched to individual human preferences. Users will have individual expectations about how tasks should be performed and what constitutes appropriate goals and behaviors in the current situation. Task success requires identifying and conforming to these preferences. The first three requirements are necessary for a robot to use an LLM response to act. We define such responses as viable. The final requirement is necessary to achieve the task as preferred by the human user. A response is situationally relevant if it is viable and matches the preferences of the user <cit.>.
To attempt to meet all of these requirements, we describe and evaluate a novel strategy that exploits existing prompting methods within a task-learning robot: Search Tree, Analyze and Repair, and Select (STARS). Similar to “agentic” uses of LLMs <cit.>, we employ the LLM as a component within a larger system. Like self-consistency <cit.>, STARS generates a large space of responses from the LLM (multiple responses to a query, situation, or specific task). However, instead of voting, the robot analyzes and evaluates each response for issues (mismatched embodiment, unknown words, ungrounded references) and attempts to repair these responses via targeted re-prompting of the LLM. To select among candidates, the agent queries the LLM for a “preferred” response. Finally, when a user is available, the robot solicits feedback, ensuring that learning incorporates user preferences. STARS produces a set of viable, reasonable responses for goal state descriptions. The robot then uses these descriptions to learn to achieve the task. We evaluate contributions of components of STARS and how it reduces user oversight. STARS achieves >75% task completion in the absence of oversight, and 100% task completion with oversight in our experiments. Further, tasks and subtasks are learned in one shot; a subtask (such as storing a dish) is learned in a single learning trial. In subsequent situations, neither the LLM nor oversight are needed for the robot to perform the subtask. § RELATED WORK Core features of our approach are 1) online task learning (no pre-training for domain or task), 2) the exploitation of multiple sources of knowledge, 3) proactive evaluation of LLM responses, and 4) one-shot task learning. We review related work in terms of these solution features. Inner Monologue <cit.> modifies its prompts based on feedback from the environment, robot, and user to elicit new responses when an action fails. Repair focuses on a single response at a time; STARS analyzes a set of responses to determine the result of using them, making evaluations and repairs before any response is selected and used. <cit.> plan sequences of subgoals from multiple LLM responses obtained from beam search (as in STARS) that does re-ranking based on feedback from the environment. SayCan <cit.> uses an LLM along with a trained set of low-level robot skills with short language descriptions for objects. The LLM is prompted multiple times for a high-level task to retrieve one low-level step at a time until a complete plan is found. To obtain knowledge of low-level tasks, SayCan is trained on over 68K teleoperated demonstrations and human-rated simulations. STARS embeds a few properties for object classes (e.g., whether an object can be “grabbed" by the robot) but requires no pre-training or prior exposure to the domain. TidyBot <cit.> and TIDEE <cit.> address robotic problems similar to our experimental task (tidying a kitchen). They also account for human preferences. TidyBot tries to elicit human preferences by having the LLM summarize over a few answers given by a human. TIDEE attempts to learn preferences by using “commonsense priors” learned previously by performing tasks in a “training house.” STARS does not depend on pre-training, but does elicit human preferences via NL dialogues. STARS attempts to verify LLM responses before attempting to achieve the goal indicated by a response. 
There are many approaches to verification of LLM knowledge, including 1) response sampling <cit.>, 2) use of other sources of knowledge (e.g., planning <cit.>, LLM <cit.>), and 3) human feedback/annotation (TidyBot). Recursively Criticizes and Improves (RCI) <cit.> verifies LLM output by prompting the LLM again to identify (potential) issues. <cit.> train a verifier to rank responses, while self-consistency <cit.> uses voting to select an answer. <cit.> combine all three of the above verification strategies, by eliciting responses from an LLM, ranking responses using an uncertainty metric (a source of knowledge other than the LLM), and then having humans annotate responses for further exploration. While these efforts address similar challenges (or aspects of them), a unique aspect of STARS is the proactive analysis of many responses (retrieved via prompting an LLM) through embodied reasoning. Analysis enables identification of known problems and targeted repair. The STARS robot also learns task knowledge in one-shot, during performance, without prior training. When confronted with the same or similar tasks in the future, the robot can efficiently execute the task without use of the LLM (or STARS). Encoding persistent task knowledge contrasts with in-context learning <cit.>. § APPROACH [9]r75mm < g r a p h i c s > ITL process for learning goals and policy. STARS extends an existing Interactive Task Learning (ITL, <cit.>) agent that learns a variety of diverse tasks (from puzzles to mobile patrol tasks) in many different physical (Fetch robot, mobile MAGIC robot, and tabletop arm) and simulated (e.g., AI2Thor, April simulator) robotic domains <cit.>. The ITL agent learns novel tasks from user descriptions of tasks and goals. Figure <ref> depicts the ITL learning process for goals the agent elicits from a user as part of learning the task. The entire ITL agent learning pipeline can also learn new concepts, new actions (when planning knowledge is insufficient) and lower-level skills via instruction. We focus on the goal-learning pipeline here because our method is to exploit an LLM to provide goal descriptions while leaving other aspects of the agent's task-learning pipeline unchanged. When a policy for achieving a goal is unknown, planning is used to find a sequence of actions consistent with the goal. If the planned actions achieve the goal, the agent performs a retrospective analysis of the executed actions. The purpose of the retrospective analysis is to learn long-term policy knowledge, enabling the agent to achieve the task goal in the future without further planning or learning (one-shot learning). This learning process has been used successfully in many domains, but depends on substantial user input to provide interpretable and accurate descriptions of goals. With STARS, the robot retrieves goal descriptions from the LLM (the rest of the task-learning process is the same). The purpose of STARS is to ensure that the goal descriptions it retrieves from the LLM are viable for the robot. When task goals obtained from the LLM are achieved by the robot, the existing retrospective analysis enables complete task-learning (with reduced user oversight). Below, we first outline a baseline prompting strategy, and then describe STARS, which supplements and extends the baseline approach to improve robotic task learning. [8]r75mm < g r a p h i c s > Baseline approach to elicitation of goal descriptions via template-based prompting (TBT). 
§.§ Prior Baseline: Template-based Prompting The robot employs template-based prompting <cit.> to elicit responses from the LLM. Templates enable the robot to construct prompts using context from the task and environment and introduce prompt examples matched to the robot's capabilities and embodiment. Figure <ref> outlines the baseline template-based prompting approach for generating task-goal descriptions (i.e., it replaces the NL-dialogue for “Get goal description from user" in Figure <ref>). A prompt template is chosen and instantiated with relevant context, the LLM is queried (potentially soliciting multiple responses with varying temperatures), and response(s) are chosen for execution. In this baseline approach, choices are ranked by the mean log probabilities of tokens in each response. Oversight, when present, is used to select an LLM response or to give a goal description (when all LLM-generated choices are unacceptable). The chosen response is used by the robot to attempt to perform the task and, if successful, learn a policy to execute the task in the future (as in Figure <ref>). Few-shot examples in the prompt bias the LLM toward responses that are viable and relevant, matching robot's NLP capabilities, desired semantic content (e.g., simple goal statements), and embodiment limitations <cit.>. This baseline approach learns the task in one shot but requires substantial user oversight to overcome errors <cit.>. STARS extends and supplements the baseline with three processes to address limitations of the baseline: retrieving a tree of LLM responses via beam search, analyzing and repairing responses, and using an LLM to select a goal response from the candidates. Below, we discuss various patterns of errors that motivated our approach and how STARS mitigates them. Finally we describe the optional oversight strategy of soliciting user feedback. Figure <ref> outlines the process of the STARS approach (blue boxes are re-purposed elements from TBP; green boxes are new components of STARS). §.§ Search Tree (ST) In prior work (Figure <ref>), we used higher temperature parameters to retrieve multiple responses for the same prompt. This approach resulted in many duplicate responses and more responses that were not viable, deviating from targeted content and form. Similar to others <cit.>, we enable the robot to use a beam-search strategy to generate a breadth of high probability responses from a single prompt. §.§ Analyze and Repair (AR) While many of the responses retrieved from the LLM are reasonable, they often fail to meet other requirements: being matched to the robot's embodiment, language capabilities, and situation. A robot that attempts to use such a mismatched response will fail to achieve the task. Analysis and Repair detects and categorizes mismatches, drawing on the cognitive agent's knowledge and capabilities to identify problems, and then attempts to repair responses with identifiable mismatches. The overall process for Analysis and Repair is illustrated in Figure <ref>. The robot performs a mental simulation of what would happen if it attempted to use a response from the LLM, using the same knowledge of parsing and grounding it uses when performing the task. 
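To make the Search Tree retrieval concrete, the sketch below outlines the beam expansion over token log probabilities together with the mean-log-probability ranking used by the baseline. The `complete` callable, the `TokenStep` structure, and the bookkeeping are illustrative assumptions rather than our actual implementation; only the thresholds (expand tokens below 90% probability, follow alternatives above 5%, recursion depth 3) follow the values reported later in the appendix.

```python
import math
from dataclasses import dataclass, field

@dataclass
class TokenStep:
    """One generated token with its log probability and top alternatives."""
    token: str
    logprob: float
    alternatives: dict = field(default_factory=dict)  # alt token -> log prob

def mean_logprob(steps):
    """Mean token log probability, used by the baseline to rank responses."""
    return sum(s.logprob for s in steps) / max(len(steps), 1)

def search_tree(complete, prompt, prefix="", depth=0, max_depth=3,
                expand_below=math.log(0.90), follow_above=math.log(0.05)):
    """Collect a tree of high-probability responses from a single prompt.

    `complete(prompt, prefix)` is a caller-supplied function that returns the
    LLM's temperature-0 continuation of prompt+prefix as a list of TokenStep.
    Whenever a generated token falls below the 90% probability threshold,
    each alternative above 5% spawns a new completion, up to the depth limit."""
    steps = complete(prompt, prefix)
    responses = [prefix + "".join(s.token for s in steps)]
    if depth < max_depth:
        consumed = prefix
        for step in steps:
            if step.logprob < expand_below:
                for alt, lp in step.alternatives.items():
                    if alt != step.token and lp > follow_above:
                        responses += search_tree(complete, prompt,
                                                 consumed + alt,
                                                 depth + 1, max_depth)
            consumed += step.token
    return list(dict.fromkeys(responses))  # drop duplicate responses
```

In the baseline, the highest mean-log-probability candidate is used directly; in STARS these candidates instead feed the analysis, repair, and selection steps described next.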
Analysis includes a linguistic evaluation (orange: whether the robot can parse and interpret the language and terms), a grounding analysis (purple: whether each referent in the response can be grounded to an object observable in the environment), and an analysis of affordances (green: whether the actions on objects implied by clauses in the goal response can be achieved by the robot). The “AR” process currently addresses these three sources of mismatch (future work can extend repair to additional sources of mismatch): * Language: To assess potential interpretability issues, the robot parses the response with its native NLP capabilities and examines the output. The language processor indicates if a sentence can be interpreted and identifies unknown words. * Situation: To detect grounding issues, the robot evaluates the results of its language comprehension process. When a sentence contains a referring expression to an object, such as a cabinet, the robot's language processing identifies grounding candidates observable by the robot. Failure to ground a referent indicates a mismatch with the current situation. * Embodiment and Affordance: The robot detects embodiment and affordance mismatches using its knowledge of objects (semantic memory) and properties detected from perception (environment). E.g., when it processes a clause in a goal response such as “the dish rack is in the cabinet,” it evaluates if the object to be moved (“dish rack") has property “grabbable.” Repair is coupled to these diagnostic mismatches detected during analysis. For each case, to repair the mismatch the robot constructs a new prompt using a template for repairing that category of mismatch. The robot instantiates the template by appending the non-viable response with an instruction indicating the specific mismatch that occurred, e.g., “No. Cannot see a cabinet.” or “No. Rack is not grabbable.”[The appendix includes a complete example of the prompts for repairs and for selection.] This repair prompt is then used to generate a new tree of responses (via ST). §.§ Selection (S) ST and AR are designed to generate viable candidates. The robot must select one response to use. Rather than using mean log probability (as in the Figure <ref> system) or voting (as in self-consistency <cit.>), the new Selection strategy employs the LLM for choosing a response. The robot constructs a prompt with the candidates and asks which is the most reasonable goal given task context. The prompt solicits a single integer response from the LLM indicating which response is the best. §.§ User Oversight (O) The correct goal for a task depends on human preference (e.g., some users prefer to store cereal in the cupboard, others prefer the pantry). The original ITL agent solicited all task knowledge from a human. STARS reduces user interaction while still ensuring correct learning. The robot solicits user feedback by asking if a retrieved goal is correct (yes/no) before using it. Selection determines which option to present. If the first response is rejected, Selection is repeated with the rejected option removed. If all responses are rejected, the user must provide the correct goal description. § EXPERIMENT DESIGN Here we describe an experimental design, measures, and results for evaluating STARS for online learning of a “tidy kitchen” task. We evaluate how well STARS addresses the previously outlined requirements and also examine the relative impact of components of STARS. 
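To make the Analyze and Repair step described above concrete before detailing the setup, the sketch below shows the three mismatch checks and the construction of a targeted repair prompt. The lexicon, visible-object set, and affordance table are toy stand-ins for the robot's actual parser, perception, and semantic memory, and a goal description is assumed to already be parsed into subject/relation/target clauses; only the form of the repair instructions ("No. Cannot see a cabinet.", "No. Rack is not grabbable.") follows the paper.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-ins for the robot's lexicon, perception, and affordance knowledge.
LEXICON = {"mug", "cupboard", "dishwasher", "dish rack", "table", "cabinet",
           "in", "on", "closed", "open"}
VISIBLE = {"mug", "cupboard", "dishwasher", "dish rack", "table"}
GRABBABLE = {"mug", "plate", "ketchup"}

@dataclass
class Clause:
    subject: str              # e.g. Clause("mug", "in", "cupboard")
    relation: str
    target: Optional[str] = None

def analyze(clauses):
    """Return (mismatch category, repair instruction) for the first mismatch
    found, or None if the goal description appears viable to the robot."""
    for c in clauses:
        for term in filter(None, (c.subject, c.relation, c.target)):
            if term not in LEXICON:
                return "language", f"No. Unknown word {term}."
        for obj in filter(None, (c.subject, c.target)):
            if obj not in VISIBLE:
                return "grounding", f"No. Cannot see a {obj}."
        if c.relation in ("in", "on") and c.subject not in GRABBABLE:
            return "affordance", f"No. {c.subject.capitalize()} is not grabbable."
    return None

def repair_prompt(base_prompt, bad_response, instruction):
    """Targeted re-prompt: append the mismatched response and the robot's
    repair instruction, then re-query the LLM via Search Tree."""
    return f"{base_prompt}{bad_response}\n{instruction}\n"

print(analyze([Clause("mug", "in", "cabinet")]))         # grounding mismatch
print(analyze([Clause("dish rack", "in", "cupboard")]))  # affordance mismatch
```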
Setting: A simulated office kitchen with a mobile robot created in the APRIL MAGIC simulator. The robot can move around the kitchen, approach objects, and has a single arm that can grasp and manipulate all objects relevant to the task to be learned. For the experiment, the kitchen is populated with 35 objects that commonly exist in a kitchen (plates, condiments, utensils, etc.). Objects are initially distributed on a table, counter, and in the dish rack. Simulation: Although prior work has used a physical robot <cit.>, this experiment is done in simulation, which is sufficient for investigating grounding of concepts and interpreting and learning from the descriptions provided by STARS. If we were seeking to learn other knowledge, such as low-level control or perceptual classifiers, use of an embodied robot would be necessary. Learning Task: The user presents a single task (“tidy kitchen”) and primary subtasks: clearing, storing, and unloading all the objects from the table, counter, and dish rack. Task success is measured by the fraction of objects moved to a location consistent with user preferences. Also, when another object is manipulated to achieve a task (e.g., opening a refrigerator door to put away ketchup), that object must be in its desired state for task success (e.g., the door must be closed). Four object types have multiple instances that must be treated differently based on their positions (e.g., a mug on the table must be in the dishwasher or sink but a mug in the dish rack must be in the cupboard). Using the approach in Figure <ref> (or a STARS variant as below), the robot acquires goal descriptions for each perceived object. It then uses the processing described in Figure <ref> to learn the goal and action policy, enabling it to correctly process that object in the future without the LLM, planning, or oversight. Experimental conditions: Experimental conditions are enumerated in Table <ref>. The TBP conditions are baselines for assessing the impact of the components of STARS. For all conditions the LLM used is GPT-3 (for TBP, Search Tree, and Repair) and GPT-4 (for Selection). In all conditions, a user provides the initial task. In the Oversight conditions, the user reviews up to 5 responses. In non-oversight conditions, the choice of goal is based on the highest mean log probability of candidates (ST and STAR) or the Selection strategy (STS and STARS). Measures: We assess conditions in three dimensions: performance, response quality, and cost. For performance, task completion rate (number of goal assertions achieved / total number of goal assertions) is the primary measure. For response quality, we evaluate how well responses align with requirements for situational relevance, viability, and reasonableness. User effort is the largest factor impacting cost but cannot be measured directly. To estimate effort we use the number of interactions and words used in those interactions as well as the percentage of accepted goals. LLM costs are evaluated via tokens presented (prompts) and generated (responses). § EXPERIMENTAL RESULTS Discussion of experimental results are organized around the three measures introduced above. Table <ref> summarizes performance (task completion) and costs (tokens; oversight) for the conditions. For this experiment, we report results from only one run for each condition. Experimentation costs (esp. for oversight conditions) is significant while there is minimal variation (attributable to the LLM and STARS) in the overall results. 
To evaluate reproducibility, we ran the STARS condition 10 times. The task completion rate varied from 75% to 80% with a variance of 3.75. Appendix D further explores variability in experimental outcomes.
[Figure: Performance and user cost measures for experimental conditions.]
Performance: Figure <ref>(a) compares task completion rates for all experimental conditions. The baseline condition, TBP, achieves the experiment-defined targets (e.g., "mug in the dishwasher") only 52.5% of the time. Adding Oversight to the baseline condition (TBP+O) results in 100% task completion, but vastly increases the number of required words (Figure <ref>b). Because many responses from the LLM are not viable or situationally relevant, the user must provide goal descriptions, resulting in many more words of instruction. In the absence of oversight, STAR/STARS deliver a large gain in task completion (STARS: 77.5%). Analysis and Repair prevents the robot from using unviable responses and increases viable responses via repair. Search Tree (ST) alone results in a moderate improvement, but is a prerequisite for AR. With oversight, STARS task completion rises to 100% with much reduced user input. This gain comes from shifting user input from providing goal descriptions (often needed in TBP) to confirming LLM-generated goal descriptions with yes/no responses (STARS+O). In addition, as highlighted in Figure <ref>(c), the greater precision of STARS in generating acceptable goal descriptions results in the user accepting a much larger fraction of the goals in the oversight condition. Selection was not distinguishable from the baseline (mean log probability) choice strategy. Selection improves with greater context. For example, the robot's perception lacks the ability to discriminate between a "clean" and "dirty" mug. In the experiment, dishware on the table is assumed to be dirty (in terms of defining the target outcomes in the design) but the robot lacks this context. When such context is provided to the LLM[Context provided to GPT-4 as a System prompt: "Assume that dishware on the table or counter are dirty. Assume that bottles and cans are empty. Non-perishable food belongs in the pantry."], this STARS* variation of Selection achieves 92.5% task completion without user oversight. In the future, we will enable the user to provide this context directly. Quality of Responses: Figure <ref> shows the percentage of different classifications of the responses retrieved from the LLM for STARS.[Chart is representative of all conditions except TBP and Oversight; see appendix for each condition.] Responses are classified as: unviable (red), viable but not reasonable (orange), reasonable (yellow), or situationally relevant (green). Further categorization identifies the type of mismatch for unviable responses (unknown word, ungrounded object, uninterpretable, affordance mismatch) and for reasonable ones (reasonable alternative location, post-completion error, embodiment limitation). "Post-completion error" indicates a reasonable failure to close a door in situations where an object might not have a door. "Embodiment limitation" captures when the robot places an object in a location that would otherwise be reasonable if its sensing were not limited. Over 70% of responses are not viable, leading to failure if the robot executed them; only 13% are situationally relevant, meeting all four requirements.
Thus, analysis of responses appears essential for reliable use of an LLM by an embodied agent, to prevent the use of unviable goal descriptions. The baseline retrieves at least one situationally relevant response for only 15 of the 35 objects, while STARS results in 100% of the objects having at least one situationally relevant response.[See appendix for graphical analysis of all conditions.]
[Figures: Categorization of responses retrieved from the LLM (STARS condition); Fraction of responses used by the robot that are reasonable or situationally relevant; Total tokens sent to (hatched) and received (solid) from the LLM.]
We also assess response quality by evaluating how frequently the robot receives a viable, (at least) reasonable response (Figure <ref>). STARS (and STAR) results in 100% of the used responses being at least reasonable (mostly situationally relevant). This indicates that the 77.5% task completion STARS achieves is close to the best it can achieve in the absence of oversight. Human input is necessary to differentiate situationally relevant goals from reasonable ones. Cost: Table <ref> shows that oversight, in the form of instructions and words, is reduced by STARS (from 403 words to 127). While the magnitude of reduction in instructions is modest, the user is now asked to confirm a goal with a single word in comparison to supplying a complete goal description (none of which are needed in STARS+O). STARS+O also increases the precision of responses presented to the user (Figure <ref>c); 70% of presented responses are accepted as correct. Figure <ref> summarizes LLM cost in terms of tokens used for prompting and generation. Token cost increases substantially in Search Tree (ST) and Analysis and Repair (AR), which uses the recursive beam search. § LIMITATIONS Experiments in this paper test a single task and domain. We have performed experiments with TBP for other domains and tasks (e.g., preparing a conference room for a banquet), but not yet with STARS. We do plan to further refine and evaluate STARS across additional tasks and domains. Due to experimental costs (limited GPT budget and human expert time), it was not feasible to run all experiments many times. We did show that the overall variance for STARS is small and has marginal effect on key outcomes (details in Appendix D). LLM Selection does not provide a compelling improvement over the baseline (mean log prob) choice strategy. For future work, we will explore improvements to it, especially via the use of additional context that the robot can obtain from the user and (for some contexts) the LLM. § CONCLUSION LLMs present challenges as a sole source of knowledge for an embodied agent due to disconnects between the knowledge encoded in an LLM and a situated robot's specific knowledge needs. STARS enables a robot to more effectively exploit an LLM, mitigating these disconnects. The iterative STARS process shifts the role of the LLM from sole knowledge source to one source within a more comprehensive learning process. It both addresses LLM limitations and takes advantage of the knowledge, reasoning, and online learning capabilities of cognitive agents. Results also highlight that user input is necessary for satisfactory task completion. Thus, a secondary goal of robotic task learning from LLMs should be to reduce the cost of oversight. Our results demonstrate a ∼70% reduction in needed oversight.
While future work will seek to further increase quality of LLM extraction and to extend oversight (e.g., context for Selection), some type of user oversight will always be necessary to align learning with user desires and preferences. This work was supported by the Office of Naval Research, contract N00014-21-1-2369. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of Defense or Office of Naval Research. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. Technical Appendix § OBJECTS IN EXPERIMENTS In this section we describe the objects used for the experiments in the paper. Table <ref> shows the 35 objects used in the experiments, including their starting location in the kitchen and goal destination. All 35 objects listed have the property of “grabbable" (can be picked up by the robot). The objects are distributed on the counter, table, and in the dish rack. The goal destinations of the objects are evenly distributed across the recycling bin, garbage, drawer, sink/dishwasher, cupboard, pantry, and refrigerator. There are four duplicate object pairs (highlighted in bold), but due to their differing locations the goal state for each pair of duplicates is different (e.g., the steak knife on the table should be put in the dishwasher, the steak knife in the dish rack should be put in the drawer). For the goal destinations as designed in these experiments, dishes on the table or counter are treated as being dirty (reflecting the preferences of the user). However, some objects on the table must be treated differently. For example, the bottle-opener and cork screw have the goal of being placed directly into a drawer (as these objects are not typically washed after use). Using multiple instances of the same object type and having various different destinations from the same initial location were included in the design to 1) result in more challenging task to learn overall, and 2) evaluate how the LLM reacted to the different contexts. Table <ref> shows the 11 appliances and furniture in the simulated kitchen that serve as the locations and destinations for objects in the experiments. It specifies properties of the objects that relate to what actions can be performed on them (affordances), including surface (objects can be placed on it) and receptacle (objects can be placed in it). It also lists if the objects have the affordance of openable/closeable. Finally it lists the goal state of the objects in the experiment design (e.g., that the ones that can be closed must be closed). § STEP-BY-STEP EXAMPLE OF GOAL ELICITATION/LEARNING PROCESS In this section we describe, with more detail and a running example drawn from the experiments, the complete learning process using the STARS strategy to retrieve a breadth of responses from the LLM (GPT-3), analyze and repair responses, and select from the candidate options (GPT-4). We focus on a single object and the STARS+Oversight condition with no ablations. Figure  <ref> shows the simulation of the kitchen that is filled with objects that need to be tidied. We explore task learning in the context of a simulated office kitchen environment with a mobile robot that is capable of grabbing, moving, and interacting with objects. 
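The object and fixture knowledge summarized in the tables above can be pictured as a small lookup structure. The excerpt below is illustrative and covers only a handful of the 35 objects (the pairings shown — steak knife, mug, ketchup — are taken from the text); the full tables, exact property names, and the agent's internal representation differ.

```python
# Illustrative excerpt: each (object, starting location) pair maps to its
# preferred destination, and each fixture lists the affordances the robot
# reasons over (surface, receptacle, openable).
OBJECTS = {
    ("steak knife", "table"):     "dishwasher",
    ("steak knife", "dish rack"): "drawer",
    ("mug", "table"):             "dishwasher",
    ("mug", "dish rack"):         "cupboard",
    ("ketchup", "table"):         "refrigerator",
}
FIXTURES = {
    "table":        {"surface"},
    "counter":      {"surface"},
    "dish rack":    {"receptacle"},
    "cupboard":     {"receptacle", "openable"},
    "refrigerator": {"receptacle", "openable"},
    "dishwasher":   {"receptacle", "openable"},
    "drawer":       {"receptacle", "openable"},
}

def goal_assertions(objects, fixtures):
    """Desired final state: every object at its destination, and every
    openable fixture closed (e.g., the refrigerator door)."""
    asserts = [(obj, "in", dest) for (obj, _loc), dest in objects.items()]
    asserts += [(f, "is", "closed") for f, props in fixtures.items()
                if "openable" in props]
    return asserts

print(len(goal_assertions(OBJECTS, FIXTURES)))  # 5 objects + 4 closed doors
```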
First we describe initial interaction with the user, where the user gives the task and subtasks to learn, followed by the template-based prompting strategy that is the baseline for this approach and is used to select and instantiate a template for learning task knowledge. Finally we described the component strategies of STARS. §.§ Initial interaction, ITL agent The dialogue used to initiate teaching of the “tidy kitchen” task is listed below. The human provides the task to be learned, followed by subtasks to achieve clearing, storing, and unloading all the objects on the table, counter, and dish rack respectively. Instructor: Tidy kitchen. Robot: What is the next goal or subtask of tidy? Instructor: Repeat the following tasks while an object is on the table. Robot: What is the next goal or subtask of tidy? Instructor: Clear an object that is on the table. Robot: What is the next goal or subtask of clear? The robot then looks for objects that it observes on the table. After selecting an object, the agent attempts to clear the object. If it does not know a goal for this object, the robot initiates the STARS learning processing to acquire a goal description, starting by constructing a prompt for the goal using the context of the task and the object. Once all objects have been cleared from the table, part of the above dialog will be repeated for storing objects that are on the counter, and again for unloading objects from the dish rack. As an example going forward we will use a mug in the dish rack that the robot observes while learning to unload all the objects in the dish rack. The data is taken from the experiment for the STARS+Oversight condition. §.§ Prompt Construction Using template-based prompting, the robot selects a template for learning a task goal, which includes two prompt examples (from other tasks), and instantiates the prompt template with the relevant task context, the overall task “tidy kitchen,” the location of the robot “in the kitchen,” and the object observed “mug in the dish rack.” The initial prompt for the mug in the dish rack constructed by the robot is shown below. This work focuses specifically on retrieving goal knowledge, which enables the robot to search for the steps to achieve the goal. Prior work has shown that, using this strategy, a robot can retrieve action knowledge with 97% accuracy without additional interaction or evaluation by the robot, and therefore does not require the additional strategies proposed in this paper to find the actions needed to achieve a valid goal. Agent-created prompt: §.§ Search Tree To insure a range of responses to choose from, the robot uses beam search to retrieve a tree of responses for a single prompt. For any token with a log probability under 90%, new completions will be generated for alternative tokens that are above 5%. The logprobs GPT setting is set to 5, so four alternative responses will be retrieved from the LLM. This process is recursive (up to a recursion depth of 3). To further limit the recursion and the number of responses generated, we also limited a second recursion of responses to those where the total response generated so far has a mean log probability above 85%. These thresholds were selected after some pilot experiments, and are not tuned to the objects in the experiment data set. Lowering the thresholds results in a much larger space of responses retrieved. First the temperature 0 response for the prompt is retrieved. 
From the above prompt for the mug the temperature 0 response is: LLM response: For a deeper analysis of the beam search, the tokens of the response to the above prompt are listed below. Tokens in response: Of these tokens only ` dish,' `washer,' ` and,' and ` turned' have log probabilities below the threshold of 90%. The log probabilities for each are shown in Table <ref>, alongside the probabilities for each alternative potential response. Only tokens above the 5% threshold (highlighted in bold) will be expanded in the beam search. The highlighted tokens will be used to generate new completions, continuing the beam search in a recursive fashion. We continue the example with the first alternative token 'cup' instead of 'dish.' The agent sends the prompt again but now with the partial completion “The goal is that the mug is in the cup”. The complete prompt is shown below. Prompt for first level of recursion: The tokens of the response to this prompt are listed below. LLM response tokens for first recursion: Again the relative probabilities of the tokens are examined to continue the beam search. As before, the alternative tokens for tokens below 90% probability are shown in Table <ref>, and only tokens above the 5% threshold (highlighted in bold) will be expanded in the beam search. When Search Tree encounters an alternative token that contains a period, indicating the end of the sentence, such as for `and' above, it returns that completion as a response: “The goal is that the mug is in the cupboard.” Search Tree then continues the beam search recursion by generating a completion where `dish' is used in place of `cup' as shown in the prompt below. Prompt for second level of recursion: The LLM responds with another sequence of tokens: Response for second level of recursion: No further recursion is done beyond this point. A similar process is performed for the other branches of the response `tree.' After expanding the entire tree to this level, the final set of responses retrieved using Search Tree for a mug in the dish rack are sent to the robot for analysis. These responses with their probabilities are listed below. Final list of goals produced by tree search: §.§ Analysis Once the Search Tree process has retrieved a set of high probability responses from the LLM, STARS continues by analyzing each of the candidate responses to detect mismatches and determine which ones are viable for the robot. Each candidate is analyzed to determine if it matches robot's NLP capabilities, embodiment, affordances, and current environment. This analysis is performed through internal simulation, where the robot simulates learning from the response to proactively identify mismatches. The robot's language processor indicates if a sentence can be interpretted and identifies unknown words. It evaluates the results of the language comprehension grounding process to identify any referents in the response that could not be grounded to objects in the environment observable by the robot. Finally the robot detects affordance and embodiment mismatches using it knowledge of objects (from semantic memory) and properties of objects (detected through perception of the env.) by evaluating if the clauses in the response are achievable given its knowledge of affordances. The analysis categorizes responses as viable if they contain no mismatches, and for responses with mismatches identifies the category of mismatch and the specific issue. The viable goals for the mug are listed below. 
Agent analysis determines the following are viable: The goal responses that the robot determine are unviable are listed below, grouped by the type of mismatch. Uninterpretable responses (Language mismatch): In these cases the robot was not able to interpret these responses. Responses with unknown terms (Language mismatch): The robot does not have a definition of `started' and identifies it as an unknown word. Responses with ungrounded objects (Situation mismatch): There is no cabinet in the kitchen that the robot can observe, so it fails to ground the referent of cabinet to an object. Responses with an affordance mismatch (Embodiment/affordance mismatch): For the affordance mismatches, the robot detects an affordance violation for the dish rack being empty because its affordance knowledge for empty relates to objects that can be filled with a liquid (e.g. a water pitcher) and it does not have the fillable affordance for the dish rack. The dish rack is also not an object that the robot is capable of grabbing or moving, so it identifies an affordance mismatch that the rack is not grabbable. §.§ Repair Given the results of Analysis, the Repair strategy of STARS attempts to repair the detected mismatches by prompting the LLM again. It will attempt to repair three types of mismatches: ungrounded objects, unknown words, and affordance mismatches. For each type of mismatch the robot has a prompt template that it can instantiate that contains an example of repairing that type of mismatch (for another task). Otherwise the prompt template is the same as was used for the initial prompt (as seen in ST). The offending, mismatched responses is appended onto the prompt, followed by a response from the robot indicate the mismatch to repair. Below we continue the learning process for the mug in the dish rack, by showing the repairs performed on the responses for each of the types of mismatch. Repairing an ungrounded object: The first response the robot tries to repair is the response with an ungrounded object, cabinet, that the robot could not perceive in its environment. The robot selects a prompt template for repairing ungrounded object that includes an example of the repair. This prompt example, for an ungrounded shelf, can be seen at the beginning of the prompt below. The prompt is instantiated as before, but now with the mismatched response appended followed by the response from the robot indicating the mismatch to repair: “No. Cannot see a cabinet.” Prompt: This initial temperature 0 response from this prompt is listed below. Repair LLM Response: STARS doesn't just retrieve a single response it uses the beam search strategy from Search Tree to retrieve a set of responses to the repair as before. We won't step through the process again, as it is the same as before. The final responses generated from this repair prompt are shown below. Some of them are duplicates with responses already generated. Note that none of these responses refer to a cabinet anymore. Final output for repair of ungrounded cabinet: Repairing unknown terminology: STARS continues by repairing another response, with a different mismatch, a response with an unknown word. In this response, shown below, the robot does not know the word "started". As before, the robot selects a template for repairing unknown terms, containing an example of an unknown term repair (shown below), and instantiates it with the relevant task context, the mismatched response, and the robot's repair response: “No. 
Unknown word started.” The prompt is shown below. Prompt: As before, this prompt is used to generate a set of responses using the ST beam search, producing the goal descriptions listed below. Note that the repaired responses no longer contain “the dishwasher is started” and contains other terms to describe the state of the dishwasher. Final output for repair of unknown word started: In this case all these results are duplicates of ones found previously. Repairing an affordance mismatch: Next the robot performs a repair for a response (shown below) with an affordance mismatch. In this case, the dish rack is not grabbable and therefore cannot be put into the cupboard. The same process as before repeats, STARS selects a prompt template with an affordance repair example (shown below), instantiates with the task context, and provides the mismatched response and the robot's direction to repair the response: “No. Rack is not grabbable.” This prompt can be seen below. Prompt: Performing tree retrieval using this prompt results in a pair of goal descriptions that do not have the affordance mismatch. Final output for repair of affordance mismatch: The responses generated through repair will be analyzed again by the robot to determine if they are viable, or if they contain mismatches. The robot will attempt to repair mismatched responses generated from a repair again. It will not attempt to repair a response for a third time; there needs to be some limit to prevent the robot from making continual repair prompts. STARS detects duplicates before sending them for analysis to the robot so multiple repairs will not be attempted on duplicate responses. §.§ Selection After performing Search Tree, Analysis, and Repair, the robot has generated a set of viable response for goal descriptions for the task to tidy a mug in the dish rack. These responses, ordered by mean log probability, are listed below: Viable goal responses for a mug in the dish rack ordered by mean log probability Now the robot uses the LLM (in this case GPT-4) to select responses from the viable options by constructing a new prompt. It uses the selection prompt template and instantiates it with the candidate options and relevant task context. The prompt and the response from GPT-4 using the LLM selection strategy are shown below. A small example prompt of this selection is presented in the beginning of the prompt (one-shot prompting). The prompt solicits a single token response, after “Answer: ”, from the LLM for an integer indicating which of the responses is the best. The options are presented in order by their mean log probability (lowest to highest). (GPT-4 appears to have a small bias toward selecting the most recently presented option, hence this ordering which biases towards the higher probability responses). The order of options varies slightly between runs due to differences in the mean log probability calculated by the LLM. However, even with the temperature set to 0 and the same ordering of the same set of goals, there is occasional variance in the response. Example selection prompt, including a prompt example The response from GPT-4 (Temperature=0): The response from the prompt for LLM selection chooses “The goal is that the mug is in the cupboard and the cupboard is closed" as the best response for the goal for the mug in the dish rack. Without oversight the robot would select this goal description to learn from. 
In this case this is the correct goal, and shows benefit over the base line template-based prompting strategy of using the mean log probability, which would have selected an incorrect response of “The goal is that the mug is in the dish rack.” §.§ Oversight To achieve the requirement of learning situational relevant knowledge, we need to be sure that the goal for each specific object conforms to the preferences of the human user. STARS has produced a list of candidate goals, and used the LLM to select a preferred candidate. Neither the LLM nor the robot has knowledge of the preferences of this particular user in this particular selection, so confirmation by the user is required. To achieve this, the goal selected by STARS is now offered to the human for confirmation using this dialog: Robot: [LM] For a mug in the dish rack is the goal is that the mug is in the cupboard and the cupboard is closed? Instructor: yes. In this case the human responded in the affirmative. If the human responded negatively, the LLM Selection process would repeat, but with option 5 removed. This process repeats until the human confirms a goal as correct, the options produced from the LLM are exhausted, or the human is asked to confirm 5 different goal responses. Once these are exhausted, or the limit of questions is reached, the human is asked to describe the goal. This strategy of only asking the human for yes/no confirmations instead of asking for complete goal descriptions substantially reduces the amount of words required from the human to get 100% task completion, as shown by our experimental results. § ADDITIONAL DATA ANALYSIS FROM EXPERIMENTS In this section, we present and describe in more detail the experimental results outlined in the main body of the paper. Table <ref> presents an extended summary of the data presented in the main body of the paper. The columns of the table are: * Condition: The experimental condition. * Task Completion Rate: The fraction of the task completed by the agent in the condition. In the experiment, there are 35 objects with a desired final location (see Table <ref>) and 5 kitchen locations with a desired final state (such as “refrigerator door closed"; see Table <ref>). Task completion rate is computed as the fraction of these 40 assertions that match the desired final state. * Retrieved goals: The total number of goals generated by the LLM. A retrieved goal is produced an invocation of Template-based Prompting (baseline conditions) or Search Tree (STARS conditions, including use of Search Tree in Analysis and Repair). * Proposed Goals: The total number of goals presented (“proposed" as an option) to the user in the oversight conditions. * Sourced Goals: The number of proposed goals that are actually used (or “sourced") by the robot. When the agent can recognize that a goal is unviable, it does not attempt to use that goal, which explains why some non-oversight conditions have less than 35 goals. In addition, for TBP+0, only 21 goals could be sourced (meaning that the user had to provide descriptions for 14 of the objects in the kitchen). * Total prompt tokens: The total number of tokens sent to a LLM for the condition. Total tokens includes tokens sent for both Search Tree (including ST under AR) and Selection. * Total completion tokens: The total number of tokens received from the LLM for the condition. * Total tokens: The sum of total prompt tokens and completion tokens. * Total instructions: The total number of instructions provided to the robot for that condition. 
In the non-oversight (as well as the oversight) conditions, the user provides some initial instructions (tidy kitchen by clearing the table, etc.) as well as confirmation of the completion of tasks, resulting in a floor of 14 instructions. In the oversight conditions, total instructions includes any goal descriptions that the user provides ("the goal is that the steak knife is in the dishwasher") as well as confirming/disconfirming feedback (Agent: "Is the goal that the steak knife is in the cupboard?" User: "No."). * Total Yes/No Instructions: The number of yes/no feedback responses provided by the user in the oversight conditions. * Total user words: The total number of user words provided to the robot for that condition during the experiment. Using the examples under "Total Instructions," the goal description is 11 words and the yes/no question would be a single word for those instructions. Figure <ref> presents an expanded summary of key results from Figure <ref>. Task completion, total number of instructor words, and fraction of accepted yes/no responses are discussed in the main body of the paper. * Total number of instructions: Similar to total number of instructor words, total number of instructions decreases in the STARS oversight condition in comparison to template-based prompting. 65 interactions are needed. However, 51 of these interactions are proposed goals that require yes/no responses and 35 of these are accepted (68% acceptance rate, as in the lower right chart). Note that in the STARS+O condition, there was at least one acceptable goal generated by the LLM for each object in the data set. * Number of Retrieved Goals: This chart compares how many goal descriptions are retrieved from the LLM. In the TBP conditions, relatively few goal descriptions are produced (∼90, or about 2.6 descriptions/object). With the ST conditions, many more goals are retrieved (∼245) due to beam search. In the STAR+ conditions, about 365 goals are retrieved. The increase of about 120 goal retrievals represents the additional LLM retrievals being performed by beam search as part of Analysis and Repair. * Total Goals Presented to User: This chart illustrates the number of retrieved goals presented to the user (both charts share the same horizontal axis). In the TBP+O condition, 64 of the 89 retrieved goals are presented to the user (and only 21 are eventually used by the robot). In the STARS+O condition, slightly fewer goals are presented (51) from the total of 361 goals retrieved and at least one goal is used for each object (35 sourced goals). This result highlights that while the retrieval process is much broader for STARS than for TBP, the search and evaluation processes result in greater overall precision in identifying acceptable goal descriptions, requiring fewer user evaluations and a higher acceptance rate when a goal does need to be confirmed. Figure <ref> shows the trade-off between the costs (words and tokens) and performance (task completion) and highlights the relative contributions of the components of the STARS strategy. Search Tree has the largest impact on token cost, but the benefits in performance are not observed until adding Analysis and Repair (AR), which down-selects from the now larger space of responses. The figure also shows that STARS greatly reduces the human cost in words (while increasing token costs), and it reinforces that Selection doesn't have an appreciable impact on performance.
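For reference, the derived measures in these tables and charts can be computed directly from a per-condition log. The sketch below uses illustrative field names rather than the system's actual logging schema, and plugs in the STARS+O figures quoted above (40 of 40 assertions, 35 of 51 proposed goals accepted, 65 instructions, 127 user words).

```python
def summarize(log, total_assertions=40):
    """Derive the reported measures (40 assertions = 35 object placements
    plus 5 fixture states such as 'refrigerator door closed')."""
    proposed = log.get("proposed_goals", 0)
    return {
        "task_completion": log["assertions_achieved"] / total_assertions,
        "acceptance_rate": log["accepted_goals"] / proposed if proposed else None,
        "words_per_instruction": log["user_words"] / log["instructions"],
    }

stars_oversight = {"assertions_achieved": 40, "proposed_goals": 51,
                   "accepted_goals": 35, "instructions": 65, "user_words": 127}
print(summarize(stars_oversight))
# {'task_completion': 1.0, 'acceptance_rate': 0.686..., 'words_per_instruction': 1.95...}
```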
Figure <ref> shows for each condition, the number of objects (out of 35) for which the robot retrieved at least one situationally relevant response from the LLM. While only retrieving situationally responses for 15 objects in the baseline, STARS results in 100% of the objects having situationally relevant responses, largely due to the Search Tree and Analysis and Repair. This chart illustrates that the STARS strategy is successful at generating situationally relevant responses from the robot, even if those responses are not always selected first by the robot. Figure <ref> shows the token cost (from prompts and generation) for each experimental condition, showing the tokens used per object (left) and the tokens used for each prompt type. Some objects, particularly in the conditions with analyze and repair, result in many more tokens being used. The types of prompts (in order left to right) include the initial prompt, recursive (prompts used for the Search Tree beam search), repair (prompts using during Analysis and Repair), repair/recurse (prompt used for beam search during repair), and selection (prompt used for LLM Selection over candidates). Based on the condition, only certain types of prompts are used. Figure <ref> shows the categorization of LLM responses according to viability, reasonableness, and situational relevance for every experimental condition. As outlined in the paper, the distribution of responses in the ST-AR-S conditions are quite similar, in contrast to the baseline conditions (TBP and TBP+O) which reveal a different pattern. The baseline conditions show more situationally relevant responses by percentage, but many fewer responses are retrieved in these conditions. STARS results in an increase in the total number of situationally relevant responses retrieved, at the cost of generating more unviable responses (by percentage) overall. § EXPLORATION OF VARIABILITY As mentioned in the main body of the paper, there is little variation from one run to another of the same condition. This section of the appendix illustrates what variability there is. Because running the experiment is somewhat expensive in time (especially in the oversight conditions) and not trivially inexpensive in the financial costs of LLM use, given the limited variability of the consequent results, we ran conditions for the primary experiment only one time. Table <ref> shows the detailed summary of measures for 10 runs of the STARS condition (no oversight). Two additional lines show the mean and standard deviation for data that varies in this condition. The table follows the format of Table <ref> and the definition of the individual measures are summarized in that table. Because STARS is not an oversight condition, the total number of instructions and total words do not change from run to run. Similarly, no goals are proposed to the user and thus there are no yes/no responses to those proposed goals. Some of these results are also illustrated graphically in Figure <ref>. As these data show, there is little change in overall results from run to run. The Task Completion Rate varies from 75% to 80%, or from 30 to 32 of the 40 state assertions defined for the final desired state. There are even smaller variations (in a relative sense) in the retrieval and token measures. In all 10 conditions, STARS produces a viable goal that is sourced by the robot to execute. 
While the lack of variability may appear unexpected, it is actually a consequence of the LLM's embedded token probabilities (which are fixed once the LLM is trained) and the experimental design, in which an object's gross location (“plate on the table" rather than a specific location on the table) is used for prompt generation. For any given object that the robot perceives, it will generate an instantiated prompt from the goal-description template using the gross location (“location: table").[In other work, we have explored the effects of the number of examples for few-shot, in-context learning with template-based prompting, as well as analysis of how well particular prompt examples contribute to the four main requirements. However, for this experiment, we used a single, fixed example in all prompt templates, which means that for a given object in a gross location, the prompt will be exactly the same for that object.]
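As a quick illustration of the scale of this variability: with 40 goal assertions, per-run task completion can only take values such as 30/40 (75%), 31/40 (77.5%), or 32/40 (80%). The individual run values below are assumed (the paper reports only the range and variance), chosen so that the summary statistics reproduce the reported variance of 3.75.

```python
# Hypothetical per-run completion rates (%); only aggregate statistics are
# reported in the paper, so these ten values are illustrative.
runs = [75.0, 75.0, 75.0, 77.5, 77.5, 77.5, 77.5, 80.0, 80.0, 80.0]
mean = sum(runs) / len(runs)
variance = sum((r - mean) ** 2 for r in runs) / len(runs)
print(f"mean = {mean:.2f}%, variance = {variance:.2f}")  # mean = 77.50%, variance = 3.75
```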
3rd Place Solution for PVUW2023 VSS Track: A Large Model for Semantic Segmentation on VSPW Shijie Chang^1[2]   Zeqi Hao^1[2]   Ben Kang^1   Xiaoqi Zhao^1   Jiawen Zhu^1   Zhenyu Chen^1   Lihe Zhang^1   Lu Zhang^1   Huchuan Lu^1,2 ^1School of Information and Communication Engineering, Dalian University of Technology ^2Peng Cheng Laboratory [2] equal contribution {csj, hzq, kangben, zxq, jiawen, dlutczy}@mail.dlut.edu.cn, {zhanglihe, zhangluu, lhchuan}@dlut.edu.cn In this paper, we introduce our 3rd place solution for the PVUW2023 VSS track. Semantic segmentation is a fundamental task in computer vision with numerous real-world applications. We have explored various image-level visual backbones and segmentation heads to tackle the problem of video semantic segmentation. Through our experimentation, we find that InternImage-H as the backbone and Mask2former as the segmentation head achieve the best performance. In addition, we explore two post-processing methods: CascadePSP and Segment Anything Model (SAM). Ultimately, our approach obtains 62.60% and 64.84% mIoU on the VSPW test set 1 and final test set, respectively, securing the third position in the PVUW2023 VSS track. § INTRODUCTION Pixel-level scene understanding is one of the fundamental problems in computer vision, which aims at recognizing object classes, masks, and semantics of each pixel in a given image. The 2nd Pixel-level Video Understanding in the Wild challenge (PVUW2023) consists of two challenge tracks, VSS and VPS, which respectively use the VSPW <cit.> and VIPSeg <cit.> datasets as the test sets. We select the VSS track, which aims to perform pixel-level predictions of pre-defined semantic categories on the VSPW dataset. In recent years, unified macromodels, also known as large-scale models, have emerged as powerful tools in the field of computer vision. These models have demonstrated exceptional performance in fundamental tasks such as image classification, object detection, and image segmentation. The key advantage of unified macromodels lies in their powerful feature modeling capability, allowing them to learn rich and abstract representations from data. By leveraging their large number of parameters and complex network architectures, unified macromodels can capture intricate patterns and high-level semantic information from raw input data. The learned representations can be generalized effectively to downstream tasks, facilitating knowledge transfer and reducing the need for task-specific model architectures. Moreover, deep learning heavily relies on the availability of specialized data for training. The abundance of data plays a vital role in enabling deep models to learn complex representations and generalize well to new examples. Based on the advantages of large models and diverse professional data, we have developed a novel model using InternImage-H <cit.> as the encoder and Mask2former <cit.> as the decoder. This architecture allows us to take advantage of the powerful feature modeling capabilities of InternImage-H and the semantic capture abilities of Mask2former.
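To make the encoder-decoder pairing concrete, the sketch below mirrors the standard Mask2former-style semantic inference: per-query class probabilities and mask predictions from the transformer decoder are combined into a dense per-pixel prediction. The tensor shapes (100 queries, 124 VSPW classes, a 60x60 map) are illustrative, and the sketch omits the backbone and pixel decoder that produce these inputs.

```python
import torch

def masks_to_semantic_map(mask_cls, mask_pred):
    """Schematic Mask2former-style semantic inference.

    mask_cls:  (Q, C+1) class logits per query (last class = "no object")
    mask_pred: (Q, H, W) mask logits per query
    returns:   (H, W) predicted class index per pixel
    """
    cls_prob = mask_cls.softmax(dim=-1)[:, :-1]        # (Q, C), drop no-object
    mask_prob = mask_pred.sigmoid()                    # (Q, H, W)
    sem_scores = torch.einsum("qc,qhw->chw", cls_prob, mask_prob)
    return sem_scores.argmax(dim=0)

# Toy example: 100 queries, 124 VSPW classes, a 60x60 prediction map.
pred = masks_to_semantic_map(torch.randn(100, 125), torch.randn(100, 60, 60))
print(pred.shape)  # torch.Size([60, 60])
```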
To further enhance the feature modeling and semantic understanding of our model, we have employed a pre-training strategy using the COCO-Stuff <cit.> dataset. Finally, our method obtained 62.60% and 64.84% mIoU on the VSPW test set 1 and final test set, respectively, and achieved third place in the PVUW2023 VSS track. § METHOD In this section, we first present the pipeline of our method. Then, we introduce each component that we have attempted in our approach. Finally, we describe the training and inference process of our method. §.§ Pipeline As illustrated in Fig. <ref>, we employ an image-level approach to address VSS, which consists of an encoder, a decoder, test-time augmentation, and a post-processing stage. Given an input image, our model first utilizes an encoder to extract feature maps. The extracted feature maps are then fed into the decoder to obtain the prediction. Finally, a post-processing method is employed to refine the prediction. We employ Swin Transformer <cit.>, ConvNeXt <cit.>, and InternImage <cit.> as backbones, and UperNet <cit.> and Mask2former <cit.> as segmentation heads. As for post-processing methods, we attempt CascadePSP <cit.> and SAM <cit.>. §.§ Encoder Swin Transformer. Swin Transformer <cit.> is a general-purpose backbone for computer vision. Swin Transformer modifies the global self-attention mechanism of Vision Transformer to a shifted-window self-attention, reducing the computational complexity of self-attention. Swin Transformer's impressive performance in downstream tasks like object detection and semantic segmentation positions it as one of the prevailing backbones. Swin-L achieves 53.5% mIoU on ADE20K <cit.>. Due to its high performance, we employ it as a backbone. ConvNeXt. ConvNeXt <cit.> is an open-source model based on convolutional neural networks (CNNs). It conducts extensive ablation studies on various CNN techniques, while also analyzing and importing techniques from vision transformer models like Swin Transformer. By integrating the strengths of CNNs and the micro design of Swin Transformer, it further explores the performance of CNNs and achieves exceptional results. Unlike Swin Transformer, ConvNeXt is based on CNNs, making it a complementary backbone when compared to Swin Transformer. Therefore, we also utilize ConvNeXt as a backbone to complement Swin Transformer. InternImage. InternImage <cit.> is a new large-scale CNN-based foundation model which effectively scales to over 1 billion parameters and 400 million training images and achieves comparable or even better performance than state-of-the-art ViTs. Different from recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as the core operator, so that it not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also has adaptive spatial aggregation conditioned on input and task information. As a result, InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns with large-scale parameters from massive data like ViTs. With its powerful object representation capabilities, InternImage has demonstrated impressive performance on various representative computer vision tasks. For example, InternImage-H achieves 89.6% top-1 accuracy on ImageNet <cit.>, and achieves 62.9% mIoU and 65.4% mAP on the challenging downstream benchmarks ADE20K <cit.> and COCO <cit.>, respectively.
Besides, it is worth mentioning that InternImage-H/G is one of the few open-source large models available. We can not only utilize the model structure, but also download pre-trained weights to fine-tune the model on the VSPW dataset. Unfortunately, the best-performing model for semantic segmentation, composed of InternImage-H and Mask2former, is not open-source. As a result, we have to implement it ourselves. §.§ Decoder UperNet. UperNet <cit.> is a multi-task model based on the Feature Pyramid Network (FPN), which is widely employed in semantic segmentation. We selected UperNet as a segmentation head due to its simplicity and effectiveness, as well as the fact that most backbones are typically used in conjunction with UperNet. In addition, we can easily obtain pre-trained weights of UperNet on other segmentation datasets such as ADE20K and Cityscapes <cit.>. Mask2former. Mask2former <cit.> is a powerful general-purpose segmentation model with excellent performance on semantic, instance, and panoptic segmentation. It uses learnable initialization queries to represent objects, including both thing and stuff classes, and updates the queries via the transformer's self-attention and cross-attention. We use the pixel decoder and transformer decoder of Mask2former as our segmentation head. The pixel decoder performs full semantic capture on the feature maps output by the backbone network to construct multi-scale features and a high-resolution mask feature with rich semantic information. In the transformer decoder, the learnable queries interact with the multi-scale features through cross-attention and further enrich the semantic information through self-attention, finally updating the learnable queries. Finally, the category output is obtained by classifying the learnable queries, and the mask output is obtained by their dot product with the mask feature. We choose Mask2former because it has been proven to achieve superior semantic segmentation performance compared to UperNet. However, the pre-trained weights of Mask2former with large-scale backbones are not open-source. §.§ Post-processing CascadePSP. CascadePSP <cit.> is a general cascade segmentation refinement model that can refine any given input segmentation prediction. It can boost the performance of segmentation models without finetuning. Although trained on low-resolution datasets, CascadePSP can be used to produce high-quality and very high-resolution segmentation results. CascadePSP splits the segmentation prediction by class into binary predictions, refines each binary prediction individually, and then reassembles them to obtain refined prediction results. The success of CascadePSP on semantic segmentation datasets led us to choose it as our post-processing method. SAM. Segment Anything Model (SAM) <cit.> produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. However, SAM lacks the ability to predict semantic categories for each mask. Semantic Segment Anything (SSA) <cit.> proposes a pipeline on top of SAM to predict a semantic category for each mask. SSA can boost performance on close-set semantic segmentation based on any segmentation model without finetuning. Due to the remarkable segmentation capabilities of SAM, we choose to use SAM-based SSA as another post-processing method.
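As a rough illustration of how class-agnostic masks can refine a semantic prediction (in the spirit of SSA's close-set mode), the sketch below relabels each mask region with the majority class that the base model predicts inside it, snapping the prediction to the mask boundaries. This is a simplified stand-in under that assumption, not the official SSA or CascadePSP implementation.

```python
import numpy as np

def refine_with_masks(sem_pred, masks):
    """Majority-vote refinement: each class-agnostic mask (e.g., from SAM)
    is assigned the most frequent class the base model predicts inside it.

    sem_pred: (H, W) integer array of predicted class indices
    masks:    iterable of (H, W) boolean arrays
    """
    refined = sem_pred.copy()
    for m in masks:
        if m.any():
            labels, counts = np.unique(sem_pred[m], return_counts=True)
            refined[m] = labels[counts.argmax()]
    return refined

# Toy example: a stray pixel inside the left-half mask is snapped to class 0.
sem = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]])
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(refine_with_masks(sem, [mask]))
```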
§ EXPERIMENTS In this section, we first introduce the datasets and implementation details. Then, we provide the experimental results to validate the effectiveness of the backbones and segmentation heads. Finally, we report the final results on the final test set. §.§ Datasets Used ADE20K. ADE20K <cit.> is a comprehensive dataset that provides rich scene annotations, including object instances, object parts, and detailed semantic segmentation labels. The SceneParse150 benchmark, which is constructed from the top 150 object categories ranked by their total pixel ratios within the dataset, has emerged as a significant benchmark for image semantic segmentation. In our experiments, we fine-tune on the VSPW dataset using weights pretrained on ADE20K. COCO-Stuff. COCO-Stuff <cit.> augments the popular COCO <cit.> with pixel-wise annotations for a rich and diverse set of 91 stuff classes. It contains 164k images and 172 classes: 80 thing classes, 91 stuff classes, and 1 unlabeled class. The 80 thing classes are the same as in COCO and the 91 stuff classes are curated by an expert annotator. Since COCO is about complex, yet natural scenes containing substantial areas of stuff, COCO-Stuff enables the exploration of rich relations between things and stuff. Therefore, COCO-Stuff offers a valuable stepping stone towards complete scene understanding, and we pretrain our model on the COCO-Stuff dataset to enhance its scene understanding capabilities. VSPW. The Video Scene Parsing in the Wild (VSPW) <cit.> dataset is the first multi-scene large-scale video semantic segmentation dataset, covering more than 200 video scenes. The dataset is densely annotated at a frame rate of 15 fps, with 3536 videos and 251633 semantically segmented frames covering 124 semantic categories. Among these, 2806 videos with 198244 frames are used for training, while 343 videos with 24502 frames are used for validation and 387 videos with 28887 frames for testing. Over 96% of the video data in this dataset has a resolution between 720p and 4K, but the dataset is also provided at 480p resolution, which is what we use for training and testing. §.§ Implementation details Training strategy. We use MMSegmentation <cit.>, an open-source semantic segmentation framework, as the basic framework to build the model. We also employ the implementation of Mask2former from <cit.>. For the encoder experiments, we train the combinations of Swin-L, ConvNeXt-XL, InternImage-L, and InternImage-H with UperNet. Except for InternImage-H, we initialize the models with ADE20K pre-training weights. For InternImage-H, we initialize it with both Joint 427M and ADE20K pre-training weights. Additionally, we train Mask2former combined with InternImage-H pre-trained on Joint 427M and COCO-Stuff, respectively. Regardless of which pre-training weights we use, we adopt the default training settings of the different backbones for training on the semantic segmentation dataset. Due to limited computational resources, we adjust the training iterations and batch size. In addition, since the resolution of the VSPW dataset is 480*853, we use a crop size of 480*853 during training instead of a square crop. Code is available at <https://github.com/DUT-CSJ/PVUW2023-VSS-3rd>. Inference.
Inference. We use test-time augmentation (TTA), including random flipping and multi-scale inference, together with model ensembling during inference. The scales for multi-scale inference are (0.5, 0.75, 1.0, 1.25, 1.5, 1.75). In the final stage of the challenge, we use CascadePSP and SAM-based SSA as post-processing methods. §.§ Experiment results Results of different backbones with UperNet. Tab. <ref> shows the results of different backbones with UperNet without TTA. We can observe that InternImage-H outperforms Swin-L, ConvNeXt-XL, and InternImage-XL significantly in terms of mIoU. Among them, InternImage-H with UperNet has a parameter count of 1.12B compared to 368M for InternImage-XL with UperNet, indicating that the increase in parameters enhances the model's performance. Results of different decoders and pre-training strategies. Tab. <ref> shows the results of different decoders and pre-training strategies with InternImage-H as the backbone, without TTA. Despite not being pre-trained on a semantic segmentation dataset, the Mask2former decoder performs better than the UperNet decoder. We also pre-train the Mask2former model on COCO-Stuff164K, which results in a performance improvement of 2.62 on the test set. Tab. <ref> and Tab. <ref> show that a larger backbone, a more advanced decoder, and an increased amount of pre-training data can all enhance model performance. Results of TTA. Tab. <ref> shows the results of TTA. We employ multi-scale flip inference and model ensembling as TTA approaches. The default scales for multi-scale inference are (0.5, 0.75, 1.0, 1.25, 1.5, 1.75), and the default weight for each model in the model fusion is 1. It can be observed that both multi-scale flip inference and model ensembling contribute to performance improvements. However, for the best-performing single model, InternImage-H with Mask2former pre-trained on COCO-Stuff164K, the gains from multi-scale flipped inference are minimal, and ensembling it with other models actually degrades performance. Therefore, we ultimately submit the predictions from the best-performing single model with multi-scale and flip inference. Results of post-processing methods. Tab. <ref> shows the results of employing different post-processing methods. Due to time constraints, we are unable to apply post-processing to all well-performing prediction results. We use different prediction results for post-processing, where the initial predictions are obtained from the model that uses InternImage-H as the backbone and the Mask2former decoder pre-trained on COCO-Stuff164K. CascadePSP uses the single-scale prediction as its initial prediction, while SSA uses the multi-scale flipped prediction at scales of (0.5, 0.75, 1.0). These initial predictions are not the best-performing ones on the final test set. Therefore, although post-processing yields some performance improvements, it does not surpass the best-performing results on the final test set. § CONCLUSION Large models have demonstrated excellent performance across various computer vision tasks. During the course of the competition, we explore the performance of different backbones, decoders, and pre-training strategies in semantic segmentation. Experiments indicate that the model with InternImage-H as the backbone, Mask2former as the decoder, and pre-training on COCO-Stuff164K achieves the best results. This shows that a larger backbone, a more advanced decoder, and an increased amount of pre-training data can all enhance model performance.
We also explore the use of CascadePSP and SAM as post-processing techniques; although they do not achieve the best results on the final test set, they are shown to enhance performance. Finally, our approach achieves 64.84% mIoU on the VSPW final test set under multi-scale flipped inference and secures the third position in the PVUW2023 VSS track.
http://arxiv.org/abs/2306.08786v1
20230614234448
Deterministic and Work-Efficient Parallel Batch-Dynamic Trees in Low Span
[ "Daniel Anderson", "Guy E. Blelloch" ]
cs.DS
[ "cs.DS", "cs.DC" ]
Deterministic and Work-Efficient Parallel Batch-Dynamic Trees in Low Span =================================================== Dynamic trees are a well-studied and fundamental building block of dynamic graph algorithms dating back to the seminal work of Sleator and Tarjan [STOC'81, (1981), pp. 114-122]. The problem is to maintain a tree subject to online edge insertions and deletions while answering queries about the tree, such as the heaviest weight on a path, etc. In the parallel batch-dynamic setting, the goal is to process batches of edge updates work efficiently in low (polylog n) span. Two work-efficient algorithms are known, batch-parallel Euler Tour Trees by Tseng et al. [ALENEX'19, (2019), pp. 92–106] and parallel Rake-Compress (RC) Trees by Acar et al. [ESA'20, (2020), pp. 2:1–2:23]. Both however are randomized and work efficient in expectation. Several downstream results that use these data structures (and indeed to the best of our knowledge, all known work-efficient parallel batch-dynamic graph algorithms) are therefore also randomized. In this work, we give the first deterministic work-efficient solution to the problem. Our algorithm maintains a dynamic parallel tree contraction subject to batches of k edge updates deterministically in worst-case O(k log(1 + n/k)) work and O(log n log^(c) k) span for any constant c. This allows us to implement parallel batch-dynamic RC-Trees with worst-case O(k log(1 + n/k)) work updates and queries deterministically. The techniques that we use to obtain the given span bound can also be applied to the state-of-the-art randomized variant of the algorithm to improve its span from O(log n log^* n) to O(log n). § INTRODUCTION The dynamic trees problem dates back to the seminal work of Sleator and Tarjan <cit.> on Link-Cut trees. The problem is to maintain a forest of trees subject to the insertion and deletion of edges, while answering queries about the forest. Examples of supported queries include connectivity (is there a path from u to v, i.e., are u and v in the same tree), the weight of all vertices in a specific subtree, and the weight of the heaviest (or lightest) edge on a path. The latter is a key ingredient in the design of efficient algorithms for the maximum flow problem <cit.>, which was the motivation for their invention. Dynamic trees are also ingredients in algorithms for dynamic graph connectivity <cit.>, dynamic minimum spanning trees <cit.>, and minimum cuts <cit.>. There are a number of efficient (O(log n) time per operation) dynamic tree algorithms, including Sleator and Tarjan's Link-Cut Tree <cit.>, Henzinger and King's Euler-Tour Trees <cit.>, Frederickson's Topology Trees <cit.>, Holm and de Lichtenberg's Top Trees <cit.>, and Acar et al.'s Rake-Compress Trees <cit.>. Most of these algorithms are sequential and handle single edge updates at a time. Two exceptions are Tseng et al.'s Batch-Parallel Euler-Tour Trees <cit.> and Acar et al.'s Parallel Batch-Dynamic RC-Trees <cit.>. These algorithms implement batch-dynamic updates, which take a set of k edges to insert or delete with the goal of doing so in parallel. Both of these algorithms achieve work-efficient O(k log(1+n/k)) work, which matches the sequential algorithms (O(log n) work) for low values of k, and is optimal (O(n) work) for large values of k. Both of these algorithms, however, are randomized. Other graph problems have also been studied in the parallel batch-dynamic model, such as connectivity <cit.>, minimum spanning trees <cit.>, and approximate k-core decomposition <cit.>.
However, to the best of our knowledge, all efficient parallel batch-dynamic graph algorithms are randomized. Indeed, avoiding randomization seems difficult even for some classic static problems. Finding a spanning forest, for instance, has a simple O(m)-time sequential algorithm, and an O(m) work, O(log n) span randomized parallel algorithm has been known for twenty years <cit.>, but no deterministic equivalent has been discovered. The best deterministic algorithm requires an additional α(n,m) factor of work <cit.>. Our results We design a work-efficient algorithm for the batch-dynamic trees problem that is deterministic, runs in low (polylog n) span, and does not use concurrent writes. There is a deterministic parallel batch-dynamic algorithm that maintains a balanced Rake-Compress Tree (RC-Tree) of a bounded-degree forest subject to batches of k edge updates (insertions, deletions, or both) in O(k log(1+n/k)) work and O(log(n) log^(c) k) span[The notation log^(c) n refers to the repeated/nested logarithm function, i.e., log^(1) n = log n and log^(c) n = log( log^(c-1) n ). E.g., log^(2) n = loglog n.] for any constant c. The resulting RC-Tree is amenable to all existing query algorithms for parallel RC-Trees and hence it can solve batch connectivity queries, batch subtree queries, batch path queries, batch LCA queries, and non-batched diameters, centers and medians <cit.>. There are deterministic parallel batch-dynamic algorithms for
* dynamic forest connectivity,
* subtree sums of an associative commutative operation over vertices or edges,
* path sums of an invertible associative commutative operation over vertices or edges,
* path minimum- or maximum-weight edges, and
* lowest common ancestors,
running in O(k log(1+n/k)) work and O(log n) span for batches of k queries. We start by developing a simpler algorithm that is work efficient but runs in O(log n loglog k) span, then, in Section <ref>, we discuss how to optimize the span. As a byproduct of our span optimization, we also optimize the randomized variant of the algorithm and obtain the following improved result, which improves over the O(log n log^* n) span of Acar et al. <cit.>. There is a randomized parallel batch-dynamic algorithm that maintains a balanced Rake-Compress Tree (RC-Tree) of a bounded-degree forest subject to updates of k edges in O(k log(1+n/k)) work and O(log(n)) span. Randomized parallel batch-dynamic trees have already been used as key ingredients in several other parallel batch-dynamic algorithms. As an additional result, existing algorithms for fully dynamic connectivity <cit.> and incremental minimum spanning trees <cit.> can be derandomized by using our deterministic RC-Tree, assuming that the underlying graph is ternarized (transformed to constant degree <cit.>). Details are given in the appendix. There is a deterministic parallel batch-dynamic algorithm which, given batches of k edge insertions, deletions, and connectivity queries on a bounded-degree graph, processes insertions and deletions in O(k log n log(1 + n/Δ) α(n,m)) amortized work and O(polylog n) span, and answers queries in O(k log(1 + n/k)) work and O(polylog n) span, where Δ is the average batch size of all deletion operations. There is a deterministic parallel batch-incremental algorithm which maintains the minimum spanning forest of a weighted undirected bounded-degree graph given batches of k edge insertions in O(k log (1 + n/k) + k loglog n) work and O(polylog n) span.
Overview Previous implementations of RC-Trees are all based on applying self-adjusting computation to randomized parallel tree contraction <cit.>, i.e., a static tree contraction algorithm is implemented in a framework that automatically tracks changes to the input values, and selectively recomputes procedures that depend on changed data. This results in the RC-Tree that would have been obtained if running the static algorithm from scratch on the updated data. In our case, we instead describe a direct update algorithm that is not based on self-adjusting computation. We describe a variant of tree contraction that deterministically contracts a maximal independent set (MIS) of degree one and two vertices each round. When an update is made to the forest, we identify the set of affected vertices (this part is similar to change propagation) and then greedily update the tree contraction by computing an MIS of affect vertices and updating the contraction accordingly. The key insight is in correctly establishing the criteria for vertices being affected such that the update is correct, and bounding the number of such vertices so that it is efficient. § PRELIMINARIES §.§ Model of computation Parallelism We analyze algorithms in the shared-memory work-span model with fork-join parallelism. A procedure can fork a set of procedures to run in parallel and wait for forked procedures to complete with a join. The work of the algorithm is the total number of instructions performed, and the span (also called depth or parallel time) is the length of the longest chain of sequentially dependent instructions. An alternative model is the classic PRAM model, which consists of p processors that execute in lock step with access to a shared memory. An algorithm that runs in t steps on a p-processor PRAM performs O(pt) work in O(t) span. Our algorithms are naturally described as fork-join programs, but we will use many classic PRAM algorithms as subroutines. Concurrency A parallel computation may be endowed with the ability to read/write the shared memory concurrently. In the classic PRAM model, algorithms are classified into one of three categories: EREW (Exclusive-Read Exclusive-Write, i.e., no concurrency allowed at all), CREW (Concurrent-Read Exclusive-Write, i.e., concurrent reads are allowed but concurrent writes are prohibited), and CRCW (Concurrent-Read Concurrent-Write, i.e., concurrent writes are permitted). The CRCW model may be further subdivided by the behaviour of concurrent writes. The Common CRCW PRAM permits concurrent writes but requires that all concurrent writes to the same location write the same value, else the computation is invalid. The Arbitrary CRCW PRAM permits concurrent writes of different values and specifies that an arbitrary processor's write succeeds, but the algorithm may make no assumption about which processor succeeds. Lastly, the Priority CRCW PRAM assigns fixed unique priorities to each processor, and specifies that the highest priority processor's write succeeds. §.§ Colorings and maximal independent sets Parallel algorithms for graph coloring and maximal independent sets are well studied <cit.>. Our algorithm uses a subroutine that finds a maximal independent set of a collection of chains, i.e., a set of vertices of degree one or two. Goldberg and Plotkin <cit.> give an algorithm that finds an O(log^(c)(n))-coloring of a constant-degree graph in O(c· n) work and O(c) span in the EREW model. This gives a constant coloring in O(n log^* n) work and O(log^* n) span. 
Given a c-coloring, one can easily obtain a maximal independent set as follows. Sequentially, for each color, look at each vertex of that color in parallel. If a vertex has the current color and is not adjacent to a vertex already selected for the independent set, then select it. This takes O(c · n) work and O(c) span, hence a constant coloring yields an O(n) work and constant span algorithm. For any choice of coloring, the above algorithms do not yield a work-efficient algorithm for maximal independent set, since it is not work efficient to find a constant coloring, and not work efficient to convert a non-constant coloring into an independent set. To make it work efficient, we can borrow a trick from Cole and Vishkin. We first produce a log^(c) n coloring in O(1) time, then bucket sort the vertices by color, allowing us to perform the coloring-to-independent-set conversion work efficiently. Cole and Vishkin <cit.> show that the bucket sorting can be done efficiently in the EREW model. This results in an O(n) work, O(log^(c) n) span algorithm for maximal independent set in a constant-degree graph. There exists a deterministic algorithm that finds an MIS in a constant-degree graph in O(n) work and O(log^(c) n) span for any constant c in the EREW model. §.§ Filtering and compaction A filter is a fundamental primitive used in many parallel algorithms. Given an input array A of length n and a predicate p, a filter returns a new array consisting of the elements of A that satisfy the predicate. Filtering can easily be implemented in linear work and O(log n) span. This span however is a bottleneck for many applications, so approximate compaction was developed as an alternative. Approximate compaction takes the same input as a filter, such that if there are m elements that satisfy the predicate, the output is an array of size O(m) containing the elements that satisfy the predicate, and some blank elements. Goldberg and Zwick <cit.> give a deterministic approximate compaction algorithm that is work efficient and runs in O(loglog n) span in the Common CRCW model. There exists a deterministic algorithm that performs approximate compaction on an array of n elements in O(n) work and O(loglog n) span in the Common CRCW model. §.§ Parallel tree contraction Tree contraction is a procedure for computing functions over trees in parallel in low span <cit.>. It involves repeatedly applying rake and compress operations to the tree while aggregating data specific to the problem. The rake operation removes a leaf from the tree and aggregates its data with its parent. The compress operation replaces a vertex of degree two and its two adjacent edges with a single edge joining its neighbors, aggregating any data associated with the vertex and its two adjacent edges. Rake and compress operations can be applied in parallel as long as they are applied to an independent set of vertices. Miller and Reif <cit.> describe a linear work and O(log n) span randomized algorithm that performs a set of rounds, each round raking every leaf and compressing an independent set of degree-two vertices chosen by flipping coins. They show that it takes O(log n) rounds to contract any tree to a singleton with high probability. They also describe a deterministic algorithm, but it is not work efficient. Later, Gazit, Miller, and Teng <cit.> improve it to obtain a work-efficient deterministic algorithm with O(log n) span.
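To make the rake and compress operations concrete, the following sequential sketch performs one round of contraction on an adjacency-list forest, contracting a greedily chosen maximal independent set of degree-one and degree-two vertices. It is an illustrative sketch only: it is sequential, omits the cluster and augmented-value bookkeeping, and makes no attempt at the work/span bounds discussed above.

def contract_one_round(adj):
    """One round of rake/compress on a forest.

    adj : dict mapping vertex -> set of neighbours (bounded degree).
    Returns the adjacency structure of the contracted forest. A greedy
    sequential MIS stands in for the parallel MIS subroutine.
    """
    eligible = [v for v in adj if 1 <= len(adj[v]) <= 2]
    selected, blocked = set(), set()
    for v in eligible:                      # greedy maximal independent set
        if v not in blocked:
            selected.add(v)
            blocked.add(v)
            blocked.update(adj[v])

    new_adj = {v: set(nbrs) for v, nbrs in adj.items() if v not in selected}
    for v in selected:
        nbrs = list(adj[v])
        if len(nbrs) == 1:                  # rake: remove the leaf v
            new_adj[nbrs[0]].discard(v)
        else:                               # compress: splice v out
            u, w = nbrs
            new_adj[u].discard(v)
            new_adj[w].discard(v)
            new_adj[u].add(w)
            new_adj[w].add(u)
    return new_adj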
The tree contraction algorithms above are defined for constant-degree trees, so non-constant-degree trees are handled by converting them into bounded-degree equivalents, e.g., by ternarization <cit.>. §.§ Rake-Compress Trees (RC-Trees) The process of tree contraction can alternatively be viewed as a clustering of the underlying tree. A cluster is a connected subset of edges and vertices of the tree. The base clusters are the individual edges and vertices of the tree. A cluster may contain an edge without containing the endpoints of that edge. Such an endpoint is called a boundary vertex of the cluster. Every cluster has at most two boundary vertices. To form a recursive clustering from a tree contraction, we begin with the base clusters and the uncontracted tree. On each round, for each vertex v that contracts via rake or compress (which, remember, form an independent set), we identify the set of clusters that share v as a common boundary vertex. These clusters are merged into a single cluster consisting of the union of their contents. We call v the representative vertex of the resulting cluster. The boundary vertices of the resulting cluster will be the union of the boundary vertices of the constituents, minus v. Clusters originating from rake operations have one boundary vertex and are called unary clusters, and clusters originating from compresses have two boundary vertices and are called binary clusters. Since each vertex contracts exactly once, there is a one-to-one mapping between representative vertices of the original tree and non-base clusters. An RC-Tree encodes this recursive clustering. The leaves of the RC-Tree are the base edge clusters of the tree, i.e., the original edges (note that the base cluster for vertex v is always a direct child of the cluster for which v is the representative, so omitting it from the RC-Tree loses no information). Internal nodes of the RC-Tree are clusters formed by tree contraction, such that the children of a node are the clusters that merged to form it. The root of the RC-Tree is a cluster representing the entire tree. Queries are facilitated by storing augmented data on each cluster, which is aggregated from the child clusters at the time of its creation. Since tree contraction removes a constant fraction of the vertices at each round, the resulting RC-Tree is balanced, regardless of how balanced or imbalanced the original tree was. This allows single queries to run in O(log n) time, and batch queries to run in O(k log(1+n/k)) work and O(log n) span. An example of a tree, a recursive clustering induced by a tree contraction, and the corresponding RC-Tree are depicted in Figure <ref>. § ALGORITHMS §.§ The data structure Our algorithm maintains a contraction data structure which serves to record the process of tree contraction beginning from the input forest F as it contracts to a forest of singletons. The contraction data structure also contains the RC-Tree as a byproduct. For each round in which a vertex is live, the contraction data structure stores an adjacency list for that vertex. Since the forests have bounded degree t, each adjacency list is exactly t slots large. Each entry in a vertex v's adjacency list is one of four possible kinds of values:
* Empty, representing no edge.
* A pointer to an edge of F adjacent to v.
* A pointer to a binary cluster for which v is one of the boundary vertices. This represents an edge between v and the other boundary vertex in the contracted tree.
* A pointer to a unary cluster for which v is the boundary vertex.
This does not represent any edge in the contracted tree, but is used to propagate augmented data from the child to the parent. At round 0 (before any contraction), the adjacency list simply stores pointers to the edges adjacent to each vertex. At later rounds, in a partially contracted tree, some of the edges are not original edges of F, but are the result of a compress operation and represent a binary cluster of F (Case 3), which may contain augmented data (e.g., the sum of the weights in the cluster, the maximum weight edge on the cluster path, etc, depending on the application). Additionally, vertices that rake accumulate augmented data inside their resulting unary cluster that needs to be aggregated when their parent cluster is created (Case 4). Acar et al. <cit.> describe many examples of the kinds of augmented data that are used in common applications. Clusters contain pointers to their child clusters alongside any augmented data. Each composite cluster corresponds uniquely to the vertex that contracted to form it, so counting them plus the base edge clusters, the RC-Tree contains exactly n + m clusters. If the user wishes to store augmented data on the vertices, this can be stored on the unique composite cluster for which that vertex is the representative. §.§ The static algorithm We build an RC-Tree deterministically using a variant of Miller and Reif's tree contraction algorithm. Instead of contracting all degree-one vertices (leaves) and an independent set of degree two vertices, we instead contract any maximal independent set of degree one and two vertices, i.e., leaves are not all required to contract. The reason for this will become clear during the analysis of the update algorithm, but essentially, not forcing leaves to contract reduces the number of vertices that need to be reconsidered during an update, since a vertex that was previously not a leaf becoming a leaf would force it to contract, which would force its neighbor to not contract, which may force its other neighbor to contract to maintain maximality. Such a chain reaction is undesirable, and our variant avoids it. We say that a tree contraction is maximal at some round if the set of vertices that contract form a maximal independent set of degree one and two vertices. A tree contraction is maximal if it is maximal at every round. Let us denote by F_0, the initial forest, then by F_i for i ≥ 1, the forest obtained by applying one round of maximal tree contraction to F_i-1. To obtain a maximal tree contraction of F_i, consider in parallel every vertex of degree one and two. These are the eligible vertices. We find a maximal independent set of eligible vertices by finding an O(loglog n) coloring of the vertices using the algorithm of Goldberg and Plotkin <cit.>, then bucket sorting the vertices by colors and selecting a maximal independent set similar to the algorithm of Cole and Vishkin <cit.>. This takes O(|F_i|) work and O(loglog n) span. To write down the vertices of F_i+1, we can apply approximate compaction to the vertex set of F_i, filtering those which were selected to contract. This also takes O(|F_i|) work and O(loglog n) span <cit.>. As is standard for parallel tree contraction implementations, each vertex writes into its neighbors' adjacency list for the next round. For vertices that do not contract, they can simply copy their corresponding entry in their neighbors' adjacency list to the next round. 
Vertices that do contract write the corresponding cluster pointers into their neighbors' adjacency lists. For example, if a vertex v with neighbors u and w compresses, it writes a pointer to the binary cluster formed by v (whose boundary vertices are u and w) into the adjacency list slots of u and w that currently store the edge to v. The tree has constant degree, so identifying the slot takes constant time and suffers no issues of concurrency. To build the RC-Tree clusters, it suffices to observe that when a vertex v contracts, the contents of its adjacency list are precisely the child clusters of the resulting cluster. Hence in constant time we can build the cluster by aggregating the augmented values of the children and creating a corresponding cluster. If at a round, a vertex is isolated (has no neighbors), it creates a root cluster. §.§ Dynamic updates Algorithm: Batch update. A dynamic update consists of a set of k edges to be added or deleted (a combination of both is valid). The update begins by modifying the adjacency lists of the 2k endpoints of the modified edges. Call this resulting forest F_0', to denote the forest after the update. The goal of the update algorithm is now to produce F_i', an updated tree contraction for each level i, using F_i, F_i-1 and F_i-1'. Affected vertices To perform the update efficiently, we define the notion of an affected vertex. A vertex is affected at level i if any of the following holds:
* it is alive in one of F_i and F_i' but not the other (which means it either contracted in an earlier round originally but survived after the update, or vice versa);
* it is alive in both F_i and F_i' but has a different adjacency list;
* it is alive in both F_i and F_i', does not contract in F_i, but all of its neighbors u in F_i that contracted are affected.
The first two cases of affected vertices are intuitive. If a vertex used to exist at round i but no longer does, or vice versa, it definitely needs to be updated in round i. If a vertex has a different adjacency list than it used to, then it definitely needs to be processed because it cannot possibly contract in the same manner, or may change from being eligible to ineligible to contract or vice versa. The third case of affection is more subtle, and is important for the correctness and efficiency of the algorithm. Suppose an eligible vertex v doesn't contract in round i. Since the contraction forms a maximal independent set, at least one of v's neighbors must contract. If all such neighbors are affected, they may no longer contract in the updated forest, which would leave v uncontracted and without a contracting neighbor, violating maximality. Therefore v should also be considered in the update. Figure <ref> shows an example scenario where this is important. Note that by the definition, vertices only become affected because they have an affected neighbor either in the previous round or the same round. We call this spreading affection. The algorithm With the definition of affected vertices, the update algorithm can be summarized as stated in Algorithm <ref>. Each level is processed sequentially, while the subroutines that run on each level are parallel. Line <ref> is implemented by looking at each affected vertex of the previous round and any vertex within distance two of those in parallel, then filtering those which do not satisfy the definition of affected.
Note that by the definition of affected, looking at vertices within distance two is sufficient since at worst, affection can only spread to neighbors and possibly those neighbors' uncontracted neighbors. Using Goldberg and Zwick's approximate compaction algorithm <cit.>, this step takes linear work in the number of affected vertices and O(loglog n) span. In Section <ref>, we show that the number of affected vertices at each level is O(k), so this is efficient. Line <ref> is accomplished using Lemma <ref>. This finds a maximal independent set of eligible affected vertices in linear work and O(log^(c) n) span, so we can choose c = 2 and use the fact that there are O(k) affected vertices to find the new maximal independent set in O(k) work and O(loglog k) span. Line <ref> is implemented by looking at each affected vertex in parallel and updating it to reflect its new behaviour in F_i'. This entails writing the corresponding adjacent edges into the adjacency lists of its neighbors in round i+1 if it did not contract, or writing the appropriate cluster values if it did. At the same time, for each contracted vertex, the algorithm computes the augmented value on the resulting RC cluster from the values of the children. Correctness We now argue that the algorithm is correct. After running the update algorithm, the contraction is still maximal, i.e., the contracted vertices at each level form a maximal independent set of degree one and two vertices. Call a vertex eligible if it has degree one or two in F_i' and is not adjacent to an unaffected vertex that contracts in F_i. Consider any eligible unaffected vertex v ∈ F_i'. Since it is unaffected it exists in F_i. Suppose v contracts in F_i, then it still contracts in F_i'. We need to argue that no neighbor of v contracts in F_i'. For any unaffected neighbor u, it didn't contract in F_i and hence still doesn't contract in F_i'. If u is an affected neighbor, it is not considered eligible and hence does not contract since v is unaffected and contracted. Therefore none of v's neighbors contract in F_i'. Now suppose v doesn't contract in F_i'. Since it is unaffected it exists and doesn't contract in F_i. Therefore it has a neighbor u ∈ F_i that contracts. If u is unaffected, then u ∈ F_i' and contracts. If u is affected, then v is affected by dependence. Therefore all eligible unaffected vertices v ∈ F_i' satisfy the invariant. Consider any eligible affected vertex v ∈ F_i'. If v has an unaffected contracted neighbor, then v does not contract in F_i', and has a contracted neighbor. Otherwise, v participates in the MIS, and has no prior contracted neighbor. Since the algorithm finds a maximal independent set on the candidates, v either contracts and has no contracted neighbor, or doesn't contract and has a contracted candidate neighbor. Therefore v satisfies the invariant. Together, we can conclude that every eligible vertex satisfies the invariant. Lastly, if a vertex is not eligible, then it satisfies the invariant since it either has degree greater than two and hence can not contract, or it is adjacent to a vertex that contracts. It remains to show that the algorithm is efficient, which we will do in Section <ref>. § ANALYSIS We start by proving some general and useful lemmas about the contraction process, from which the efficiency of the static algorithm immediately follows, and which will later be used in the analysis of the update algorithm. 
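As a concrete point of reference for the analysis that follows, the per-level update described in the previous section can be summarized by the schematic loop below. The helper functions are hypothetical placeholders for the parallel subroutines (affected-vertex identification, maximal independent set, and recontraction) discussed in the text; only the control flow is illustrated.

def batch_update(forest, insertions, deletions, num_rounds):
    """Schematic outline of a batch-dynamic update (not parallel code).

    apply_edge_changes, initially_affected, mark_affected,
    eligible_affected, maximal_independent_set and recontract are
    hypothetical placeholders for the subroutines described in the text.
    """
    forest.apply_edge_changes(insertions, deletions)          # build F_0'
    affected = initially_affected(forest, insertions, deletions)
    for i in range(num_rounds):
        # 1. Vertices within distance two of previously affected vertices
        #    are candidates; keep those satisfying the affected definition.
        affected = mark_affected(forest, affected, level=i)
        # 2. Maximal independent set of eligible affected vertices (degree
        #    one or two, not adjacent to an unaffected contracting vertex).
        mis = maximal_independent_set(eligible_affected(forest, affected, i))
        # 3. Contract the selected vertices, rebuild the corresponding
        #    RC-tree clusters, and write adjacency lists for level i+1.
        recontract(forest, mis, level=i)
    return forest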
§.§ Round and tree-size bounds Consider a tree T and suppose a maximal independent set of degree one and two vertices is contracted via rake and compress contractions to obtain T'. Then |T'| ≤ (5/6)|T|. More than half of the vertices in any tree have degree one or two <cit.>. An MIS among them contains at least one third of them, since every adjacent run of three vertices must have at least one selected, so at least one sixth of the vertices contract. Therefore the new tree has at most 5/6 as many vertices. The lemma also applies to forests since it can simply be applied independently to each component. Three important corollaries follow that allow us to bound the cost of various parts of the algorithm. Corollary <ref> gives the number of rounds required to fully contract a forest, and Corollary <ref> gives a bound on the number of rounds required to shrink a forest to size n / log n. Lastly, Corollary <ref> gives a bound on the number of rounds required to shrink a tree to size k, which is useful in bounding the work of the batched update and query algorithms. Given a forest on n vertices, maximal tree contraction completely contracts it in log_6/5 n rounds. Given a forest on n vertices, after performing log_6/5 log n rounds of maximal tree contraction, the number of vertices in the resulting forest is at most n / log n. Given a forest on n vertices and some integer k ≥ 1, after performing log_6/5(1+n/k) rounds of maximal tree contraction, the number of vertices in the resulting forest is at most k. By Lemma <ref>, the number of vertices in each round is at most 5/6^ths of the previous round, so the number remaining after round r is at most n (5/6)^r. The three corollaries follow. §.§ Analysis of the static algorithm Armed with the lemmas and corollaries of Section <ref>, we can now analyze the static algorithm. The basic maximal tree contraction algorithm can be implemented in O(n) work and O(log n loglog n) span for a forest of n vertices. The work performed at each round is O(|F_i|), i.e., the number of live vertices in the forest at that round. By Lemma <ref>, the total work is therefore at most ∑_i=0^∞ n (5/6)^i = n ∑_i=0^∞ (5/6)^i = 6n. The span of the algorithm is O(loglog n) per round to perform the maximal independent set and approximate compaction operations by Lemmas <ref> and <ref>. By Corollary <ref> there are O(log n) rounds, hence the total span is O(log n loglog n). Section <ref> explains how to improve the span of the algorithm to O(log n log^(c) n) for any constant c. §.§ Analysis of the update algorithm The analysis of the update algorithm follows a similar pattern to the analysis of the randomized change propagation algorithm <cit.>. We sketch a summarised version below, then present the full analysis. Summary We begin by establishing the criteria for vertices becoming affected. Initially, the endpoints of the updated edges and a small neighborhood around them are affected. We call these the origin vertices. Each of these vertices may spread its affection to nearby vertices in the next round. Those vertices may subsequently spread to other nearby vertices in the following round and so on. As affection spreads, the affected vertices form an affected component, a connected set of affected vertices whose affection originated from a common origin vertex. An affected vertex that is adjacent to an unaffected vertex is called a frontier vertex. Frontier vertices are those which are capable of spreading affection.
Note that it is possible that in a given round, a vertex that becomes affected was adjacent to multiple frontier vertices of different affected components, and is subsequently counted by both of them, and might therefore be double counted in the analysis. This is okay since it only overestimates the number of affected vertices in the end. With these definitions established, our results show that each affected component consists of at most two frontier vertices, and that at most four new vertices can be added to each affected component in each round. Given these facts, since a constant fraction of the vertices in any forest must contract in each round, we show that the size of each affected component shrinks by a constant fraction, while only growing by a small additive factor. This leads to the conclusion that each affected component never grows beyond a constant size, and since there are initially O(k) origin vertices, that there are never more than O(k) affected vertices in any round. This fact allows us to establish that the update algorithm is efficient. The proofs We now prove the aforementioned facts. If v is unaffected at round i, then v contracts in round i in F if and only if v contracts in round i in F'. Unaffected vertices are ignored by the update algorithm, and hence remain the same before and after an update. If a vertex v is not affected during round i but is affected during round i+1, we say that v becomes affected in round i. A vertex can become affected in two ways. If v becomes affected in round i, then one of the following is true:
* v has an affected neighbor u at round i which contracted in either F_i or F_i';
* v does not contract by round i+1, and has an affected neighbor u at round i+1 that contracts in F_i+1.
First, since v becomes affected in round i, it is not already affected at round i. Therefore, due to Lemma <ref>, v does not contract, otherwise it would do so in both F and F' and hence be unaffected at round i+1. Since v does not contract, v has at least one neighbor, otherwise it would finalize. Suppose that (1) is not true, i.e., that v has no affected neighbors that contract in F_i or F_i' in round i. Then either none of v's neighbors contract in F_i, or only unaffected neighbors of v contract in F_i. By Lemma <ref>, in either case, v has the same set of neighbors in F_i+1 and F_i+1'. Therefore, since v is affected in round i+1, does not contract in either F_i or F_i', and has the same neighbors in both, it must be in Case (3) in the definition of affected. Therefore, v has an affected neighbor that contracts in F_i+1. Since ¬(1) ⇒ (2), we have that (1) ∨ (2) is true. An affected vertex u spreads to v if v was unaffected at the beginning of round i and became affected in round i (i.e., is affected in round i+1) because
* v is a neighbor of u at round i, and u contracts in round i in either F_i or F_i', or
* v does not contract in round i+1, and is a neighbor of u, which contracts in round i+1.
We call Case (1) spreading directly and Case (2) spreading by dependence. Figure <ref> shows two examples of directly spreading affection. Figure <ref> shows an example of spreading affection by dependence. Our end goal is to bound the number of affected vertices at each level, since this corresponds to the amount of work required to update the contraction after an edge update. Let 𝒜^i denote the set of affected vertices in round i. For a batch update of size k (insertion or deletion of k edges), we have |𝒜^0| ≤ 6k.
A single edge changes the adjacency list of its two endpoints. These two endpoints might contract in the first round, which affects their uncontracted neighbors by dependence. However, vertices that contract have degree at most two, so this is at most two additional vertices per endpoint. Therefore there are up to 6 affected vertices per edge modification, and hence up to 6k affected vertices in total. Each edge modified at round 0 affects some set of vertices, which spread to some set of vertices at round 1, which spread to some set of vertices at round 2 and so on. We will therefore partition the set of affected vertices into s = |𝒜^0| affected components, indicating the “origin” of the affection. When a vertex u spreads to v, it will add v to its component for the next round. More formally, we will construct 𝒜_1^i, 𝒜_2^i, …, 𝒜_s^i, which form a partition of 𝒜^i. We start by arbitrarily partitioning 𝒜^0 into s singleton sets 𝒜_1^0, 𝒜_2^0, …, 𝒜_s^0. Given 𝒜_1^i, 𝒜_2^i, …, 𝒜_s^i, we construct 𝒜_1^i+1, 𝒜_2^i+1, …, 𝒜_s^i+1 such that 𝒜^i+1_j contains the affected vertices v ∈ 𝒜^i+1 that were either already affected in 𝒜^i_j or were spread to by a vertex u ∈ 𝒜^i_j. Note that it is possible, under the given definition, for multiple vertices to spread to another, so this may overcount by duplicating vertices. A vertex v is a frontier at round i if v is affected at round i and one of its neighbors in F_i is unaffected at round i. If v is a frontier vertex at round i, then it is alive in both F_i and F_i' at round i, and is adjacent to the same set of unaffected vertices in both. If v were dead in both forests it would not be affected and hence not a frontier vertex. If v were alive in one forest but dead in the other, then all of its neighbors would have a different set of neighbors in F_i and F_i' (they must be missing v) and hence all of them would be affected, so v would have no unaffected neighbors and hence not be a frontier. Similarly, consider an unaffected neighbor u of v in either forest. If u was not adjacent to v in the other forest, it would have a different set of neighbors and hence be affected. If v spreads to a vertex in round i, then clearly v must be a frontier. Our next goal is to analyze the structure of the affected sets and then show that the number of frontier vertices is small. For all i, j, the subforest induced by 𝒜^i_j in F_i is a tree. The rake and compress operations both preserve the connectedness of the underlying tree, and Lemma <ref> shows that affection only spreads to neighboring vertices. 𝒜^i_j has at most two frontiers and |𝒜^i+1_j ∖ 𝒜^i_j| ≤ 4. We proceed by induction on i. At round 0, each group contains one vertex, so it definitely contains at most 2 frontier vertices. Consider some 𝒜^i_j and suppose it contains one frontier vertex u, which may spread directly by contracting (Definition <ref>). If u spreads directly, then it either compresses or rakes in F_i or F_i'. This means it has degree at most two in F_i or F_i', and by Lemma <ref>, it is therefore adjacent to at most two unaffected vertices, and hence may spread to at most these two vertices. Since u contracts, it is no longer a frontier by Lemma <ref>, but its newly affected neighbors may become frontiers, so the number of frontiers is at most two. Suppose u spreads via dependency in round i (Case 2 in Definition <ref>) in 𝒜^i+1_j and contracts in F_i+1.
Since u contracts in F_i+1, it has at most two neighbors, and by Lemma <ref>, it is also adjacent to at most two unaffected vertices, and may spread to at most these two vertices. If it spreads to one of them, it may become a frontier and hence there are at most two frontier vertices. If it spreads to both of them, u is no longer adjacent to any unaffected vertices and hence is no longer a frontier, so there are still at most two frontier vertices, and |𝒜^i+1_j ∖ 𝒜^i_j| ≤ 3. Now consider some 𝒜^i_j that contains two frontier vertices u_1, u_2. By Lemma <ref>, u_1 and u_2 each have at least one affected neighbor. If either contracts, it would no longer be a frontier, and would have at most one unaffected neighbor which might become affected and a frontier. Therefore the number of frontiers is preserved when affection is spread directly. Lastly, suppose u_1 or u_2 spreads via dependency in round i. Since it would contract in F_i+1, it has at most one unaffected neighbor which might become affected and become a frontier. It would subsequently have no unaffected neighbor and therefore no longer be a frontier. Therefore the number of frontiers remains at most two and |𝒜^i+1_j ∖ 𝒜^i_j| ≤ 4. Now define 𝒜^i_F,j = 𝒜^i_j ∩ V^i_F, the set of affected vertices from 𝒜^i_j that are live in F at round i, and similarly define 𝒜^i_F',j for F'. For every i,j we have |𝒜^i_F,j| ≤ 26. Consider the subforest induced by the set of affected vertices 𝒜^i_F,j. By Lemmas <ref> and <ref>, this is a tree with at most two frontier vertices. The update algorithm finds and contracts a maximal independent set of affected degree one and two vertices that are not adjacent to an unaffected vertex that contracts in F_i. There can be at most two vertices (the frontiers) that are adjacent to an unaffected vertex, and at most four new affected vertices appear by Lemma <ref>, so by Lemma <ref>, the size of the new affected set is |𝒜^i+1_F,j| ≤ 4 + (5/6)(|𝒜^i_F,j| - 2) + 2 = 26/6 + (5/6)|𝒜^i_F,j|. Since |𝒜^0_F,j| = 1, we obtain |𝒜^i+1_F,j| ≤ (26/6) ∑_r=0^∞ (5/6)^r = (26/6) / (1 - 5/6) = (26/6) / (1/6) = 26. Given a batch update of k edges, for every i, |𝒜^i| ≤ 312k. By Lemma <ref>, there are at most 6k affected groups. At any level, every affected vertex must be live in either F or F', so 𝒜^i_j = 𝒜^i_F,j ∪ 𝒜^i_F',j, and hence |𝒜^i| ≤ ∑_j=1^6k (|𝒜^i_F,j| + |𝒜^i_F',j|) ≤ 6k × 26 × 2 = 312k. We conclude that given an update of k edges, the number of affected vertices at each level is O(k). Putting it all together Given the series of lemmas above, we now have the power to analyze the performance of the update algorithm. A batch update consisting of k edge insertions or deletions takes O(k log(1 + n/k)) work and O(log n loglog k) span. The update algorithm performs work proportional to the number of affected vertices at each level. Consider separately the work performed processing the levels up to and including round r = log_6/5(1+n/k). By Lemma <ref>, there are O(k) affected vertices per level, so the work performed up to and including level r is O(kr) = O(k log(1 + n/k)). By Corollary <ref>, after r rounds of contraction, there are at most k live vertices remaining in F_r or F_r'. The number of affected vertices is at most the number of live vertices in either forest, and hence at most 2k. The number of affected vertices in all subsequent rounds is therefore at most ∑_i = 0^∞ (5/6)^i 2k = 2k/(1-5/6) = 12k, and hence the remaining work is O(k). Therefore the total work across all rounds is at most O(k log(1 + n/k)) + O(k) = O(k log(1 + n/k)).
In each round, it takes O(log^(c) k) span to find a maximal independent set of the affected vertices for any constant c, so we can choose c = 2 to match the span of approximate compaction required to filter out the vertices that are no longer affected in the next round. Each round takes O(loglog k) span, so over O(log n) rounds, this results in O(log n loglog k) span. § OPTIMIZATIONS Our static tree contraction algorithm and our basic result on dynamically updating it are work-efficient (O(n) and O(k log(1+n/k)) work respectively) and run in O(log n loglog n) and O(log n loglog k) span respectively. In both cases, there are two bottlenecks to the span: computing a maximal independent set, and performing approximate compaction to remove vertices that have contracted or are no longer affected. Improving the span of the maximal independent set is easy since it runs in O(log^(c) n) span for any c, and we can just choose a smaller c (the basic algorithm chose c=2 to match the span of approximate compaction). Therefore, the only remaining bottleneck is the approximate compaction, which we can improve as follows. We first describe a faster static algorithm, which introduces the techniques we will use to improve the update algorithm. We also describe an improvement that eliminates the need for concurrent writes, since approximate compaction requires the Common CRCW model. Lastly, we will also show that our span optimization technique can be used to speed up the randomized variant of the algorithm. §.§ A lower span static algorithm The basic static algorithm uses approximate compaction after each round to filter out the vertices that have contracted. This is important, since without this step, every round would take Θ(n) work, for a total of Θ(n log n) work, which is not work efficient. This leads to an O(n) work and O(log n loglog n) span algorithm in the Common CRCW model using the O(loglog n)-span approximate compaction algorithm of Goldberg and Zwick <cit.>. We can improve the span easily as follows by splitting the algorithm into two phases. Phase One Note that the purpose of compaction is to avoid performing wasteful work on dead vertices each round. However, if the forest being contracted has just O(n / log n) vertices, then a “wasteful” algorithm which avoids performing compaction takes at most O(n) work anyway. So, the strategy for phase one is to contract the forest to size O(n / log n), which, by Corollary <ref>, takes at most O(loglog n) rounds. This is essentially the same strategy used by Gazit, Miller, and Teng <cit.>. The work of the first phase is therefore O(n) and the span, using approximate compaction, is O((loglog n)^2) + O(loglog n log^(c) n) = O((loglog n)^2). Phase Two In the second phase, we run the “wasteful” algorithm, which is simply the same algorithm but not performing any compaction. Since the forest begins with O(n / log n) vertices in this phase, this takes O(n) work and completes in O(log n) rounds. Since the span bottleneck is finding the maximal independent set in each round, the span is O(log n log^(c) n) for any constant c. Putting these together, the total work is O(n), and the span is O((loglog n)^2) + O(log n log^(c) n) = O(log n log^(c) n). §.§ Eliminating concurrent writes The above optimized algorithm still uses approximate compaction, which requires the power of the Common CRCW model. We now briefly describe a variant without this requirement. Phase Two is the same since it performs no compaction, so we just have to improve Phase One.
We do so by partitioning the vertices into n / log n groups of size O(log n) by their identifier. Each round, the algorithm simply considers each group and each vertex within each group in parallel. After performing each round of contraction, each group independently filters the vertices that contracted. We do so using an exact filter algorithm instead of approximate compaction, but since each group has size O(log n), the span is still O(loglog n) without requiring concurrent writes. Since there are n / log n groups, each round takes an additional O(n / log n) work, but over loglog n rounds, this amounts to less than O(n) additional work, so the algorithm is still work efficient. At the end of the phase, collect the vertices back into a single group in O(n) work and O(log n) span, then proceed with Phase Two. §.§ A lower span dynamic algorithm The span of the dynamic algorithm is also bottlenecked by the span of approximate compaction, which is used on the affected vertices each round to remove vertices that are no longer affected. We optimize the dynamic algorithm similarly to the static algorithm, by splitting it into three phases this time. Phase One The algorithm will run Phase One for log_6/5(1 + n/k ) rounds. Note importantly that this depends on the batch size k, so the number of rounds each phase runs is not always the same for each update operation. Similarly to the optimized static algorithm without concurrent writes, we attack the problem by splitting the affected vertices into groups. Specifically, we will group the affected vertices into affected components based on their origin vertex as defined in Section <ref>. There are O(k) affected components, each of which is initially a singleton defined by an affected vertex at round 0. In each round, the algorithm processes each affected component and each affected vertex within in parallel. At the end of the round, the newly affected vertices for the next round are identified for each component. Note that there could be duplicates here since it is possible for multiple neighbors of a vertex to spread to it at the same time. To tiebreak, and ensure that only one copy of an affected vertex exists, if multiple vertices spread to the same vertex, only the one with the lowest identifier adds the newly affected vertex to its component. Since the forest has constant degree, this can be checked in constant time. Given the set of affected vertices, new and old, we can then filter each component independently in parallel to remove vertices that are no longer affected in the next round. The critical insight is that according to Lemma <ref>, each affected component has constant size, so this filtering takes constant work and span! Having to maintain this set of k affected components adds an additional O(k) work to each round, but since we run Phase One for only O(log(1 + n/k)) rounds, this is still work efficient. Phase Two According to Corollary <ref>, by the time Phase Two begins, the forest will have contracted to the point that at most k vertices remain. From this point onwards, we use an algorithm very similar to the static algorithm to complete the remaining rounds, and thus split into two more phases. First, we can collect the contents of each of the O(k) affected components back into a single array of O(k) affected vertices. This can be done in at most O(k) work and O(log k) span. Given an array of O(k) affected vertices, we logically partition it into k / log k groups of size O(log k). 
We then run the basic dynamic update algorithm for loglog k rounds, using a filter algorithm (not approximate compaction) at each round to remove vertices that are no longer affected. The span of this phase is therefore O((loglog k)^2), and it costs at most O(k) additional work. Phase Three After completing Phase Two, by Corollary <ref>, there can be at most O(k / log k) vertices alive in the forest, and hence at most twice that many affected vertices (affected vertices may be alive in either the new or old forest). Phase Three simply collects the remaining affected vertices and performs the same steps as Phase One. We create up to O(k / log k) singleton affected components, and then in each round, process each vertex in each component in parallel, then spread to any newly affected vertices. Each affected component remains constant size by Lemma <ref> and the work performed in each round is at most O(k / log k) for O(log k) rounds, a total of O(k) work. Since each affected component is constant size, maintaining them takes constant time. After O(log k) rounds, the forest is fully contracted. In total, at most O(k log(1 + n/k)) additional work is added, so the algorithm is still work efficient. The span of Phases One and Three is dominated by the span of computing the maximal independent set, so the final resulting span is now O(log(1+n/k) log^(c) k + (loglog k)^2 + log(k) log^(c) k) = O(log n log^(c) k). §.§ A lower span randomized algorithm With the span optimization above, the bottleneck of what remains is entirely due to the subroutine for finding the maximal independent set, which takes O(log^(c) k) span. Our optimization essentially removes the span caused by compaction. In the randomized variant of the algorithm <cit.>, the span is O(log n log^* n), where the log^* n factor also comes from performing approximate compaction (which takes O(log^* n) span when randomization is allowed). In the randomized variant, however, finding the independent set takes constant span rather than O(log^(c) k). It works by raking all the leaves, then flipping a coin for each vertex and compressing the vertices that flip heads while their two neighbors flip tails, which happens with probability 1/8. It can therefore be shown that a constant fraction of the vertices contract on each round, and that the contraction process takes O(log n) rounds with high probability. We can therefore substitute our deterministic maximal independent set with the randomized variant and use Acar et al.'s <cit.> definition of affected vertices to obtain a more efficient randomized algorithm. Their analysis implies that the resulting algorithm is work efficient, running in O(k log(1+n/k)) expected work for a batch of k updates, and in O(log n) span. § CONCLUSION We presented the first deterministic work-efficient parallel algorithm for the batch-dynamic trees problem. We showed that parallel Rake-Compress Trees <cit.> can be derandomized using a variant of parallel tree contraction that contracts a deterministic maximal independent set of degree one and two vertices. Our algorithm performs O(k log(1 + n/k)) work for a batch of k updates and runs in O(log n log^(c) k) span for any constant c. We also applied our techniques to improve the span of the randomized variant from O(log n log^* n) to just O(log n), and showed that other batch-dynamic graph problems can be solved deterministically. Several interesting questions still remain open.
Our deterministic algorithm requires O(log n log^(c) k) span, while our improvement of the randomized variant requires just O(log n). Can we obtain a deterministic algorithm with O(log n) span? It seems unlikely that the exact algorithm that we present here could be optimized to that point, since that would imply finding a maximal independent set work efficiently in O(1) span, and the fastest known algorithms run in O(log^* n) span but are not even work efficient. This doesn't rule out using other techniques instead of a maximal independent set, however. The tree contraction needs only to have the property that it contracts a constant fraction of the vertices in any subtree in order to obtain our bounds, so any constant ruling set would suffice if one could compute it in O(1) span. Prior algorithms for deterministic parallel tree contraction are based on Cole and Vishkin's deterministic coin tossing technique <cit.> (which happens to be a subroutine used by our maximal independent set algorithm). It would be interesting to investigate whether this could be used to obtain a more efficient dynamic algorithm. Lastly, can our deterministic RC-Trees be used to derandomize other existing algorithms? § APPENDIX: DERANDOMIZING DOWNSTREAM RESULTS §.§ Batch-dynamic graph connectivity Acar, Anderson, Blelloch, and Dhulipala <cit.> give a parallel batch-dynamic algorithm for general graph connectivity which runs in O(k log n log(1 + n/Δ)) amortized expected work and O(polylog n) span for an average batch size of Δ. Inside their algorithm, they use a parallel batch-dynamic Euler Tour tree to maintain a set of spanning forests. If the input graph is ternarized so that a Rake-Compress Tree can be used instead, we can derandomize the algorithm by substituting the Euler Tour tree for one. In addition to Rake-Compress trees, the algorithm uses just a few other pieces of randomization. When a batch of edges is inserted, it uses a semisort to group the edges by endpoint. We can instead use a regular sort, which takes O(k log n) work, to perform this step since that doesn't increase the total work. It also computes a spanning forest of the newly added edges with the existing connected components. This takes O(k α(n,m)) work using the most efficient deterministic algorithm <cit.>, which still fits the work bounds. The deletion algorithm is more complex, and involves performing a spanning forest computation on a set of replacement candidate edges over a series of layers. Since spanning forests cannot be computed work efficiently deterministically, this will incur an overhead of O(α(n,m)) <cit.>. Therefore the algorithm runs in O(k log n log(1 + n/Δ) α(n,m)) amortized work and O(polylog n) span, i.e., just a factor of α(n,m) more work than the randomized algorithm. The discovery of a work-efficient deterministic spanning forest algorithm (though this has been open for 20 years) would make it match the work bounds. §.§ Batch-incremental MST Anderson, Blelloch, and Tangwongsan present an algorithm for batch-incremental minimum spanning trees which can insert a batch of k edges in O(k log(1 + n/k)) work and O(polylog n) span. Underneath, their algorithm maintains the MST using a (randomized) RC-Tree. It also, however, uses Cole, Klein, and Tarjan's linear-work parallel MST algorithm as a subroutine, for which there is no known work-efficient deterministic equivalent. Derandomizing it will therefore also incur a penalty of O(k loglog n) work to pay for the MST using the algorithm of Chong et al. <cit.>.
The final work of the algorithm after applying this penalty is therefore O(k log(1 + n/k) + k log log n).
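To make the coin-flip rake-and-compress round of the randomized variant concrete, the following is a minimal sequential sketch in Python. It is an illustration only, not a parallel implementation; the parent/children data layout and the function names are our own assumptions rather than anything specified in the paper.

import random

def contract_round(parent, children, alive, rng=random.Random(0)):
    """One round of randomized tree contraction: rake all leaves, then
    compress unary vertices that flip heads while both of their tree
    neighbours flip tails (probability 1/8 per vertex)."""
    # Rake: remove every alive vertex with no remaining children.
    for v in [v for v in alive if not children[v]]:
        p = parent[v]
        if p is not None:
            children[p].discard(v)
        alive.discard(v)
    # Independent coin flips for the compress step.
    coin = {v: rng.random() < 0.5 for v in alive}
    # Compress: splice out v if it has a parent, exactly one child,
    # flipped heads, and both neighbours flipped tails.
    for v in list(alive):
        p = parent[v]
        if p is None or len(children[v]) != 1:
            continue
        c = next(iter(children[v]))
        if coin[v] and not coin.get(p, False) and not coin.get(c, False):
            parent[c] = p
            children[p].discard(v)
            children[p].add(c)
            alive.discard(v)
    return alive

# Tiny usage sketch: contract a path 0-1-2-...-9 rooted at 0.
n = 10
parent = {0: None, **{i: i - 1 for i in range(1, n)}}
children = {i: set() for i in range(n)}
for v, p in parent.items():
    if p is not None:
        children[p].add(v)
alive = set(range(n))
rounds = 0
while len(alive) > 1:
    contract_round(parent, children, alive)
    rounds += 1
print(rounds, "rounds")  # O(log n) rounds in expectation

Because a vertex compresses only when it flips heads and both neighbours flip tails, no two adjacent vertices are ever spliced out in the same round, which is what makes the parallel version of this step safe.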
http://arxiv.org/abs/2306.03882v2
20230606173643
Causal interventions expose implicit situation models for commonsense language understanding
[ "Takateru Yamakoshi", "James L. McClelland", "Adele E. Goldberg", "Robert D. Hawkins" ]
cs.CL
[ "cs.CL" ]
Accounts of human language processing have long appealed to implicit “situation models” that enrich comprehension with relevant but unstated world knowledge. Here, we apply causal intervention techniques to recent transformer models to analyze performance on the Winograd Schema Challenge (WSC), where a single context cue shifts interpretation of an ambiguous pronoun. We identify a relatively small circuit of attention heads that are responsible for propagating information from the context word that guides which of the candidate noun phrases the pronoun ultimately attends to. We then compare how this circuit behaves in a closely matched “syntactic” control where the situation model is not strictly necessary. These analyses suggest distinct pathways through which implicit situation models are constructed to guide pronoun resolution. § INTRODUCTION Language understanding is deeply intertwined with world knowledge. For example, when reading a sentence like “the fish ate the worm,” we can guess that the fish was probably hungrier before eating and that the worm is no longer alive, even though neither property is explicitly mentioned <cit.>. Classical psycholinguistic accounts have suggested that such knowledge enters into language understanding through structured schemas called situation models <cit.> that are dynamically constructed during comprehension. Put succinctly, a situation model is a representation of the web of entities and relations that are implied without being explicitly specified in the literal text <cit.>. Modern large language models (LLMs) exhibit increasingly impressive performance on “commonsense” tasks that seemingly require the use of implicit world knowledge <cit.>, yet it is still not clear precisely how that knowledge is accessed and employed. Recent interpretability work has explicitly probed and traced individual pieces of world knowledge to highly localized regions of the network <cit.>, allowing surgical erasure or editing <cit.>. But the kind of world knowledge represented by a situation model is more implicit and, according to classical theories, constructed on the fly for the task at hand. Models must somehow determine that a particular relational concept is relevant in the first place <cit.>. For example, in <ref> the final piece of context (successful / available) is only able to resolve the pronoun in light of the initial verb phrase “tried to call”. The situation model constructed from an agent Paul trying something raises the possible predicates of success or failure, while a patient George being called raises the possible predicates of busyness or availability. Conversely, Paul's availability and George's success are not at issue and therefore not available as interpretations for the pronoun. There are, of course, many other attributes that may be invoked from a tried to call situation model that are not relevant to resolving the pronoun, involving the kinds of devices used to make calls, the kinds of noises heard when being called, and so on. According to this analysis, successfully deriving the appropriate referent in both sentences requires models to integrate relevant information across disparate parts of the sentence structure. In this paper, we conduct an initial exploration of the hypothesis that LLMs have learned to construct implicit situation models.
As a case study, we conduct a fine-grained analysis of a best-in-class transformer model <cit.> on a Winograd-like pronoun disambiguation task <cit.>. Winograd sentences are minimal pairs constructed with the property that resolving an ambiguous pronoun requires situational world knowledge outside the scope of the literal text; critically, for our purposes, the pair of sentences differs only at one site, which causes the interpretation to flip. Controlling for possible confounds, a model that is able to generate sharply different predictions for these pairs can be said to behave as if it has a situation model.[It is important to distinguish this cautious claim of task-specific functional equivalence from the stronger claim that a neural model constructs the same kind of situation models as humans in general.] Although the open models that are amenable to causal probing do not yet achieve fully human-like performance on this task, their trajectory of increasing functional capabilities raises an important mechanistic question about how larger networks achieve these gains. Our primary contributions are (1) employing causal probes to identify a sub-circuit of attention heads that are responsible for propagating contextual information, and (2) constructing a set of closely matched controls for Winograd sentences that are resolvable solely using syntactic cues, which we use to validate the specificity of the identified situation model circuit. Overall, we find some exciting preliminary evidence for meaningfully non-overlapping pathways while also highlighting the subtleties of probing situation models on a sentence-by-sentence basis. § RELATED WORK §.§ Implicit world knowledge in LLMs A number of recent studies have examined the extent to which neural language models have acquired implicit schemas about the world <cit.>, proposed auxiliary tasks to improve coherence <cit.>, and probed the internal mechanics by which world knowledge influences downstream predictions <cit.>. A smaller set of studies has focused on classical psycholinguistic phenomena: for example, <cit.> examining relative clause attachment in coreference resolution, and <cit.> examining predictions about which entities are preferred for different verbs. We approach the problem of implicit situation model representations with a more targeted set of causal intervention techniques, tracing the internal flow of subtle contextual cues in Winograd schemas for the first time. §.§ Probing with causal interventions In order to identify interpretable algorithms underlying specific model behaviors, recent studies have employed variants of causal intervention analyses on intermediate representations, targeting syntactic agreement <cit.>, relative clause processing <cit.>, natural language inference and compositionality <cit.>, gender bias <cit.>, sub-word representations <cit.>, and factual knowledge <cit.>. Following <cit.>, we use the term circuit to capture the explanatory construct in such interpretability studies: a computational subgraph of a neural network consisting of a set of units and connections between them that are causally implicated in a behavior <cit.>. For example, <cit.> recently argued that a small circuit of attention heads appears to identify indirect objects, and <cit.> revealed a circuit for solving simple numerical reasoning problems.
Building on this family of interchange intervention techniques <cit.>, we decompose each head into its query, key and value sub-components <cit.> to trace the flow of causally important information for solving Winograd sentences. § APPROACH §.§ Dataset construction We began by extracting the subset of the Winograd Schema Challenge (WSC) appearing in SuperGLUE <cit.>, as well as the larger, crowd-sourced Winogrande <cit.> dataset. These datasets contain sentence pairs that differ only at a minimal word or phrase that changes the referent of an ambiguous pronoun earlier in the sentence (see <ref>).[To facilitate comparison across sentences of different lengths, we will refer to the span of text that is manipulated across the two sentences as the “context”, the candidate referents as the “options”, the pronoun as the “mask” (since it is masked for the prediction task), the verb immediately following the mask as the “verb”, and the remaining tokens as the “rest”. For all analyses, we conduct single-token interventions and then aggregate effects within that class.] For example: “Paul tried to call George but wasn't [successful / available].” These sentences are structured such that the disambiguating context only appears near the end, but otherwise have diverse sentence structure (e.g. the context can be any part-of-speech). We call these pairs the “context cue” condition. We then modified these pairs to construct three additional conditions for comparison. In a “context+syntax cue” condition, we changed the plurality of one of the noun phrase options such that the pronoun can be resolved by relying on the number signaled by the verb without necessarily requiring situational knowledge: “They tried to call George but [weren't successful] / [wasn't available].” To remove the availability of world knowledge entirely, we masked out the context span in both sentences to form a “syntax only” condition: “They tried to call George but [weren't / wasn't] .” Finally, we generated a control condition using semantically equivalent synonyms; if results in the other conditions truly reflect world knowledge rather than spurious token-specific features, we should not expect to find any effect in this condition: “Paul tried to call George but he [wasn't accessible] / [wasn't available].” In total, we constructed 200 unique pairs of sentences, each of which appears within all 4 conditions (see more examples in <ref>; further details of dataset construction are provided in Appendix A). §.§ Interchange interventions To interrogate how exactly masked language models achieve context-sensitive predictions from minimal cues, we applied a causal intervention technique <cit.> to map the flow of information from the context word to the masked site where the ultimate prediction is made. Specifically, we used an interchange intervention to swap intermediate representations across the two contexts (see <ref>), and quantified the effect of the intervention on the model's downstream prediction. Given a pair of sentences (s_A, s_B) that differ only at the context token, we mask out the pronoun and score the likelihood of each noun phrase (N_A, N_B) at the masked position. For simplicity, we denote the noun phrases such that N_A is the correct referent for sentence s_A and N_B is the correct referent for sentence s_B. For example, P_θ(N_A|s_A) refers to the likelihood assigned to the (correct) referent N_A at the mask position in sentence s_A, where θ represents the model parameters.
In the case of multi-token noun phrases, we masked out all tokens in the phrase and used the average log probability of each token as the score (see Appendix C). Although multiple metrics have been proposed for capturing the effect of an intervention <cit.>, we use the canonical odds ratio, the shift in relative preference for the correct option as a result of the intervention. We first calculate the baseline preference for the correct referent N_A relative to the incorrect referent N_B: y_pre = P_θ(N_A | s_A)/P_θ(N_B | s_A) where P_θ() represents the model's prediction under pre-intervention representations θ. We then measure the same preference after the intervention: y_post = P_θ^+z(N_A | s_A)/P_θ^+z(N_B | s_A) where P_θ^+z() represents the model's prediction using post-intervention representations θ^+z. The odds ratio is then: E = y_pre / y_post (or, on a logarithmic scale, log E = log y_pre - log y_post). Effects are averaged across the two directions of intervention within each sentence pair. Note that log y_pre > 0 by definition for pairs where the baseline prediction is correct, but depending on whether the intervention decreases or increases the probability of the correct referent, the causal effect can be positive (indicating the site of the intervention was contributing to the correct prediction) or negative (indicating the site of the intervention was contributing to the incorrect prediction). The odds ratio has some desirable properties compared to other measures like the absolute difference in differences <cit.>. It does not suffer from ceiling or floor effects, and it is a well-understood measure in classical statistics, as deployed in logistic regression. However, it is also insensitive to the absolute values of the probabilities going into the ratio, making it potentially vulnerable to noise in the tails (i.e. when both NPs are very infrequent). We believe the odds ratio is a preferred metric a priori, but we hope future work will better elucidate the advantages and disadvantages of different metrics. § RESULTS §.§ Zero-shot performance evaluation There are many different ways of evaluating performance on Winograd sentences. These shifting criteria may be responsible for inflated claims of state-of-the-art performance <cit.>. To set the stage for our causal intervention analyses, we conduct our own stricter zero-shot comparison of recent pre-trained models. Our strict metric requires the correct referent to be assigned higher probability for both sentences, that is, the likelihood ratios must satisfy P_θ(N_A | s_A)/P_θ(N_B | s_A) > 1 and P_θ(N_B | s_B)/P_θ(N_A | s_B) > 1. By jointly considering both sentences in the pair, this metric better captures context-sensitivity. Note that a context-insensitive model that makes the same prediction for both sentences would receive a score of zero on this metric <cit.>. That is, context-sensitivity is required for the interpretation of a pronoun to be systematically reversed, as required to meet the stricter accuracy criteria. We also consider a more standard but weaker metric that only requires the prediction to shift in the correct direction, even if there is an absolute bias for the incorrect option; chance is 50% for this metric: P_θ(N_A | s_A)/P_θ(N_B | s_A) > P_θ(N_A | s_B)/P_θ(N_B | s_B). <ref> reports the performance of three masked language models (BERT, RoBERTa and ALBERT) at different sizes, along with the auto-regressive GPT-4, and a newly elicited dataset of N=199 human participants (see Appendix B for details).
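As a concrete reference, the two evaluation criteria and the interchange-intervention effect defined above can be computed as follows. This is a minimal sketch; the function and variable names are ours, not anything from the released code.

import math

def log_odds_effect(p_NA_pre, p_NB_pre, p_NA_post, p_NB_post):
    """Log odds-ratio effect for one sentence s_A:
    log E = log(y_pre) - log(y_post), with y = P(N_A|s_A)/P(N_B|s_A)."""
    y_pre = p_NA_pre / p_NB_pre
    y_post = p_NA_post / p_NB_post
    return math.log(y_pre) - math.log(y_post)

def strict_correct(p_A_sA, p_B_sA, p_A_sB, p_B_sB):
    """Strict criterion: the correct referent wins in both sentences."""
    return (p_A_sA / p_B_sA > 1) and (p_B_sB / p_A_sB > 1)

def weak_correct(p_A_sA, p_B_sA, p_A_sB, p_B_sB):
    """Weak criterion: the preference shifts in the right direction across
    the pair, even if one option is preferred in absolute terms."""
    return (p_A_sA / p_B_sA) > (p_A_sB / p_B_sB)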
While all models we consider fall well short of human performance, larger and more recent models tend to perform better overall, with the large ALBERT model achieving up to 81.5% on the weak criterion. We also observe, unsurprisingly, that models perform better overall for the syntax cue condition than for the context cue condition and even better when both cues are combined. GPT-4 performs better than other models on the strict criterion, reaching 67.3% with the use of chain-of-thought prompting <cit.> (mean accuracy aggregated across all 400 individual sentences is 82%, compared to 87% human accuracy). Although models like GPT-4 are only available through an API, limiting our ability to explore causal interventions on their internal representations, we may view other models as lying at earlier points on the same scaling trajectory. We report interventions on all open models (<ref>), but we place particular focus on the largest ALBERT model, which achieved the highest zero-shot performance of any open-source model we considered; we do so for two further reasons. First, we are interested in examining the circuits underlying successful performance, and the small number of sentences for which they make the correct predictions limits statistical power (e.g. BERT is only correct for 22 sentences under the strong criterion). Second, the ALBERT architecture ties the weights of attention heads across all layers, yielding more interpretable head-wise analyses (i.e. it is meaningful to track the same head k across layers). §.§ Layer-wise information flow To yield insight into failure modes of these models, we would like to develop a more mechanistic understanding of how local information from the context word is algorithmically propagated through other sites in the sentence to ultimately arrive at the correct prediction. In other words, we are interested in probing the transformer circuits that allow such minimal context cues to have such large effects on pronoun resolution. According to classical accounts, situation models must be constructed dynamically, as it is not obvious ahead of time which aspects of the situation will be relevant to interpretation. We begin by considering the effect of coarse-grained layer-wise intervention. For each layer and each token, we replaced its vector representation under one sentence with what it would have been at the same layer and token for the other sentence, and measured the extent to which the output prediction changed (see <ref>). Results are shown only for sentences where the model made “strictly” correct predictions in both the context condition and the syntax condition; effects are similar for cases where the baseline predictions are “weakly” correct (see <ref>). First, as a sanity check, we observe that intervening at the critical context token at early layers dramatically switches the model's prediction (<ref>). This effect begins to decay around layer 9. Meanwhile, intervening at the noun phrase options only yielded a significantly non-zero effect on the model's output starting at layer 9, t(57) = 6.1, p<0.05 (correcting for multiple comparisons), while intervening at the masked pronoun and other positions (rest) yielded effects beginning at layers 6 and 7, respectively (p<0.05).
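For concreteness, a single layer-wise interchange intervention of the kind just described can be sketched with PyTorch forward hooks as below. This is a schematic illustration, not the authors' code: the checkpoint (bert-base-uncased), the example sentences, and the layer/token indices are assumptions, and for ALBERT, where one layer module is reused at every depth, one would count calls inside the hook instead of indexing encoder.layer.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def swap_hidden_state(layer_module, token_idx, donor_vector):
    """Replace the hidden state of one token at one layer with a cached
    hidden state from the paired sentence (an interchange intervention)."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden.clone()
        hidden[0, token_idx] = donor_vector          # overwrite one position
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return layer_module.register_forward_hook(hook)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

s_a = "Paul tried to call George but [MASK] wasn't successful."
s_b = "Paul tried to call George but [MASK] wasn't available."

with torch.no_grad():
    # Cache donor hidden states from sentence B.
    out_b = model(**tok(s_b, return_tensors="pt"), output_hidden_states=True)
    layer, pos = 8, 11            # illustrative: layer 8, context-token index
    donor = out_b.hidden_states[layer][0, pos]
    # Re-run sentence A with the layer-`layer` representation at `pos` swapped in.
    handle = swap_hidden_state(model.bert.encoder.layer[layer - 1], pos, donor)
    out_a = model(**tok(s_a, return_tensors="pt"))
    handle.remove()

The post-intervention mask-token probabilities from out_a would then feed into the odds-ratio effect defined earlier.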
Although effects are localized among “rest” tokens in different locations from sentence to sentence, the example in <ref>A reveals early causal effects in the “tried to call” construction and the comma token, which may serve as a neutral site for aggregating phrase-level information.[It has been previously observed that a large amount of attention at late layers is focused on punctuation and other special tokens <cit.>. Interestingly, though, we found no systematic effects at other annotated elements, such as final periods or / tokens (<ref>), suggesting that these elements are not consistently involved in the circuit across sentences.] Finally, as predicted, no significant effects were observed when swapping representations on control pairs with synonymous context words (<ref>), indicating that the context-sensitivity observed for Winograd sentences is not purely driven by token-specific sensitivities. Taken together, these layer-wise effects are consistent with pronoun-relevant information remaining localized in the context until intermediate layers, when it begins to pool in other locations and eventually contextualizes the options to guide attention from the pronoun. So far, nothing about these comparisons implicates the construction of an implicit situation model — we may simply be measuring the circuits for pronoun resolution more generally. To disentangle these possibilities, we consider our “syntax only” control condition, where syntactic agreement information alone is sufficient to make the correct prediction and no contextual information is available. Unlike the “context only” condition, where effects at the rest of the sentence precede contextualization of the mask and the effect at the options remains stronger than at the mask as late as layer 11 (t(57) = -5.4, p<0.05), the “syntax” condition is strictly dominated by the effect at the mask starting around layer 7 (t(57) = 3.8, p<0.05). This effect is illustrated in <ref>B, where we see a much more localized circuit between the auxiliary verb, where a number agreement cue is provided, and the MASK, where the prediction is generated; effects at other sites are much more muted. Interestingly, in the combined (context+syntax) condition where both cues are available, information at the mask still dominates starting at the same layer as in the “syntax” condition (p<0.05; not shown). We found qualitatively similar results for the largest RoBERTa model, but not for the smaller models (see <ref>). §.§ Head-wise causal interventions These coarse layer-wise analyses suggest that global “situational” information may be constructed from the context word, and integrated elsewhere in the sentence, whereas purely syntactic agreement information may be accessed more directly by the masked pronoun. However, it remains unclear exactly how the transformer accomplishes this task at an algorithmic level, and a lot of complexity was potentially hidden in the “rest” category. In this section, we conduct a finer-grained head-wise analysis of the individual components within each layer that are responsible for routing contextual information between tokens (<ref>).
We consider four internal components: (1) the final transformation vector produced by the head that gets concatenated back to other heads before being projected back into the residual stream, (2) the query vector that “imports” information into a given source site, (3) the key vector at other sites which matches the query to yield the attention matrix, and (4) the value vector that is “exported” from the target site (see <ref> for a schematic). For ease of interpretability, we will refer to effects of intervening on each of these components in terms of an implied “source → target” pathway. For example, if intervening on the value exported from the context token has the same effect as intervening on the ultimate transformation vector at the options, we loosely say that this is a “context → options” head. The effect of swapping the full transformation vector is our primary measure of how much pronoun-relevant context-sensitivity is introduced into the residual stream by that head at that layer overall, which can then be broken down into its contributing subcomponents. To build intuition, we first depict the complete profile of causal intervention effects for a single head (head 8) across layers and tokens of an example sentence (<ref>A; see <ref> for the same analysis applied to the largest RoBERTa model). Consistent with our coarser layer-wise analysis above, we find that intervening on transformations at the comma token around layer 8 (and a few layers later at the MASK token) significantly interferes with the model's ability to correctly resolve the referent. We then gained further insight through targeted interventions on the sub-components used to calculate the transformation vector. Specifically, we find that intervening on the value vector exposed at the context word (successful) accounts for a significant proportion of the total effect at middle layers. This effect appears to be largely restricted to the context-only condition; causal interventions on the same head in the syntax-only condition (<ref>B) reveal only the more local “verb → mask” path. Moving to a more systematic analysis, we find that a set of 23 heads (less than half of the 64 total heads) show a significant effect of intervening on the transformation for at least one layer and one site (p<0.05, after correcting for multiple comparisons; 15 survive at the p<0.01 level; 13 at the p<0.005 level; and 11 at the p<0.001 level). Examining the internal components of each of these attention heads (keys, queries, values) allows us to construct a preliminary computational graph of how contextual cues eventually propagate to the masked site, aggregating over many sentences. <ref> shows the layer-wise profile for five representative heads, each representing a systematic link between a particular set of sites (see <ref> for all heads). Roughly, this graph suggests that the model begins by shifting information from the context site into “neutral” sites throughout the rest of the sentence via head 8 (e.g. value vector at context word, layer 10: t(57)=6.1, p<0.05) and head 11 (value vector at context word, layer 6: t(57) = 4.9, p<0.05). Shortly thereafter, head 45 exports that information to the two noun phrases (options) that are the possible referents of the pronoun (transformation vector at options, layer 8: t(57)=6.9, p<0.05), as does head 53 (transformation vector at options, layer 11, t(57) = 7.1, p<0.05).
Then, at the final layer of the model, head 60 preferentially attends from the mask to one of the two options, determining the ultimate prediction (key vector at the options, layer 11, t(57) = 8.0, p<0.05). §.§ Analysis of context-specificity Critically, although some of these heads are also implicated in the syntax-only condition (see <ref> for full profiles), there are dramatic quantitative differences in the pattern of these heads across layers. The specificity profile is shown for all active heads in <ref>, where specificity is defined with respect to the context vs. syntax comparison: an orange cell indicates “context-specificity” (i.e. selective activation only in the context condition) while a blue cell indicates “syntax-specificity” (i.e. selective activation only in the syntax condition). Broadly, we observe that many heads, including those highlighted above, yield significant context-specific effects at earlier layers, especially at “rest” sites (left panel; see <ref> for an analysis of RoBERTa). Conversely, there are a number of heads that yield syntax-specific effects at “mask” sites (middle panel). Although this pattern of specificity remains highly exploratory, it suggests that the network may have learned different pathways for pronoun resolution: cues like verb number information are directly exposed to guide the mask token, while contextual information must be integrated with other relevant semantic cues from elsewhere in the sentence. In other words, the latter may require constructing and querying a rudimentary situation model. § DISCUSSION In this paper, we presented a preliminary investigation of the transformer circuits underlying performance on Winograd sentences, where minimal contextual cues must be used to resolve an ambiguous pronoun. We applied fine-grained causal interventions to identify a circuit of attention heads that are responsible for propagating information from the context cue to the possible referents, which appears to be at least partially distinct from the circuit used to propagate agreement cues in our closely matched syntax-only baseline. It still remains to be seen whether the circuits we have identified should be interpreted as evidence for a bona fide situation model. First, it is possible that even on carefully debiased sets of examples, models like ALBERT are still relying on lexical shortcuts. For example, presented with “MASK was tasty” we would prefer the referent pie over the referent boat, knowing nothing about the rest of the sentence. If so, we may be measuring the circuit for those lexical preferences rather than for anything like a situation model (but see Appendix D for an additional analysis suggesting that this kind of phenomenon is unlikely to be driving the observed effects). Second, situation models present a well-known example of the frame problem <cit.> — it is impossible to explicitly enumerate every property in the world and how it is (or is not) related to every other. Hence, any model of interpretation must be “lazy” in some sense, only introducing relations that are relevant for the task at hand (e.g. the concept of availability for the “Paul tried to call George” example). When the task at hand is simply pronoun resolution, it is possible that an extremely minimal situation model may be sufficient; in the longer term, it will be important to explore settings that require richer or less accessible interpretations.
Finally, although we focused on implicit situation models constructed internally, it may also be possible to expose situation models more explicitly in larger auto-regressive models through chain-of-thought prompting <cit.>, which does not require model internals to be public. § LIMITATIONS Although our single-site interchange interventions provide causal evidence that particular sub-circuits are necessary for a particular downstream behavior, this technique has known limitations addressed by recent Distributed Alignment Search (DAS) approaches <cit.>. First, it will over-count certain “synergies”: when a single effect is jointly produced by the conjunction of multiple heads acting in concert, we will identify all heads as making distinct contributions to the circuit. Second, it will under-count “redundancies”: if there are multiple heads that are individually sufficient to produce the effect, then no single head will be detected as strictly necessary. Ideally, rather than single-site interventions, we would explore all combinations of different heads to find minimal spanning sets that are both necessary and sufficient, but this procedure becomes intractable given the number of heads, requiring more sophisticated optimization-based approaches to find promising sets <cit.>. § ETHICS STATEMENT All existing datasets (SuperGLUE and Winogrande) and models (BERT, RoBERTa, and ALBERT) were employed according to their intended research focus, and our targeted probing dataset is intended to be used for similar purposes in future work. Because it was constructed by a combination of automatic and manual processes by the authors, it contains no additional information that could uniquely identify any individuals. More broadly, we employ causal interventions to evaluate how models perform a challenging commonsense reasoning task, with the aim of building stronger links to classic work in cognitive science and psycholinguistics. However, these causal interventions may pose some risk if used adversarially to tamper with public models or expose private information. Further, the WSC and Winogrande datasets we use to probe situation models have been constructed within specific cultural settings (e.g. by NLP researchers and largely US-based crowd-workers) and are not intended to be universal or representative of situational competency: a wider diversity of culturally-specific stimuli is needed. § ACKNOWLEDGEMENTS We thank Yohei Oseki, Ryo Yoshida, Felix Hill, Devon Wood-Thomas, Aalok Sathe, Claire Bergey, and the Language Computational Cognitive Science Laboratory at UTokyo for thoughtful discussions and comments. This work was supported by the Princeton Data Driven Social Science initiative, a Seed Grant from the Princeton-UTokyo Strategic Partnership and a C.V. Starr Fellowship to RDH. Code & data are available at <https://github.com/taka-yamakoshi/situation-models>. An interactive demonstration of these causal interventions is available at: <https://huggingface.co/spaces/taka-yamakoshi/causal-intervention-demo>. § APPENDIX A: DATASET CONSTRUCTION We began with 70 sentences from the WSC subset of SuperGLUE and 9248 sentences from a portion of the Winogrande train split (see Appendix D for discussion of debiasing procedures). Many of the sentences from Winogrande were singletons, as the other sentence in the pair had been removed in the debiasing procedure, leaving 1361 intact pairs.
We then excluded sentences where we could not automatically obtain the index of the context word or options in the tokenized sequence (many of them had subtle differences in the sentence pair aside from the context word, with respect to the choice of words/punctuation, which were likely not intended by the original annotators of Winogrande), leaving 1140 pairs. In order to create a closely controlled “syntax” cue condition, we took the subset where the verb after the pronoun is either an auxiliary (“to be” or “to have”) or in present tense, indicating whether the masked pronoun is singular or plural. These verbs were identified using spaCy (version 3.2.1) <cit.>. We also ensured at this stage that all sentences had the same NP1-NP2-mask-context-verb ordering. This step reduced the number of sentence pairs to 38 and 167 for SuperGLUE and Winogrande respectively. The NPs and the verbs of these extracted sentences were then manually modified to create the agreement cue in the “syntax” condition, while the context words were manually modified to create the synonym condition. We excluded 5 out of 167 sentence pairs at this step, as we were unable to change the plurality of the NPs while preserving the semantic meaningfulness of the sentence (e.g. “James can count all the numbers on his fingers because the [MASK] are few/many.”). Finally, because the interchange intervention requires both sentences in a given pair to have exactly the same overall number of WordPiece tokens (in order to independently swap query, key or value features), we lightly modified the contexts to satisfy this condition. This step did not reduce the number of sentence pairs. The resulting dataset contains 200 sentence pairs each for the context, syntax and context+syntax cue conditions (38 from SuperGLUE and 162 from Winogrande) and 400 sentence pairs for the synonym condition (as we generated a synonym for each individual sentence in the pair). This dataset is available on Github (under MIT License)[<https://github.com/taka-yamakoshi/situation-models>]. § APPENDIX B: ZERO-SHOT EVALUATION DETAILS We used Hugging Face [Apache License 2.0] (version 4.16.2) <cit.> for all masked language models. One of the key features of ALBERT <cit.>, unlike the original BERT <cit.> or the subsequent RoBERTa <cit.>, is that it shares model parameters across layers, which makes it possible to meaningfully compare the same head in different layers. Relative to the next-largest ALBERT model, the model we consider allocates parameters toward increased “width” rather than “depth”, with larger hidden state dimension (4096 vs. 2048) and more attention heads (64 vs. 16) packed into a smaller number of layers (12 vs. 24). Running the entire analysis took approximately 200 hours using 2 GPUs (NVIDIA TITAN X (Pascal)). For a more state-of-the-art reference point, we also examined the performance of the most recent OpenAI GPT-4 model (using the provided snapshot from March 14, 2023). Because GPT-4 is auto-regressive and the disambiguating context in Winograd sentences typically follows the pronoun, we created a prompted variant of the text as suggested by the original proposers of the Winograd challenge <cit.>. That is, we presented the full sentence along with a question: “The trombone did not fit in the suitcase because it was too large. What was too large, the trombone or the suitcase?” and coded the model's free-text responses for matching either of the NP options (or “other” if it did not match either option).
We used a temperature of 1 and generated 50 responses for each question to account for sampling variation. For our chain-of-thought experiment, we used a prompt containing a single worked example. Finally, because our specific set of Winograd items had not previously been benchmarked against human performance, we elicited free-response judgments from N=199 human participants. We matched the wording of instructions closely to the prompt we used with GPT-4, and we coded responses in the same way. Each sentence received approximately 20 judgments, we ensured no participant saw more than one sentence from any given Winograd pair, and the order of presentation of the options was counterbalanced (i.e. half of participants saw “the trombone or the suitcase?” while the other half saw “the suitcase or the trombone?”). § APPENDIX C: HANDLING MULTI-TOKEN NPS In order to evaluate the model's ability to perform pronoun resolution, we replaced the pronoun with the mask token and compared the log probability of each NP option at the masked site. While this was straightforward in most cases, 28 of the 200 examples contained a multi-token option. We handled these cases by introducing multiple masks at the pronoun. We then calculated an average token-wise log probability in the following way. Suppose the option noun phrase NP consists of 3 tokens (w_k, w_k+1 and w_k+2) starting at the kth site (s_k, s_k+1 and s_k+2), where the pronoun is at the lth site (s_l). We then replaced the pronoun with three mask tokens and calculated the average log probability as 1/3[log P(s_l=w_k | s_l+1, s_l+2=MASK, s_-l=w_-l) + log P(s_l+1=w_k+1 | s_l, s_l+2=MASK, s_-l=w_-l) + log P(s_l+2=w_k+2 | s_l, s_l+1=MASK, s_-l=w_-l)], where s_-l=w_-l denotes that all sites other than s_l are filled with the corresponding tokens in the original sentence. Taking the average log probability effectively controls for the additional number of tokens. One concern with this method is that, if part of the noun phrase is a functional word (e.g. a determiner like “the”), it may drag down the observed effect of the entire phrase. While this is not ideal, we confirmed our results are not sensitive to other ways of aggregating across tokens and believe that aggregating across multiple tokens in this way is a conservative choice relative to other methods, like taking the maximum effect across each of the component tokens. § APPENDIX D: CHECKING DATASET BIASES A recurring concern in the literature on Winograd tasks is that many examples can be fully resolved using lexical “shortcuts” that do not require implicit world knowledge. We took several steps to ensure that our results are not explained by these biases. First, we used the debiased train split of Winogrande, which used an approach known as AfLite to filter out examples that could be solved using only the isolated embedding of the context word <cit.>. Second, due to differences in our mask-prediction task and the fine-tuning regime used by <cit.>, we conducted our own analysis of bias in our smaller dataset. We devised a simple alternative approach to test whether the isolated context embeddings (token representations at layer 0) had lexical preferences for the options. We first calculated correlations (and Euclidean distances) between the embedding vectors at the context and each of the options. If the context is more similar to N_1 (i.e. larger correlation or smaller distance), we say N_1 is predicted as the referent, while if the context is more similar to N_2, we say it predicts N_2.
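To illustrate the bias check just described, a minimal version of the layer-0 similarity probe could look like the following. The function name and the N1/N2 labels are ours, and the vectors would come from the model's input embedding layer.

import numpy as np

def embedding_bias_prediction(context_vec, option1_vec, option2_vec, metric="correlation"):
    """Predict the referent from layer-0 embeddings alone, using either
    Pearson correlation or Euclidean distance between the context word
    and each candidate option."""
    if metric == "correlation":
        s1 = np.corrcoef(context_vec, option1_vec)[0, 1]
        s2 = np.corrcoef(context_vec, option2_vec)[0, 1]
        return "N1" if s1 > s2 else "N2"
    d1 = np.linalg.norm(context_vec - option1_vec)
    d2 = np.linalg.norm(context_vec - option2_vec)
    return "N1" if d1 < d2 else "N2"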
This simple method made correct predictions in 15.5% (using correlation) or 16.5% (using Euclidean distance) of our 200 sentence pairs. As expected, this performance is significantly worse than the output MASK prediction. More importantly, out of the 63 sentence pairs we consider where correct predictions were made by the contextualized model, only 9 (10) were resolvable using word embeddings, based on correlation (distance). This analysis suggests that while some lexical shortcuts may still exist in our data, these biases are likely not driving our results.
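The multi-token scoring scheme of Appendix C can likewise be sketched in a few lines. This is an illustrative reimplementation under our own naming, using bert-base-uncased purely as a stand-in checkpoint, with a naive string replacement that assumes the pronoun occurs only once in the sentence.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def average_np_logprob(sentence, pronoun, noun_phrase):
    """Score a (possibly multi-token) noun phrase at the pronoun position by
    replacing the pronoun with as many mask tokens as the phrase has and
    averaging the per-token log probabilities from a single forward pass."""
    np_ids = tok(noun_phrase, add_special_tokens=False)["input_ids"]
    masks = " ".join([tok.mask_token] * len(np_ids))
    text = sentence.replace(pronoun, masks, 1)   # naive replacement, fine here
    enc = tok(text, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logprobs = model(**enc).logits[0].log_softmax(dim=-1)
    scores = [logprobs[p, t].item() for p, t in zip(mask_pos.tolist(), np_ids)]
    return sum(scores) / len(scores)

# Compare the two candidate referents for one sentence of a Winograd pair.
s = "Paul tried to call George but he wasn't available."
print(average_np_logprob(s, "he", "Paul"), average_np_logprob(s, "he", "George"))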
http://arxiv.org/abs/2306.01532v1
20230602133618
A strong comparison principle for the generalized Dirichlet problem for Monge-Ampere
[ "Brittany Froese Hamfeldt" ]
math.AP
[ "math.AP" ]
http://arxiv.org/abs/2306.10593v1
20230618163544
Cosmic GREA from SMBH growth
[ "Juan Garcia-Bellido" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
[email protected] Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049 Madrid, Spain IFT-UAM/CSIC-23-72 General Relativistic Entropic Acceleration (GREA) gives a general framework in which to study multiple out-of-equilibrium phenomena in the context of general relativity, like the late accelerated expansion of the universe or the formation of galaxies and the large scale structure of the universe. Here we study the consequences of mass accretion onto massive Black Holes. We find that a population of Super Massive Black Holes (SMBH) whose mass grows significantly due to accretion can act as a source of entropic acceleration and constitute a significant part of the present acceleration of the Universe. Cosmic GREA from SMBH growth Juan García-Bellido July 31, 2023 ============================ § I. INTRODUCTION Understanding the origin of Dark Matter (DM) and Dark Energy (DE) is one of the fundamental quests of Modern Cosmology. Although their phenomenology is well understood, and the actual values of the parameters that characterize these two contributions to the matter and energy content of the universe can be determined to the level of a few percent, their nature is a total mystery. Primordial Black Holes (PBH) have recently experienced a renaissance as a serious contender for all the DM in the universe <cit.>, when generated during the radiation era from large matter fluctuations that seed small and large scale structures. Those PBH can account for a plethora of astrophysical and cosmological phenomena, and have been suggested to explain the unexpected gravitational wave events seen by LIGO/Virgo <cit.>. Moreover, we have recently proposed that for systems in which there is a production of entropy, the laws of thermodynamics must be incorporated into the Einstein equations via a thermodynamic constraint on the variational principle of the matter-gravity action <cit.>. An effective way of doing this is by adding a viscosity term to the energy-momentum tensor. This new term could give rise to an accelerating universe from the quantum entanglement entropy associated to our cosmological horizon <cit.>, and has been compared with cosmological observations <cit.>. However, other sources of entropy could also contribute to the local expansion of the universe. We pointed out in Ref. <cit.> that in the absence of matter accretion or merging, the conservation of BH entropy cannot account for any cosmic acceleration. However, super massive black holes (SMBH) at the centers of galaxies are surrounded by a massive accretion disk which feeds the black hole, and their masses can grow very fast <cit.>. We consider here the effect that such an early growth in the mass of SMBH has on the entropic force that accelerates the universe. We will conclude that these PBH seeds, which have grown since recombination to become the present SMBH <cit.>, could also be the source of the Dark Energy of the universe. § II. GREA AND THE EINSTEIN EQUATIONS The basic concept here is coarsegraining. Suppose we have a mechanical system that consists of two components: i) a set of slow degrees of freedom described by canonical coordinates with conjugate momenta (q, p), and ii) some fast degrees of freedom coarsegrained as a thermodynamical system characterized by macroscopic quantities, entropy and temperature (S, T). The action is then given by S = 1/2κ ∫ d^4x √(-g) R + ∫ d^4x √(-g) L_m(g_μν, s), where s is the entropy density and κ = 8π G.
The variational principle tells us that δS = ∫ d^4x (1/2κ δ(√(-g) R)/δ g^μν + δ(√(-g) L_m)/δ g^μν) δ g^μν + ∫ d^4x √(-g) ∂ L_m/∂ s δ s. The interaction between the two components is described by a thermodynamical constraint, in the form of the First Law of Thermodynamics, ∂ L_m/∂ s δ s = f_μν δ g^μν, which gives rise to the Einstein field equations extended to out-of-equilibrium phenomena <cit.>, G_μν = R_μν - 1/2 R g_μν = κ (T_μν - f_μν) ≡ κ T̃_μν. Here f_μν arises from the first law of thermodynamics -dW = - F⃗· dx⃗ = dU + (p - TdS/dV)dV ≡ dU + p̃ dV, where we have defined an effective pressure p̃ which reduces to the usual fluid pressure p in the absence of entropy production. This extra component to the Einstein equations can be interpreted as an effective bulk viscosity term of a real (non-ideal) fluid <cit.>, with Θ = D_λ u^λ the trace of the congruence of geodesics, f_μν = ζ Θ (g_μν + u_μ u_ν) = ζ Θ h_μν, such that the covariantly-conserved energy-momentum tensor has the form of a perfect fluid tensor, T̃^μν = p g^μν + (ρ + p)u^μ u^ν - ζ Θ h^μν = p̃ g^μν + (ρ + p̃)u^μ u^ν, and, imposing the thermodynamic constraint (<ref>), the bulk viscosity coefficient ζ can be written as ζ = (T/Θ) dS/dV. In the case of an expanding universe, Θ = d/dt ln V = 3H and the coefficient becomes ζ = TṠ/(9H^2a^3), see <cit.>, with S the entropy per comoving volume of the Universe. Entropy production therefore implies ζ > 0. Note that the energy-momentum tensor is still diagonal, T̃^μ_ν = diag(-ρ, p̃, p̃, p̃), and that the 00 component is unchanged with respect to GR. Only the ij component has the entropy-growth dependence via p̃. The Raychaudhuri equation <cit.> for geodesic motion (a^μ = u^ν D_ν u^μ = 0) in the absence of shear (σ_μν σ^μν = 0) and vorticity (ω_μν ω^μν = 0) is given by D/dτ Θ + 1/3 Θ^2 = - R_μν u^μ u^ν = - κ(T_μν u^μ u^ν + 1/2 T^λ_λ - 3/2 ζΘ) = - κ/2(ρ + 3p̃) = -κ/2(ρ + 3p - 3TdS/dV). Due to the extra entropic term in the effective pressure p̃, even for matter that satisfies the strong energy condition, ρ + 3p > 0, it is possible to prevent gravitational collapse, i.e. Θ̇ + Θ^2/3 > 0, as long as the production of entropy is significant enough, 3TdS/dV > (ρ + 3p) > 0. § III. ENTROPY OF THE BH HORIZON One can also wonder about the effect of the entropy associated to space-time itself, in particular to horizons. It can be incorporated in a natural way by extending the Einstein-Hilbert action with a surface term, the Gibbons-Hawking-York (GHY) term of Refs. <cit.>. Let us consider a space-time manifold ℳ with metric g_μν, which has a horizon hypersurface that we denote by ℋ. This is a submanifold of the whole space-time. By taking n^μ, the normal vector to the hypersurface ℋ, we can define an inherited metric on ℋ: h_μν = g_μν + n_μ n_ν. With this, one can define the GHY term as S_GHY = 1/8π G ∫_ℋ d^3y √(h) K, where K is the trace of the extrinsic curvature of the surface. Notice that we are not foliating the entire space-time, but rather considering the properties of a particular hypersurface, the horizon. From the thermodynamic point of view, the GHY term contributes to the free energy of the system. Hence, it can be related to the temperature and entropy of the horizon as <cit.> S_GHY = - ∫ dt N(t) T S, where we have kept the lapse function N(t), to indicate that the variation of the total action with respect to it will generate a Hamiltonian constraint with an entropy term together with the ordinary matter-energy terms.
This leads to the realization that what drives gravity in a thermodynamical context is not just the internal energy of a system, U, but its Helmholtz free energy, F = U - TS. In other words, entropy gravitates, or perhaps we should better say entropy “antigravitates”, since it is responsible for a repulsive force. What gravitates is information. §.§.§ Schwarzschild black hole In order to illustrate this, let us now compute the GHY action for the event horizon of a Schwarzschild black hole of mass M. Its space-time is described by the metric: ds^2 = - [1 - 2GM/r] dt^2 + [1 - 2GM/r]^-1 dr^2 + r^2 dΩ^2. The normal vector to a 2-sphere of radius r around the origin of coordinates is n = - √(1 - 2GM/r) ∂_r. With this, the trace of the extrinsic curvature for such a sphere scaled by the metric determinant is √(h) K = (3GM - 2r) sinθ. Integrating over the angular coordinates and setting the 2-sphere at the event horizon, i.e. r = 2GM, and restoring c for a moment, the GHY boundary term becomes S_GHY = - 1/2 ∫ dt Mc^2 = - ∫ dt T_BH S_BH, where T_BH is the Hawking temperature and S_BH is the Bekenstein entropy of the Schwarzschild black hole <cit.>, T_BH = ħ c^3/8π G M, S_BH = 4π G M^2/ħ c. This favors the interpretation of the GHY boundary term as a contribution to the Helmholtz free energy in the thermodynamic sense, F = U - TS. Note also that the action (<ref>) is classical, essentially the rest mass energy of the BH, although both the Hawking temperature and Bekenstein entropy are quantum mechanical quantities, associated with the entanglement of the fundamental degrees of freedom between the interior and exterior of the horizon of a black hole. We interpret this result as being an emergent phenomenon, from microscopic degrees of freedom to a coarsegrained description in terms of thermodynamical quantities like temperature and entropy, where all fundamental constants (ħ, k_B, G) cancel out, except c. In Ref. <cit.> we argued that, in the absence of significant clustering or merging, the masses of black holes remain constant and thus there would be no entropy production associated with stellar black holes in our universe. However, let us consider here an alternative possibility. § IV. PBH AT THE ORIGIN OF BOTH DARK MATTER AND DARK ENERGY We consider here the possibility that Dark Matter is composed of Primordial Black Holes (PBH) and that a small fraction of these black holes, with masses M_BH ∼ 10^6 M_⊙, constitute the seeds of SMBH at the centers of galaxies <cit.>. These black holes accrete mass from the environment at a rate that is commensurate with the rate of expansion of the universe. When accretion of gas from the surroundings reaches the Eddington limit, the mass of the SMBH grows like Ṁ = 4π G m_p/(0.1 c σ_T) M ≃ M/40 Myr = 2/t(z_*) M, where m_p is the proton mass and σ_T is the Thomson cross-section, and we have used t(z_*≃35) = 80 Myr for a Universe with Ω_m = 0.31. If we now assume that SMBH continue to accrete gas at the Eddington limit with a rate that decreases in time with the density of matter available in the universe (ρ̇/ρ = - 2/t), then the mass of SMBH will grow due to accretion as M ∝ t^2 ∼ a^3 = V in the past, at least since z_*≃35, see also <cit.>.
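A quick numerical illustration of these scalings, with assumed standard values for H_0 and Ω_m and our own variable names, reproduces the 80 Myr age, the 40 Myr e-folding time, and the (1+z_*)^3 mass growth factor quoted above:

import math

H0_inv_Gyr = 14.4            # 1/H0 in Gyr for h ~ 0.68 (assumed)
Omega_m = 0.31
z_star = 35.0

# Age at z_* in the matter-dominated approximation: t = (2/3) H0^-1 Omega_m^-1/2 (1+z)^-3/2
t_star_Myr = (2.0 / 3.0) * H0_inv_Gyr * 1e3 / math.sqrt(Omega_m) * (1 + z_star) ** -1.5
print(f"t(z* = {z_star:.0f}) ~ {t_star_Myr:.0f} Myr")      # ~80 Myr

# Eddington-limited growth rate quoted in the text: Mdot/M = 2/t at z_*
salpeter_Myr = t_star_Myr / 2.0
print(f"e-folding time ~ {salpeter_Myr:.0f} Myr")

# If Mdot/M = 2/t persists, M grows as t^2 ~ a^3; from z_* to today that is (1+z_*)^3
growth = (1 + z_star) ** 3
seed_mass = 1e6                                            # in solar masses
print(f"growth factor ~ {growth:.2e}; 1e6 Msun seed -> {seed_mass * growth:.1e} Msun")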
If we compute the general relativistic entropic acceleration (GREA) induced by the growth of entropy associated with this increase in mass, V dS_SMBH = 2 S_SMBH dV, we see that it contributes a constant negative pressure p_S = -TdS/dV = - 2TS/V = - N_SMBH M_SMBH/V = - ρ_SMBH, where the total entropy is S = ∑_i S^(i)_SMBH = N_SMBH S_SMBH, with N_SMBH the total number of SMBH in the universe, assumed constant (i.e. without SMBH merging). We then find the Raychaudhuri equation (<ref>), where we have separated the ordinary (adiabatic) matter characterized by (ρ, p) from the SMBH, Ḣ + H^2 = ä/a = - 4π G/3(ρ + 3p + ρ_SMBH + 3p_S) = -4π G/3(ρ + 3p) + 8π G/3 ρ_SMBH. The last constant term can be interpreted as an effective and positive Cosmological Constant term Λ = κ ρ_SMBH, driving cosmic acceleration, while the rest of the (adiabatic) matter and radiation in the universe, the baryons and photons, as well as the PBH that do not accrete significantly and thus act as Cold Dark Matter, would contribute to cosmic deceleration. We can now evaluate the contribution of such a term to the present acceleration of the universe. If PBH constitute all of the Dark Matter in the universe, and a small fraction of these (the seeds of SMBH) have masses that increase with the cosmic volume, then their contribution is identical to that of a cosmological constant with the same density, H^2 = 8π G/3(ρ + ρ_PBH + ρ_SMBH). What these equations (<ref>) and (<ref>) are telling us is that primordial SMBH, rather than contributing to the Dark Matter of the universe today, are actually the source of Dark Energy. Whether there has been a gradual change from DM to DE over the course of time, as PBH grow due to accretion, is a matter of discussion when compared with cosmological observations <cit.>. There is also the possibility that only a small fraction of the PBH that contributed to DM in the early universe, e.g. just the SMBH in the centers of galaxies, grew sufficiently rapidly to contribute significantly to the GREA of the universe. In that case, the bulk of the PBH would still contribute to DM today and only a small fraction, around f_SMBH ≃ 5×10^-5 of all PBH, would contribute to DE in the form of rapidly accreting SMBH with entropy growth associated with their horizons (<ref>). The rapid increase in mass of the SMBH at the centers of galaxies since z_* ≃ 35 can quickly increase their contribution to DE (and compensate for their tiny contribution in numbers to the total amount of PBH), Ω_DE = f_SMBH Ω_PBH (1+z_*)^3 ≃ 0.69, for Ω_PBH = 0.26. A more sophisticated computation may be needed for the case that PBH in a broader range of masses happen to accrete mass at a slower rate but for a longer time, Ω_DE = ∫ (f(M)/M) (dM/dz) dz, where f(M) is the fraction of DM in the form of PBH that accrete mass with a rate dM/dz. The integral of all contributions would have to be used to estimate not only the actual value of Ω_DE, but also its possible rate of change, and thus the effective DE parameters (w_0, w_a), which could then be compared with observations <cit.>. Such a computation is beyond the scope of this letter. Note that we have assumed above that the SMBH are uniformly distributed in our universe when they started accreting gas from their surroundings at z_* ∼ 35. Their density at that time, ρ_SMBH ≃ f_SMBH Ω_PBH ρ_c^0 (1+z_*)^3 ≃ 10^6 M_⊙/(20 kpc)^3, is about a SMBH within a sphere of 20 kpc radius, which would correspond to a comoving radius of less than a Mpc today, well below the scale of inhomogeneities.
Therefore, we can ignore possible inhomogeneities in the local GRE acceleration induced by the distribution of SMBH and their mass growth. § V. CONCLUSIONS We have explored in this letter the possibility that GREA may account for the present acceleration of the universe arising from the entropy growth associated with the mass accretion onto SMBH since the cosmic dark ages. This way, a tiny fraction of the PBH that seed structure and contribute to the Dark Matter of the universe, more specifically the SMBH at the centers of galaxies seeded by the massive PBH from the e^+e^- annihilation era <cit.>, would gain mass and also drive the acceleration of the universe. It is only recently that SMBH, seeded by massive PBH, have started to accrete mass and induce a cosmic acceleration via the entropic force associated with GREA. In the past, only matter and radiation drove the decelerated expansion of the universe. When GREA started to drive acceleration via SMBH growth, it acted as an effective cosmological constant term, which eventually dominated the free energy budget. This could explain the actual value of the so-called dark energy density today, as well as the coincidence problem, i.e. why both dark energy and dark matter contributions are of the same order. The local GREA around each SMBH was sufficiently uniformly distributed in the past (when mass growth from accretion was dominant), over comoving scales of order a Mpc, that we can assume homogeneity of the accelerated expansion over the entire universe. Moreover, GREA from the cosmological causal horizon <cit.> is still a valid alternative. Both come from emergent phenomena associated with horizon entropies; they both have a quantum origin in entanglement, and they could have comparable contributions to the present acceleration of the universe. Which one dominates today, and how this splitting determines the rate of change of acceleration over time, and thus observations of the effective Dark Energy parameters (w_0, w_a), is still a matter of investigation. What is the fate of the SMBH's contribution to DE? There will be a time at which the mass growth of SMBH stops, once they have consumed the majority of the gas in their accretion disks. From then onwards, SMBH will conserve entropy (unless they merge with other SMBH), and thus the associated GREA will stop driving the acceleration. In fact, this epoch of stalled mass growth may be near the present age of the Universe. These quiescent supermassive black holes will then only contribute as CDM, like the rest of the PBH, and thus will decelerate the expansion of the universe, which will redshift away their energy density and end in an empty Universe (possibly after the evaporation of all these BH due to Hawking radiation <cit.>), corresponding to a Minkowski space-time. § ACKNOWLEDGEMENTS The author acknowledges support from the Spanish Research Project PID2021-123012NB-C43 [MICINN-FEDER], and the Centro de Excelencia Severo Ochoa Program CEX2020-001007-S at IFT.
http://arxiv.org/abs/2306.08613v1
20230614161916
Quantum Control of Radical Pair Dynamics beyond Time-Local Optimisation
[ "Farhan T. Chowdhury", "Matt C. J. Denton", "Daniel C. Bonser", "Daniel R. Kattnig" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph", "quant-ph" ]
Department of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL, UK By extending Gradient Ascent Pulse Engineering (GRAPE) to allow for optimising reaction yields, we realise arbitrary waveform-based control in spin-selective recombination reactions of radical pairs in the low magnetic field regime. This overcomes drawbacks of previous time-local optimisation approaches for realising reaction control, which were limited in their applicability to radical pairs driven by high biasing fields. We demonstrate how efficient time-global optimisation of the radical pair recombination yields can be realised by gradient based methods augmented with time-blocking, sparse sampling of the yield, and evaluation of the central single-timestep propagators and their Fréchet derivatives using iterated Trotter-Suzuki splittings. Results are shown for both a toy model, previously used to demonstrate coherent control of radical pair reactions in the simpler high-field scenario, and furthermore for a realistic exciplex-forming donor-acceptor system comprising 16 nuclear spins. This raises prospects for the spin-control of actual radical pair systems in ambient magnetic fields, by suppressing or boosting radical reaction yields using purpose-specific radio-frequency waveforms, paving the way for radical-inspired qubit architectures for reaction-yield-dependent quantum magnetometry and potentially applications of quantum control to biochemical radical pair reactions. Quantum Control of Radical Pair Dynamics beyond Time-Local Optimisation Daniel R. Kattnig 14 June 2023 ======================================================================= § INTRODUCTION The rapidly evolving field of optimal quantum control (OQC) seeks to manipulate phenomena at the quantum scale by devising and implementing perturbations, typically in the form of electromagnetic pulses, to steer a given quantum system to a desired target. Fuelled by demonstrable successes in nuclear magnetic resonance (NMR) <cit.> pulse engineering and the control of ultra-fast excited-state reactions by laser fields <cit.>, pioneering developments in “coherent control” were first conceived in the field of chemistry. Today, OQC has developed into a mature discipline that is central to modern quantum technologies associated with the second quantum revolution <cit.>, with numerous applications across quantum information processing and quantum metrology <cit.>. We here demonstrate the use of a Gradient Ascent Pulse Engineering (GRAPE)-inspired approach for the open-loop control of singlet recombination yields of radical pairs in weak magnetic fields, thereby putting a renewed spotlight on the control of spin-chemical reactions. Reactions involving the recombination of radical pair intermediates are well-known to depend on spin degrees of freedom and their dynamics. Specifically, the electronic singlet and triplet states associated with the two unpaired electrons, one in each radical, of the radical pair can undergo coherent inter-conversion as a consequence of symmetry-breaking interactions—in particular, the hyperfine interactions with surrounding magnetic nuclei. As radical pair recombination preserves spin-multiplicity in the absence of strong spin-orbit coupling, provided that the singlet and triplet states exhibit differential chemical reactivity, the radical pair spin dynamics is reflected in the reaction yields realised via the singlet and triplet channels.
By coupling to magnetic fields via the Zeeman interaction, radical pairs furthermore acquire sensitivity to static and oscillatory applied magnetic fields. Studies of the effects of oscillatory magnetic fields on radical pair reactions have been realised for chemical model systems and are central to identifying magnetosensitive radical pair reactions in biology <cit.>. While the majority of model studies looked at magnetic field effects (MFEs) of monochromatic radio-frequency magnetic fields in the presence of a strong biasing field <cit.>, often referred to as reaction yield detected magnetic resonance (RYDMR), some studies have been realised in a weak static field, in particular for exciplex systems <cit.>. Notably, the radical pair mechanism, or variations thereof <cit.>, is also hypothesised to underpin a compass sense in various animals, including migratory songbirds <cit.>. In this case, the magnetosensitive radical pair is thought to originate from a photo-induced electron transfer reaction in the flavo-protein cryptochrome. Even though the exact nature of the underlying radical pair has remained unclear, the fact that this compass sense is interruptible by weak monochromatic <cit.> or broad-band radio-frequency <cit.> magnetic fields strongly supports an underlying spin-mediated mechanism. These observations support the prospect of controlling chemical reactions involving radical pairs, possibly in the near future in the spin-biological context, and in the challenging low-field regime, provided that an efficient approach to pulse engineering can be developed. In their recent work, Masuzawa et al. <cit.> suggested the control of radical pair reactions based on model calculations for a prototypical spin system. They focus on the classical RYDMR scenario, for which the applied electromagnetic field, in resonance with the electronic Zeeman splitting produced by a strong static field, perturbs the singlet-triplet interconversion of the radical pair and thus affects the yield of geminate recombination. Their control approach, based on theory by Sugawara <cit.>, propagates the spin density operator whereby at every moment the control amplitude is chosen such that an optimisation criterion is bound not to decrease. Since the algorithm optimises the controls in a time-local fashion, it is dubbed local optimisation. By construction, the employed algorithm allows optimising the trajectory of an observable commuting with the time-independent dynamics generator, i.e. the drift Hamiltonian, or, alternatively, an arbitrary observable at one chosen moment in time (via back-propagation). While the combined population of the singlet (S) and one of the triplet states (T_0) is hence controllable over time in the high-field limit, the more relevant singlet population falls in the second category. Our approach overcomes ostensible drawbacks of the time-local optimisation approach by permitting time-global optimisation and by focusing on the singlet recombination yield, i.e. the actual experimental observable (rather than S and T_0 populations over time or the singlet probability at a single moment in time). For realising quantum control of radical pair dynamics in a weak static magnetic field, both of these developments are critical. We deviate from traditional GRAPE by optimising reaction yields (rather than a fidelity measure defined at a fixed moment in time), accounting for asymmetric radical pair recombination, and, following de Fouquieres et al. 
<cit.>, by using curvature information of the loss function (rather than steepest descent/ascent). We furthermore address a major objection to GRAPE-based reaction control of radical pairs, namely high computational demands, as also voiced by Masuzawa et al. <cit.>, by suggesting a block-optimisation scheme and sparse sampling of the yield to improve the efficiency of the approach while retaining adequate control fidelity. § THEORY §.§ Radical Pair Spin Dynamics We consider a system comprising radicals A^∙- and B^∙+ subject to singlet-triplet interconversion, undergoing spin-selective recombination reactions as per Fig. <ref>. The corresponding spin dynamics is described in terms of the time-dependent spin density matrix ρ̂(t) by dρ̂(t)/dt = -i[Ĥ(t),ρ̂(t)] - {K̂, ρ̂(t)}, with Ĥ(t) denoting the spin Hamiltonian and K̂ = (k_S/2)P̂_S + (k_T/2)P̂_T accounting for radical pair recombination. Here, k_S and k_T denote the reaction rate constants in the singlet and triplet state, respectively, and P̂_S, P̂_T are the projection operators onto the singlet and triplet states. We introduce an effective Hamiltonian given by Â(t) = Ĥ(t) - iK̂, allowing us to reformulate eq. (<ref>) as ∂ρ̂(t)/∂t = -i⟦Â(t), ρ̂(t)⟧ = -i(Â(t)ρ̂ - ρ̂Â(t)^†), where ⟦A,B⟧ := AB - (AB)^† = AB - BA^† for B = B^†. The formal solution to eq. (<ref>) is given by ρ̂(t_n) = Û(t_n,0)ρ̂(0)Û^†(t_n,0), where the time evolution operator in terms of Â(t) is Û(t,0) = 𝒯exp[-i∫_0^t Â(τ)dτ]. Here, 𝒯 denotes the time-ordering operator, which reorders products of time-dependent operators such that their time arguments decrease going from left to right. The singlet channel recombination yield is given by Y_S = k_S∫_0^∞ P̂_S(t) dt, where P̂_S(t) = Tr[P̂_S ρ̂(t)] is the survived singlet probability of the radical pair. Note that eq. (<ref>) is of truncated Lindblad form (by introducing shelving states, an equivalent formulation in traditional Lindblad form is also possible <cit.>). Thus, the essentially open quantum system description of radical pair spin dynamics in the presence of spin-selective recombination, eq. (<ref>), can be treated in terms of the dynamics of a closed system, albeit with a non-Hermitian dynamics generator, non-unitary evolution operators and a non-conserved trace. Here, we consider radical pairs for which the static part of the Hamiltonian, Ĥ_0, is of the form Ĥ_0 = Ĥ_A + Ĥ_B + Ĥ_AB, where Ĥ_i, i ∈ {A,B}, is local to radical i and comprises the Zeeman interaction with the static magnetic field and isotropic hyperfine couplings with surrounding nuclear spins. Ĥ_AB accounts for inter-radical couplings in the form of the exchange interaction. Specifically, each radical is described in terms of the corresponding nuclear and electron spin angular momentum operators Î_ik and Ŝ_i by Ĥ_i = ω_i Ŝ_iz + ∑_k=1^n_i A_ik Î_ik·Ŝ_i and Ĥ_AB = -j_ex(2Ŝ_A·Ŝ_B + 1/2), where ω_i = -γ_i B, with γ_i denoting the gyromagnetic ratio of the electron in radical i and B the applied magnetic field, A_ik is the isotropic hyperfine coupling constant between electron spin i and the k-th nuclear spin (out of n_i in total), and the exchange coupling constant is denoted by j_ex. The initial density operator is assumed as ρ̂(0) = P̂_S/(Z_1 Z_2), where Z_i = ∏_k=1^n_i (2I_ik+1) is the total number of nuclear spin states in radical i and P̂_S = 1/4 - Ŝ_1·Ŝ_2 is the singlet projection operator. 
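To make the notation above concrete, the following minimal Python sketch builds Ĥ_0, K̂ and Â = Ĥ_0 - iK̂ for a deliberately small toy radical pair (one spin-1/2 nucleus per radical) and evaluates the singlet probability and recombination yield by direct propagation of ρ̂(t). All parameter values are assumed, purely illustrative numbers and are not those of any system studied in this work; only the uncontrolled (drift) dynamics is propagated, so no control field enters yet.

import numpy as np
from scipy.linalg import expm

# single spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2, dtype=complex)

def embed(op, pos, n):
    """Kronecker-embed a single-spin operator at site `pos` of an n-spin register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == pos else e2)
    return out

n = 4                                   # electrons on sites 0,1; one proton per radical (sites 2,3)
S1 = [embed(s, 0, n) for s in (sx, sy, sz)]
S2 = [embed(s, 1, n) for s in (sx, sy, sz)]
I1 = [embed(s, 2, n) for s in (sx, sy, sz)]
I2 = [embed(s, 3, n) for s in (sx, sy, sz)]

def dot(A, B):
    return sum(a @ b for a, b in zip(A, B))

# assumed toy parameters; all angular frequencies in rad/us
gamma_e = -1.761e2                      # free-electron gyromagnetic ratio in rad us^-1 mT^-1
B0 = 1.0                                # static biasing field / mT (assumed)
a1, a2 = 2 * np.pi * 5.0, 2 * np.pi * 3.0   # isotropic hyperfine couplings (assumed)
kS, kT = 2.0, 1.0                       # singlet/triplet recombination rate constants / us^-1 (assumed)

omega = -gamma_e * B0
H0 = omega * (S1[2] + S2[2]) + a1 * dot(I1, S1) + a2 * dot(I2, S2)
PS = 0.25 * np.eye(2**n) - dot(S1, S2)          # singlet projector P_S = 1/4 - S1.S2
PT = np.eye(2**n) - PS
K = 0.5 * kS * PS + 0.5 * kT * PT               # K = (k_S/2)P_S + (k_T/2)P_T
A = H0 - 1j * K                                  # non-Hermitian dynamics generator A = H - iK

dt, nsteps = 0.001, 8000                         # time step and number of steps (us)
U = expm(-1j * A * dt)                           # drift-only propagator (time-independent here)
rho = PS / 4.0                                   # normalised singlet-born initial state (Z1*Z2 = 4)
ps = np.empty(nsteps + 1)
for i in range(nsteps + 1):
    ps[i] = np.trace(PS @ rho).real              # survived singlet probability P_S(t)
    rho = U @ rho @ U.conj().T
Y_S = kS * np.sum(ps) * dt                       # singlet yield via simple rectangle-rule quadrature
print(f"singlet recombination yield Y_S = {Y_S:.4f}")

The same construction generalises to more nuclear spins, at exponentially growing cost, which is what motivates the efficiency measures introduced below.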
§.§ Gradient-based coherent control Without loss of generality, we can describe a system undergoing spin dynamics under coherent control with Ĥ(t) = Ĥ_0 + ∑_l=1^M u_l(t) V̂_l, where Ĥ_0 is the drift Hamiltonian describing the time-independent part of the system, i.e. the intrinsic radical pair dynamics in the static bias field, as introduced above, u_l(t) are the control amplitudes and V̂_l is the set of control Hamiltonians that couple the controls to the system, for example via the Zeeman interaction. The goal of OQC is to minimise some control objective functional G[{u_l(t)}], with a control target subject to physical constraints and other criteria. For our purposes, we focus on minimising the singlet recombination yield of the radical pair, i.e. G = Y_S. However, the approach is easily generalised to focus on other reaction outcomes, e.g. accumulated nuclear polarisation. Significant prior work has concentrated on optimising G subject to the assumption that the values taken by the controls over time may be parameterised by piecewise constant control amplitudes in the time domain <cit.>. This paradigm, which is central to GRAPE as the first widely applied quantum control algorithm <cit.>, is well aligned with the practical aspect of controlling radical pairs, as standard arbitrary waveform generator (AWG) based control schemes represent pulse inputs as piecewise constant functions. We thus discretise the time axis, t_0 = 0 < t_1 < t_2 < ⋯ < t_N = t_max, and we set Ĥ(t) ≈ Ĥ_0 + ∑_l=1^M∑_n=1^N u_l,n χ_n(t) V̂_l with constants u_l,n and the indicator function χ_n(t) defined such that χ_n(t) = 1 for t_n-1 ≤ t < t_n and χ_n(t) = 0 otherwise. The corresponding propagators Û(t_n,0) are discretised on the temporal grid such that Û(t_n,0) = 𝒯∏_i=1^n Û_i = Û_n ⋯ Û_1, where Û_i = e^-i Â_i Δt_i with Δt_i = t_i - t_i-1 and Â_i = Â(t_i-1). The gradients of G depend on the propagator derivatives, evaluated as ∂Û_i/∂u_l,j = δ_i,j L_i,l = δ_i,j L(-iÂ_iΔt_i, -iV̂_lΔt_i), where L(X,E) is the Fréchet derivative of the matrix exponential defined, for every E, by e^(X+E) - e^X = L(X,E) + o(E). More explicitly, we have <cit.> L(X,E) = ∫_0^1 e^X(1-s) E e^Xs ds = ∑_n=1^∞ 1/n! ∑_j=1^n X^j-1 E X^n-j, which also allows us to demonstrate that ∂Û_i^†/∂u_l,j = δ_i,j L(+iÂ_i^†Δt_i, +iV̂_lΔt_i) = δ_i,j L^†(-iÂ_iΔt_i, -iV̂_lΔt_i) = δ_i,j L_i,l^†. Combining the above equations, the gradient of the survived singlet probability, P̂_S(t_n) = Tr[P̂_S Û(t_n,0)ρ̂(0)Û^†(t_n,0)], for k ≤ n, is thus ∂P̂_S/∂u_l,k = -iTr(P̂_S Û_+ ⟦Ŵ_k,l, Û_-ρ̂_0Û_-^†⟧ Û_+^†) = -iTr(Û_+^†P̂_S Û_+ ⟦Ŵ_k,l, Û_-ρ̂_0Û_-^†⟧) = 2ℑ[Tr(Û_+^†P̂_S Û_+ Ŵ_k,l Û_-ρ̂_0Û_-^†)] = 2ℑ[Tr(P̂_S Û_+ Ŵ_k,l Û_-ρ̂_0Û^†)], where Ŵ_i,l = iÛ_i^-1 L_i,l, Û_+ = 𝒯∏_m=k^n Û_m and Û_- = 𝒯∏_m=1^k-1 Û_m. For k > n, the gradient obviously vanishes. In the original formulation of GRAPE, a first-order approximation of L_k,l was used <cit.>, which is obtained with Ŵ_i,l = Δt_i V̂_l. For our approach, we use the full Fréchet derivative. Finally, this allows us to approximate the singlet yield by Y_S = k_S∫_0^∞ P̂_S(t) dt ≈ k_S∑_n=0^N w_n P̂_S(t_n) with w_n denoting suitably chosen quadrature weights. The gradient of the singlet yield is then obtained as ∂Y_S/∂u_l,k ≈ k_S∑_n=k^N w_n ∂P̂_S(t_n)/∂u_l,k. § RESULTS The control of reaction yields via numerical approaches is expected to be computationally costly, which is why Masuzawa et al. <cit.> had dismissed GRAPE in favour of time-local optimisation in their earlier attempt at realising radical-pair reaction control. 
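As a sanity check of the propagator derivatives used above, the short sketch below compares the Fréchet-derivative expression ∂Û_i/∂u_l,i = L(-iÂ_iΔt_i, -iV̂_lΔt_i), evaluated with scipy.linalg.expm_frechet, against a central finite difference. The drift and control Hamiltonians are random Hermitian stand-ins rather than physical operators, and the step size, recombination operator and control amplitude are arbitrary assumed values.

import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(1)
dim, dt = 8, 0.01

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return 0.5 * (M + M.conj().T)

H0 = rand_herm(dim)                    # stand-in drift Hamiltonian
V = rand_herm(dim)                     # stand-in control Hamiltonian
K = 0.05 * np.eye(dim)                 # stand-in recombination operator
u = 0.3                                # control amplitude in this time step (assumed)

def step_propagator(u):
    A = H0 + u * V - 1j * K            # A_i = H_0 + u V - iK
    return expm(-1j * A * dt)

# Frechet derivative: dU/du = L(-i*A*dt, -i*V*dt)
A = H0 + u * V - 1j * K
U, dU = expm_frechet(-1j * A * dt, -1j * V * dt)

# central finite-difference check
eps = 1e-6
dU_fd = (step_propagator(u + eps) - step_propagator(u - eps)) / (2 * eps)
print("max |analytic - finite difference| =", np.abs(dU - dU_fd).max())

# W_{k,l} = i U_k^{-1} L_{k,l}, the quantity entering the trace formula for dP_S/du
W = 1j * np.linalg.solve(U, dU)

In an actual implementation the same call supplies both Û_i and L_i,l, from which Ŵ_i,l follows directly as shown.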
The computational demand is particularly large if, as we do here (and unlike <cit.>), the goal is to optimise yields, which are time-integral quantities (cf. eq. (<ref>)), unlike the typical fidelity measures in quantum control applications pertaining to the implementation of a state transfer or a unitary operation for a fixed time t. Our approach requires computing N single-time-step propagators alongside their Fréchet derivatives with respect to L control Hamiltonians, and scores of repeated matrix multiplications for evaluating the time-dependent singlet probabilities and associated gradients, which sum to the recombination yield and its gradients. In the Appendix we evaluate the number of required matrix multiplications (scaling with N^2), leading to the demonstrable insight that for the considered scenario and typical Ns and problem sizes the cost of gradient/expectation value evaluation will exceed that of evaluating the elementary propagators and their derivatives (scaling with N). To allow for efficiently optimising the reaction yield of radical pairs, we therefore suggest the adaptation of the gradient-based optimisation procedure as illustrated in Fig. <ref>. First, we argue that the optimisation can be realised in practice in terms of B disjoint blocks of N/B time-steps each (time blocking). Second, we suggest that a sparsely sampled, i.e. crude, approximation of the reaction yield based on N/(BS) samples per block, i.e. sampling every Sth timestep, is also sufficient to adequately optimise the reaction yield. Here, B and S are integers that divide N and N/B, respectively. While this approach does not formally yield the optimum of the yield as defined in eq. (<ref>), henceforth referred to as the complete optimum, we argue that, for suitably chosen B and S, yields optimised through sparsely sampled time blocking, realised at a fraction of the computational effort, are sufficiently close to the true optimum. Third, to efficiently calculate the exponential propagators and the associated Fréchet derivatives, we use a scaling and squaring approach in combination with a Trotter-Suzuki splitting of the elementary propagator, as detailed in the Appendix. Lastly, to accelerate the optimisation we utilise L-BFGS-B, a limited-memory variant of the Broyden–Fletcher–Goldfarb–Shanno algorithm, which preconditions gradients with curvature information in lieu of steepest descent <cit.>. §.§ Protocol applied to a prototypical spin system We consider the minimisation of the singlet recombination yield for a generalisation of the prototypical radical pair studied earlier in the work by Masuzawa et al. <cit.>, where coherent control of a fidelity measure, rather than the reaction yield, was achieved in the simpler high-field scenario using time-local optimisation. Specifically, we assume a radical pair comprising 7 spin-1/2 particles, coupled through isotropic hyperfine interactions (0.2, 0.5, 1 mT and 0.2 & 0.3 mT for the two radicals, respectively) and an exchange interaction (j_ex/(2π) = 1 MHz), undergoing spin-selective recombination in the singlet state with rate constant k_b = 1 μs^-1 or spin-independent escape with rate constant k_f = 1 μs^-1, such that k_S = k_b + k_f and k_T = k_f. Given the high computational demand of completely optimising the seven-spin system, we have additionally studied the five-spin system resulting from the seven-spin system by retaining the three largest hyperfine coupling constants. Unlike Masuzawa et al. 
<cit.>, we seek to directly minimise the singlet recombination yield Y_b = k_b/(k_b + k_f) Y_S through a piecewise constant control field applied perpendicular to a static biasing field, i.e. L = 1 and V̂ ≡ V̂_1 = ω_1 (Ŝ_1,x + Ŝ_2,x), where ω_1 is the maximal Rabi frequency of the control field; control amplitudes are subject to |u_1,n| ≡ |u_n| ≤ 1. We chose a discretisation time step of 1 ns and controlled the first 5 μs (N = 5000) after initialisation of the spin system in the singlet electronic state (thus accounting for more than 99.3% of radical decay; final yields were still evaluated in the t → ∞ limit, approximated by t_max = 14 μs). Fig. <ref> illustrates exemplary time-dependent singlet probabilities, controlled and unperturbed, associated with this scenario for the biasing field intensities B_0 = 0.5 mT and 10 mT. Efficient evaluation of single time-step propagators and their Fréchet derivatives was made possible via iterated Trotter-Suzuki splittings, combined with time blocking and sparse sampling of reaction yields, as introduced above. In view of Fig. <ref> and comparable datasets for other biasing fields provided in the Supplemental Material (cf. Fig. S3), it is apparent that excellent control results can be achieved for all N/B block sizes exceeding 500. B = 1, i.e. N/B = N, is not a strict necessity for practical control of the reaction outcome; substantially smaller blocks yield comparable optimisation results at a fraction of the cost. On the other hand, block sizes of 50 and 100 are unsatisfactory, indicating that a time-local optimisation is insufficient to yield optimal control results in the low to intermediate field region. For these inadequate block sizes, the optimisation result of the local BFGS optimisation algorithm is furthermore strongly dependent on the initial condition. Note that intermediate block sizes N/B are occasionally preferred over larger optimisation blocks, as the latter are more prone to getting stuck in local minima, such that the overall optimisation outcome of the former can be superior. This is, for example, seen in Fig. S3 for a biasing field of 1 mT, for which the best result of 6 replicated optimisations with B=2 and B=5 happened to surpass those with B=1. Note furthermore that the optimisation results for smaller biasing fields, e.g. 0.5 mT, are more sensitive to the sampling interval, S, than those for 10 mT. This is not surprising in view of the more varied singlet probability as a function of time observed for the lower field. This is encouraging insofar as more complex spin systems with diverse hyperfine interactions are expected to be less oscillatory. In any case, it appears that sampling intervals S of 25 and less are adequate here. For the seven-spin system we found it impractical to realise a complete set of optimisations for S=1 and B=1, i.e. complete, unblocked optimisation, as optimisation runs initiated from random configurations required more than a week to finish on the hardware utilised. However, a complete optimisation starting from the best blocked optimisation finished swiftly for the used tolerances without yielding a significant additional improvement. Furthermore, a single complete optimisation yielded results comparable to those of the good blocked and sparsely sampled optimisations. For the smaller 5-spin system, it is apparent that the sparsely sampled, blocked optimisations smoothly converge to the completely sampled optimum and that, again, B > 1 and S > 1 allow excellent practical optimisation outcomes relative to this (expensive) limit. 
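The time-blocking and sparse-sampling strategy used for these optimisations can be summarised by the schematic Python driver below. The function block_yield_and_grad is a hypothetical placeholder, not part of any library: it is assumed to return the sparsely sampled yield contribution of one block, its gradient with respect to that block's amplitudes, and the propagated density matrix, implemented along the lines of the gradient expressions in the Theory section. The bounds and optimiser tolerances mirror the values quoted in this work; everything else is a sketch under these assumptions, not the production code used for the reported results.

import numpy as np
from scipy.optimize import minimize

def block_yield_and_grad(u_block, rho_block, sample_stride):
    # Hypothetical helper: returns (sparsely sampled yield of this block,
    # gradient w.r.t. the block's control amplitudes, density matrix at block end).
    raise NotImplementedError       # problem-specific implementation

def optimise_blocked(u0, rho0, n_blocks, sample_stride):
    """Sequentially optimise B disjoint blocks of N/B amplitudes (time blocking)."""
    u = u0.copy()
    m = len(u) // n_blocks
    rho = rho0
    for b in range(n_blocks):
        sl = slice(b * m, (b + 1) * m)

        def objective(u_block, rho_block=rho):
            y, grad, _ = block_yield_and_grad(u_block, rho_block, sample_stride)
            return y, grad           # scipy accepts (value, gradient) with jac=True

        res = minimize(objective, u[sl], jac=True, method="L-BFGS-B",
                       bounds=[(-1.0, 1.0)] * m,
                       options={"ftol": 1e-6, "gtol": 1e-7, "maxiter": 200})
        u[sl] = res.x
        # propagate the state through the optimised block before moving to the next one
        _, _, rho = block_yield_and_grad(u[sl], rho, sample_stride)
    return u

Note that later blocks are optimised greedily, given the controls already fixed in earlier blocks; this is precisely the approximation whose adequacy is assessed above by comparison with the complete (B = 1, S = 1) optimisation.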
See Fig. <ref>b and the Supplemental Material for these and further optimisation results achieved for the 5 and 7-spin systems in weak magnetic fields, including the geomagnetic field (cf. Fig. S1 and Fig. S2). With the approach established, we set out to explore the prospects offered by controlling radical pair reaction yields in the weak to intermediate field case. Fig. <ref>a illustrates the results of minimising the singlet recombination yield as a function of the applied static magnetic field relative to the unperturbed scenario. These results were realised using 5 blocks of 1000 time steps of 1 ns and sampling the recombination yield from every 25th time point. The maximal control amplitude was 0.25 mT. The plotted whiskers indicate 75-percentiles from 7 optimisations starting from random initial controls. Evidently, the recombination yield can be markedly reduced for all applied fields, the lowest yield realisable being 0.0966, realised for a biasing field of 9 mT. This amounts to a reduction by more than a factor of 3.5 compared to the uncontrolled yield at the same field (0.346). Generally, it is remarkable that the yields controlled for minimal recombination are all markedly lower than the minimal recombination yield of 0.252 realisable by applying a static magnetic field of 0.2 mT, i.e. the field matching the dip in the singlet yield attributed to the low-field effect. We have further analysed the dependence of the optimisation outcome on the strength of the control field. Fig. <ref>b shows Y_b as a function of the maximal control amplitude for the case of a static biasing field of 10 mT and 50 μT, again realised by optimising 5 blocks of 1000 time steps, sampling every 25th. Appreciable minimisation for B_0 = 10 mT requires control amplitudes of 0.1 mT; controls with field strengths below ≈ 50 μT do not elicit appreciable effects. This finding is in line with the well-studied effects of radio-frequency magnetic fields on radical pairs. In this scenario too, a resonant perturbation can only appreciably impact the spin evolution during the radical pair lifetime, τ, if -γ B_1 τ/(2π) ⪆ 1, i.e. if the lifetime is sufficiently large to permit at least one Larmor precession in the field associated with the perturbation. As here k_f^-1 = 1 μs and k_b^-1 = 1 μs, we can expect fields B_1 exceeding 36 μT to 72 μT to be effective, which is in tentative agreement with the observed B_1-dependence. On the other hand, driving fields larger than 0.1 mT allow very significant reductions of the recombination yield. It is interesting to note that for controlling reaction outcomes in the geomagnetic field (B_0 = 50 μT) slightly larger control fields exceeding 0.12 mT are required. Above we have assumed that controls are applied with a time resolution of Δt = 1 ns. While AWGs are available with sufficient sample rates and analog bandwidths (e.g. 50 GS/s at 15 GHz bandwidth) and the delivery of broadband magnetic fields at the required intensity has been realised (e.g. for high-field ENDOR spectroscopy <cit.>), the question of the minimal bandwidth required to control radical pair reactions is a pertinent one. To this end, we have studied the dependence of the control result on the “speed limit” enforced on the control signal. Specifically, we stipulated that the controls must obey |u_n - u_n-1| < u̇_max Δt. For B_0 = 1 mT, Fig. <ref>c summarises the dependence of the singlet yield on this speed limit. 
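The quoted threshold range follows directly from the Larmor-precession criterion; a small worked check, using the free-electron gyromagnetic ratio and the two lifetime scales implied by k_f and k_b + k_f, is given below.

# Larmor-precession criterion for a control field to act within the radical pair
# lifetime: |gamma_e| * B1 * tau / (2*pi) >~ 1  =>  B1 >~ 1 / ((|gamma_e|/2*pi) * tau)
gamma_e_over_2pi = 28.025e9            # Hz/T for the free electron
for tau in (1e-6, 0.5e-6):             # lifetimes set by k_f^-1 = 1 us and (k_b + k_f)^-1 = 0.5 us
    B1_min = 1.0 / (gamma_e_over_2pi * tau)
    print(f"tau = {tau * 1e6:.1f} us  ->  B1 >~ {B1_min * 1e6:.0f} uT")
# yields roughly 36 uT and 71 uT, consistent with the 36-72 uT range quoted in the text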
We find that transition times of u̇_max^-1 ⪅ 8 ns are required to elicit significant control; faster controls only deliver marginal additional gains. The minimal speed limit corresponds to a bandwidth of approximately u̇_max/2 = 63 MHz, which is of the order of, but smaller than, the magnitude of the spread of energy eigenvalues of the radical pair (93 MHz). §.§ Protocol applied to a realistic exciplex-forming radical pair We now present results for coherently controlling the symmetric recombination of a large spin system consisting of 18 spins. We consider the pyrene/para-dicyanobenzene (PY/DCB) exciplex-forming donor-acceptor system in a solvent of moderate polarity <cit.>. The magnetosensitive, delayed fluorescence emission of this system has been widely studied. To make this large-scale simulation tractable, we limit ourselves to the scenario of homogeneous decay, i.e. k_S = k_T = k, and neglect inter-radical interactions. While this may appear crude at first glance, models of this type have seen considerable success in the interpretation of experimental data <cit.>. The approximations are adequate since, for the majority of its lifetime, the radical pair is in fact diffusively well separated, and the moderate reaction asymmetry of the real system (resulting from different electron transfer rate constants in the singlet and triplet states; the latter proceeding to the locally excited triplet state) has only a small effect on the magnetic field sensitivity. Thus, in this limit, the singlet probability can be expressed as P̂_S(t) = e^-kt(1/4 + ∑_α,β∈{x,y,z} R^(1)_αβ(t) R^(2)_αβ(t)), where R^(i)_αβ(t) = (1/Z_i) Tr[Ŝ_iα(0)Ŝ_iβ(t)] = (1/Z_i) Tr[Ŝ_iα Û^†(t,0) Ŝ_iβ Û(t,0)] are the electron spin correlation tensors of the two radicals, which are defined and calculable in the Hilbert spaces of the individual radicals. Here, Û accounts for the coherent evolution only, i.e. Â(t) = Ĥ(t) (cf. eq. (<ref>)). Thus, control of the radical pair reaction yield can be realised by separately calculating the gradients for each of the spin correlation tensors of the two recombining radicals, and assembling the gradient of the reaction yield using the chain rule. Again, we follow the ideas of blocking the time axis and sparse sampling to increase the efficiency. In addition to the Trotter-Suzuki splitting discussed above, we use the fact that the Hamiltonian is only a function of the total nuclear spin associated with groups of completely equivalent spins, which permits an efficient calculation in the “coupled representation” <cit.>. Fig. <ref> shows the minimisation of the PY/DCB singlet recombination yield for k^-1 = 200 ns, a control amplitude of 0.2 mT, and a static biasing field of 10 mT. The control was extended to the first 1024 time steps of 1 ns each. Varying block and sample sizes again confirms practically sufficient fidelity from non-complete optimisation, as is demonstrated in Fig. <ref>b (also see Fig. S4). § DISCUSSION For the electronic and nuclear spins of organic radical pairs, coherence lifetimes on the order of 100 ns to microseconds are typical, even in complex biological environments <cit.>, opening a window of opportunity to realise control by applying time-dependent magnetic fields to manipulate reaction outcomes. Yet, the approach is still in its infancy. Experimental approaches have been limited to perturbing radical pair spin systems by radio-frequency magnetic fields <cit.>. 
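For the symmetric-recombination limit used here, the factorisation of P̂_S(t) into single-radical spin correlation tensors can be reproduced with a few lines of Python. The sketch below uses one spin-1/2 nucleus per radical with assumed, purely illustrative coupling constants (not the PY/DCB hyperfine values), simply to show how R^(i)_αβ(t) is evaluated in the Hilbert space of each radical and recombined into the singlet probability and yield.

import numpy as np
from scipy.linalg import expm

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2, dtype=complex)

def radical(omega, hfcs):
    """Electron + one spin-1/2 nucleus per listed hyperfine coupling (angular frequencies, rad/us)."""
    n = 1 + len(hfcs)
    def emb(op, pos):
        out = np.array([[1.0 + 0j]])
        for k in range(n):
            out = np.kron(out, op if k == pos else e2)
        return out
    S = [emb(s, 0) for s in (sx, sy, sz)]
    H = omega * S[2]
    for j, a in enumerate(hfcs, start=1):
        H += a * sum(emb(s, 0) @ emb(s, j) for s in (sx, sy, sz))
    return H, S, 2 ** len(hfcs)

def corr_tensor(H, S, Z, t):
    """R_{ab}(t) = Tr[S_a U^dag(t) S_b U(t)] / Z; real by construction for Hermitian S_a, S_b."""
    U = expm(-1j * H * t)
    return np.array([[np.trace(Sa @ U.conj().T @ Sb @ U).real / Z for Sb in S] for Sa in S])

# assumed illustrative parameters (rad/us): 1 mT Zeeman frequency, arbitrary hyperfine couplings
omega = 2 * np.pi * 28.0
H1, S1, Z1 = radical(omega, [2 * np.pi * 14.0])
H2, S2, Z2 = radical(omega, [2 * np.pi * 7.0])

k = 1.0 / 0.2                                    # homogeneous decay rate, (200 ns)^-1 in us^-1
times = np.linspace(0.0, 2.0, 400)
ps = [np.exp(-k * t) * (0.25 + np.sum(corr_tensor(H1, S1, Z1, t) * corr_tensor(H2, S2, Z2, t)))
      for t in times]
Y_S = k * np.sum(ps) * (times[1] - times[0])     # singlet yield for symmetric recombination (k_S = k_T = k)
print(f"singlet yield = {Y_S:.3f}")

Because each radical is treated separately, the cost scales with the dimensions of the individual radical Hilbert spaces rather than with their product, which is what renders the 18-spin PY/DCB calculation tractable.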
While positioned far from optimal control, these experiments are crucial as they demonstrate the possibility, in principle, of influencing radical pair dynamics by RF magnetic fields in the weak-field regime. Examples of RF magnetic field effects are furthermore well-established for biological radical pairs in vivo <cit.>, such as those supposedly underpinning the avian compass sense. The latter are particularly noteworthy, because surprisingly weak RF magnetic fields have been found to interrupt magnetoreception, suggesting an exquisitely sensitive underlying system of remarkable coherence time. Optimal quantum control protocols could be used to drive chemical and spin-biological systems in a controlled and selective way, whereby reaction outcomes not attainable by static magnetic field perturbations alone become accessible. For quantum information processing in near-term quantum hardware, among the many hurdles faced in counteracting environment-induced decoherence is the need to realise precise timing of quantum gate operations <cit.> on nanosecond timescales. To evade this bottleneck, precise or adaptive <cit.> quantum control protocols may be harnessed to change the frequency of an applied control field dynamically throughout its pulse duration. But this is hard to realise in practice, since formulating the appropriate optimal quantum control protocols is often highly non-trivial. In nature, radical pairs seem to evade the effects of decoherence to retain exceptionally long coherence times, resulting in exquisite and unparalleled magnetometric sensitivity even in extremely noisy biological settings. Thus, optimal quantum control protocols, formulated with the primary aim of reproducing this sensitivity in spin-chemical reactions in solution, could also inform methods for similarly realising artificial or biological molecular <cit.> qubits highly sensitive to applied magnetic fields. This could translate to further applications <cit.> in both quantum metrology and quantum information processing. But to realise this goal, and future applications of optimal quantum control in a potentially biological context, efficient approaches for control design are vital. The control of radical pair reactions is distinct from the majority of OQC approaches insofar as the optimisation target is not defined at a particular instant of time, but is encoded in the time evolution from radical-pair creation to its eventual decay, t ∈ [0,∞), i.e. in the reaction yield. Here, we have generalised the GRAPE paradigm of using piecewise constant control amplitudes to realise reaction yield control. The suggested algorithm aims to overcome limitations of a previous approach by targeting actual reaction yields (rather than continuously updating the control to optimise a fidelity function involving observables that commute with the Hamiltonian, or the singlet probability at one instant in time) and by permitting control in weak magnetic fields. A time-global update scheme of control parameters (as opposed to the time-local approach from <cit.>), central to GRAPE, has here been found essential to realise the latter efficiently. The computational cost of optimisation of radical pair reaction yields by GRAPE has been viewed as prohibitive <cit.>. 
Significant costs are associated with the evaluation of numerous single time-step propagators and their Fréchet derivatives, and, for the optimisation of reaction yields in particular, with the calculation of the yield and its gradient through iterated matrix multiplications involving the former. Here, we have introduced a block-optimisation scheme and sparse sampling of the reaction yield to improve the efficiency of this step while retaining adequate control fidelity. To facilitate the efficient evaluation of propagators and their derivatives we have utilised an iterated Trotter-Suzuki splitting. We also exploit curvature information of the loss function in lieu of standard steepest descent/ascent approaches to reduce the number of optimisation steps required, as previously suggested <cit.>. These additional modifications we incorporate into the standard approach are vital for addressing the high computational costs, which had impeded earlier attempts to realise GRAPE-based reaction control of radical pair dynamics. We note that while sparse sampling of the reaction yield provides a useful, low-cost proxy of the actual reaction yield, we found that sampling the singlet probability at a single moment in time often misleads the optimisation in weak magnetic fields, insofar as an increased/decreased probability at the chosen time does not necessarily correlate with an increased/decreased reaction yield. On the other hand, as we have shown, sparse sampling at a moderate sampling density reduces the computational cost while yielding close-to-optimal optimisation outcomes and a smooth progression for B > 1. We have further demonstrated the applicability of the approach as implemented here to realistically complex radical pair systems, such as the widely studied PY/DCB exciplex system subject to symmetric recombination. The suggested approach can serve as a blueprint for future developments and optimisations. Although not explored here, some algorithmic modifications could be tried, such as bottom-up strategies for which the sampling density is increased as the optimisation progresses. In addition, optimisation blocks could be overlapped. While these refinements appeared unnecessary here, insofar as the suggested approach yielded control parameters that did not change significantly upon complete optimisation (subject to the utilised termination criteria of the optimiser), they could further speed up the rate at which this state is reached. Additional computational efficiency could furthermore be realised by reformulating the singlet yield evaluation in terms of the propagation of wave-packets, in particular in combination with using an incomplete basis for the nuclear spin functions and trace sampling <cit.>. We are currently developing such approaches with the aim of controlling the open system dynamics <cit.> of radical pairs. As for the calculation of propagators and derivatives, higher-order operator splitting approaches could be used, and the order, i.e. accuracy, of the splitting could be increased as the target function converges, as has been successfully demonstrated in <cit.> for a state transfer problem. Here, we have again refrained from these additional optimisations, as the generation of propagators and derivatives was not identified as the most time-consuming step for the considered problem sizes; the bottleneck is instead the evaluation of reaction yields and gradients. 
From the reaction-control point of view, it is noteworthy that control has allowed us to suppress the singlet recombination to levels below those achievable by application of static magnetic fields of any strength. The achievable recombination yields are furthermore well below the recombination yield of 25% expected to ensue for complete randomisation of the spin states. This shows that the control fields here do not merely enhance spin relaxation, as one might expect, but drive the system to optimal reaction outcomes. We anticipate that optimal control of radical pair reactions will find applications in the context of biological radical pair reactions, either as an analytical tool or to drive the systems towards desired outcomes. For example, for biological radical pair reactions, such as those implicated in magnetoreception, a key problem to be addressed concerns distinguishing between different incarnations of radical reactions to better identify the underlying mechanism in a complex biological environment <cit.>. In this scenario, quantum control could allow us to derive control parameters that suppress magnetic field effects in one mechanism while leaving those of alternative, competing models unaltered. Ultimately, control could allow us to turn MFEs into a tool to potentially control reaction processes of physiological relevance for applications in biology and medicine, by facilitating selective stimulation or suppression of cellular functions linked to radical pair processes. We hope that this work, by building on established techniques from quantum control like GRAPE to go beyond existing local optimisation methods, will lay the groundwork for realising this. § CONCLUSIONS We have suggested and implemented a GRAPE-inspired approach to control reaction yields of radical pair recombination reactions through synchronised application of radio-frequency magnetic fields in a direction perpendicular to a static biasing field. We deviate from previous approaches by optimising reaction yields, i.e. the actual experimental observables, instead of fidelity measures defined at particular time instants. Computational efficiency has been realised by block optimisation, sparse sampling, a split-operator approach for calculating propagators, and by utilising curvature information in the non-linear optimisation problem. The approach thus realised is applicable to radical pair systems of realistic complexity. We have demonstrated that control permits the reduction of the singlet recombination yield of a singlet-born model system to 10%, a yield unattainable by static magnetic field application alone, when using moderate control amplitudes and bandwidth (cf. Fig. <ref>). We hope that our approach helps accelerate the adoption of control approaches for radical pair recombination reactions in low external fields. We anticipate that these future applications will utilise control beyond mere reaction control, for example to realise distinguishability of radical pairs in complex environments and to facilitate model selection for competing hypotheses based on experiments with optimised contrast power. § APPENDIX §.§ Sparse Blocking The control of reaction yields via numerical approaches like GRAPE is expected to be costly. The computational demand is particularly large if, as we do here (and unlike <cit.>), the goal is to optimise yields of radical pair reactions, which are time-integral quantities (cf. eq. 
(<ref>)), unlike the typical fidelity measures in quantum control applications pertaining to the implementation of a state transfer or a unitary operation for a fixed time t. The approach requires repeated matrix multiplications for evaluating the time-dependent singlet probabilities that sum to the recombination yield (and its associated gradients). Formally, eq. (<ref>) implies ∑_n=1^N{[(n-1)+3] + nL[(n-1)+3]} = O(N^3) matrix multiplications (i.e. for every t_n, (n-1) matrix multiplications are required to evaluate Û, (n-1) to evaluate Û_+Ŵ_k,lÛ_- (via L_k,l) for every l and k ≤ n, and an additional three each to evaluate the expectation value/gradients). In practical implementations, the estimate in eq. (<ref>) can be improved upon by preserving terms from the previous step, n-1, allowing the update of Û and Û_+Ŵ_k,lÛ_- with a single matrix multiplication (instead of n-1), thus requiring ∑_n=1^N 4(1+nL) = 2LN^2 + 2(L+2)N matrix multiplications. Nevertheless, due to the quadratic scaling of the evaluation of eq. (<ref>) with N, the gradient/expectation value evaluation will often exceed the cost of evaluating the elementary propagators and their derivatives (scaling as N) for large N and thus be the limiting factor. To efficiently optimise the reaction yield of complex radical pairs, we suggest the following adaptation of the gradient-based optimisation procedure. We postulate that the optimisation can be realised in terms of B disjoint blocks of N/B time-steps each (time blocking), with the initial state taken as that of the system subject to the preceding optimised controls, sparsely sampled using N/(BS) samples per block, as illustrated in Fig. <ref>. Here, B and S are integers that divide N and N/B, respectively. The number of matrix multiplications in this approach is reduced to B∑_n=1^N/B(1+nL)[1 + 3δ_n mod S,0] = B[∑_n=1^N/B(1+nL) + 3∑_m=1^N/(BS)(1+LSm)] = N^2L(S+3)/(2BS) + N(2L+1+3/S). The number of matrix multiplications necessary in general still scales quadratically with N, i.e. as O(N^2/B). Linear scaling is only obtained in the limit that B=N, i.e. time-local optimisation. But in practice, the block size can be chosen much smaller than N, i.e. N/B ≪ N, to realise a substantial speedup, whilst delivering adequate practical optimisation of the reaction yield. Furthermore, for S ≫ 3, the complexity of the algorithm approaches independence of S. §.§ Trotter-Suzuki Splitting To efficiently calculate the propagators and the associated Fréchet derivatives, we use a scaling and squaring approach in combination with a Trotter-Suzuki splitting of the elementary propagator. Specifically, we compute the matrix exponential e^X = (e^2^-sX)^2^s by s ∈ ℕ repeated squarings, starting from e^2^-sX calculated as e^2^-sX = e^-i2^-s-1Δt Â' e^-i2^-sΔt Â'' e^-i2^-s-1Δt Â' + O(Δt^2/2^2s), with X = -iÂ'Δt - iÂ''Δt being a convenient splitting of X = -iΔt Â_i (cf. eq. (<ref>)). Practically, Â' and Â'' represent the drift and control Hamiltonians, respectively. If the latter is based on coupling applied magnetic fields via the Zeeman interaction, the propagator under Â'' can be expressed as a direct product of evolution operators for the individual electron spins based on the expression e^-iω n̂·Ŝ = cos(ω/2) 1 - 2i sin(ω/2) n̂·Ŝ, where ω ∈ ℝ is a parameter related to the strength of the control field and n̂ is a unit vector specifying the orientation of the perturbation in the laboratory frame. 
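The closed-form single-spin rotation propagator quoted above, and the analytic Fréchet derivative given below, can be verified numerically in a few lines; the direction vector and rotation angle in this sketch are arbitrary assumed values.

import numpy as np
from scipy.linalg import expm, expm_frechet

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([0.3, -0.5, 0.8])
n = n / np.linalg.norm(n)                      # unit vector along the control field (assumed)
nS = n[0] * sx + n[1] * sy + n[2] * sz
omega = 1.234                                  # rotation angle (assumed example value)

# closed-form single-spin propagator: exp(-i w n.S) = cos(w/2) 1 - 2i sin(w/2) n.S
U_closed = np.cos(omega / 2) * np.eye(2) - 2j * np.sin(omega / 2) * nS
print(np.abs(U_closed - expm(-1j * omega * nS)).max())     # agrees to machine precision

# analytic Frechet derivative L(-i w n.S, -i n.S) = dU/dw
L_closed = -1j * np.cos(omega / 2) * nS - 0.5 * np.sin(omega / 2) * np.eye(2)
_, L_num = expm_frechet(-1j * omega * nS, -1j * nS)
print(np.abs(L_closed - L_num).max())                       # agrees to machine precision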
Likewise, the Fréchet derivative of the exponential e^X can be evaluated using an s-fold recurrence, based on <cit.>, L(X,E) = e^X/2 L(X/2, E/2) + L(X/2, E/2) e^X/2, obtained by differentiating e^X = (e^X/2)^2 using the product and chain rules. In combination with eq. (<ref>), the only derivatives that are explicitly required in this process are those of e^-i2^-sΔt Â''. For Zeeman coupling with a single-spin propagator as given by eq. (<ref>), the necessary Fréchet derivatives can be obtained in analytic form, in combination with the chain rule, from L(-iω n̂·Ŝ, -i n̂·Ŝ) = -i cos(ω/2) n̂·Ŝ - (1/2) sin(ω/2) 1. We optimised using the limited-memory, bound-constrained variant of the Broyden–Fletcher–Goldfarb–Shanno algorithm (L-BFGS-B), which preconditions gradients using curvature information, and used the implementation available in SciPy <cit.> with stopping criteria of 10^-6 and 10^-7 for the change of the sparsely sampled yield and the projected gradient, respectively. The maximum number of iterations was set to 200. We further ran simulations in parallel for varying numbers of blocks B and sampling intervals S, as seen in Figs. <ref>(a) and <ref>(b), to assess the performance and scalability of our approach. This work was supported by the Office of Naval Research (ONR Award Number N62909-21-1-2018). We acknowledge use of the University of Exeter's HPC facility. cont22 Koch et al. Quantum Optimal Control in Quantum Technologies. Strategic report on current status, visions and goals for research in Europe. EPJ Quantum Technol., 9(1):19, 2022. kosloff86 David J. Tannor, Ronnie Kosloff, and Stuart A. Rice. Coherent pulse sequence induced control of selectivity of reactions: Exact quantum mechanical calculations. J. Chem. Phys., 85(10):5805–5820, 1986. chemOC David J. Tannor and Stuart A. Rice. Control of selectivity of chemical reaction via control of wave packet evolution. J. Chem. Phys., 83(10):5013–5018, 1985. fmQC Tobias Brixner and Gustav Gerber. Quantum control of gas-phase and liquid-phase femtochemistry. Chem. Phys. Chem., 4(5):418–438, 2003. jonSQC Jonathan P. Dowling and Gerard J. Milburn. Quantum Technology: The Second Quantum Revolution. Philos. Trans. R. Soc. A, 361(1809):1655–1674, 2003. pophof23 Blanka Pophof, Bernd Henschenmacher, Daniel R. Kattnig, Jens Kuhne, Alain Vian, and Gunde Ziegelberger. Biological effects of electric, magnetic, and electromagnetic fields from 0 to 100 MHz on fauna and flora: Workshop report. Health Phys., 124:39–52, 2023. pophof23b Blanka Pophof, Bernd Henschenmacher, Daniel R. Kattnig, Jens Kuhne, Alain Vian, and Gunde Ziegelberger. Biological effects of radiofrequency electromagnetic fields above 100 MHz on fauna and flora: Workshop report. Health Phys., 124:31–38, 2023. wasi83 Michael R. Wasielewski, Christian H. Bock, Michael K. Bowman, and James R. Norris. Controlling the duration of photosynthetic charge separation with microwave radiation. Nature, 303(5917):520–522, 1983. rydmrA Kevin B. Henbest, Philipp Kukura, Christopher T. Rodgers, P. J. Hore, and Christiane R. Timmel. Radio frequency magnetic field effects on a radical recombination reaction: A diagnostic test for the radical pair mechanism. J. Am. Chem. Soc., 126(26):8102–8103, 2004. rydmrB Christopher T. Rodgers, C. J. Wedge, Stuart A. Norman, Philipp Kukura, Karen Nelson, Neville Baker, Kiminori Maeda, Kevin B. Henbest, P. J. Hore, and C. R. Timmel. Radiofrequency polarization effects in zero-field electron paramagnetic resonance. Phys. Chem. Chem. Phys., 11:6569–6572, 2009. rydmrC C. J. Wedge, Christopher T. 
Rodgers, Stuart A. Norman, Neville Baker, Kiminori Maeda, Kevin B. Henbest, C. R. Timmel, and P. J. Hore. Radiofrequency polarization effects in low-field electron paramagnetic resonance. Phys. Chem. Chem. Phys., 11(31):6573–6579, 2009. exci G. Grampp, M. Justinek, and S. Landgraf. Magnetic field effects on the pyrene—dicyanobenzene system: determination of electron self-exchange rates by mary spectroscopy. Mol. Phys., 100(8):1063–1070, 2002. mani22 Tomoyasu Mani. Molecular qubits based on photogenerated spin-correlated radical pairs for quantum sensing. Chem. Phys. Rev., 3(2):021301, 2022. kt21 Daniel R. Kattnig. F-cluster: Reaction-induced spin correlation in multi-radical systems. J. Chem. Phys., 154(20):204105, 2021. hore16 P. J. Hore and Henrik Mouritsen. The radical-pair mechanism of magnetoreception. Annu. Rev. Biophys., 45:299–344, 2016. ritz09 Thorsten Ritz, Roswitha Wiltschko, P. J. Hore, Christopher T. Rodgers, Katrin Stapput, Peter Thalau, Christiane R. Timmel, and Wolfgang Wiltschko. Magnetic compass of birds is based on a molecule with optimal directional sensitivity. Biophys. J., 96:3451–7, 2009. engels14 Svenja Engels, Nils-Lasse Schneider, Nele Lefeldt, Christine Maira Hein, Manuela Zapka, Andreas Michalik, Dana Elbers, Achim Kittel, P. J. Hore, and Henrik Mouritsen. Anthropogenic electromagnetic noise disrupts magnetic compass orientation in a migratory bird. Nature, 509:353–6, 2014. mae K. Masuzawa, M. Sato, M. Sugawara, and Kiminori Maeda. Quantum Control of Radical Pair Reactions by Local Optimization Theory. J. Chem. Phys., 152:014301, 2020. mae2 A. Tateno, K. Masuzawa, H. Nagashima, and Kiminori Maeda. Anisotropic and Coherent Control of Radical Pairs by Optimized RF Fields. Int J Mol Sci., 24(11):9700, 2023. sugi03 M. Sugawara. General formulation of locally designed coherent control theory for quantum system. J. Chem. Phys., 118(15):6784–6800, 2003. qLbfgs P. de Fouquieres, S.G. Schirmer, S.J. Glaser, and Ilya Kuprov. Second order gradient ascent pulse engineering. J. Magn. Reson., 212(2):412–417, 2011. gauger11 Erik M. Gauger, Elisabeth Rieper, John J. L. Morton, Simon C. Benjamin, and Vlatko Vedral. Sustained quantum coherence and entanglement in the avian compass. Phys. Rev. Lett., 106:040503, 2011. machnes11 S. Machnes, U. Sander, S. J. Glaser, P. de Fouquières, A. Gruslys, S. Schirmer, and T. Schulte-Herbrüggen. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework. Phys. Rev. A, 84:022305, 2011. oqc05 Navin Khaneja, Timo Reiss, Cindie Kehlet, Thomas Schulte-Herbrüggen, and Steffen J Glaser. Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms. J. Magn. Reson., 172(2):296–305, 2005. wilcox67 R. M. Wilcox. Exponential Operators and Parameter Differentiation in Quantum Physics. J. Math. Phys., 8:962–982, 1967. endor M. M. Hertel, V. P. Denysenkov, M. Bennati, and T. F. Prisner. Pulsed 180-ghz epr/endor/peldor spectroscopy. Magn. Reson. Chem., 43(S1):S248–S255, 2005. kt11 Daniel R. Kattnig, Arnulf Rosspeintner, and Günter Grampp. Magnetic field effects on exciplex-forming systems: the effect on the locally excited fluorophore and its dependence on free energy. Phys. Chem. Chem. Phys., 13:3446–3460, 2011. hoang14 Hao Minh Hoang, Thi Bich Van Pham, Günter Grampp, and Daniel R. Kattnig. Exciplexes versus loose ion pairs: How does the driving force impact the initial product ratio of photoinduced charge separation reactions? J. Phys. Chem. Lett., 5(18):3188–3194, 2014. 
hayashi04 Hisaharu Hayashi. Introduction to Dynamic Spin Chemistry: Magnetic Field Effects on Chemical and Biochemical Reactions. Vol. 8 in World Scientific Lecture and Course Notes in Chemistry. World Scientific Publishing Company, 2004. atkins19 Chadsley Atkins, Kieran Bajpai, Jeremy Rumball, and Daniel R. Kattnig. On the optimal relative orientation of radicals in the cryptochrome magnetic compass. J. Chem. Phys., 151(6):065103, 2019. kt16 Daniel R. Kattnig, Ilia A. Solov'yov, and P. J. Hore. Electron spin relaxation in cryptochrome-based magnetoreception. Phys. Chem. Chem. Phys., 18:12443–12456, 2016. qGate23 Haochuan Mao, Gediminas J. Pažėra, Ryan M. Young, Matthew D. Krzyaniak, and Michael R. Wasielewski. Quantum gate operations on a spectrally addressable photogenerated molecular electron spin-qubit pair. J. Am. Chem. Soc., 145(11):6585–6593, 2023. impTC Jake Xuereb, Paul Erker, Florian Meier, Mark T. Mitchison, and Marcus Huber. The impact of imperfect timekeeping on quantum control. arXiv, 2023. adQC T. Brixner and G. Gerber. Adaptive quantum control of femtochemistry. Phys. Scr., 2004(T110):101, 2004. cont22b David L. Goodwin, Pranav Singh, and Mohammadali Foroozandeh. Adaptive optimal control of entangled qubits. Sci. Adv., 8(49):eabq4244, 2022. qiQC Wasielewski et al. Exploiting chemistry and molecular systems for quantum information science. Nat. Rev. Chem., 4(9):490–504, 2020. brieg10 Jianming Cai, Gian Giacomo Guerreschi, and Hans J. Briegel. Quantum control and entanglement in a chemical compass. Phys. Rev. Lett., 104:220502, 2010. bio-comp22a Jia-Yi Wu, Xin-Yuan Hu, Hai-Yuan Zhu, Ru-Qiong Deng, and Qing Ai. A bionic compass based on multiradicals. J. Phys. Chem. B, 126(49):10327–10334, 2022. bio-comp22b Ziyuan Li, Huapeng Yu, Tongsheng Shen, Ye Li, and Wenjun Zhang. Bionic magnetic compass algorithm based on radical pair theory. IEEE Sens., 22(24):23812–23820, 2022. fay21 Thomas P. Fay, Lachlan P. Lindoy, and David E. Manolopoulos. Spin relaxation in radical pairs from the stochastic Schrödinger equation. J. Chem. Phys., 154:084121, 2021. kosloff22 Shimshon Kallush, Roie Dann, and Ronnie Kosloff. Controlling the uncontrollable: Quantum control of open-system dynamics. Sci. Adv., 8(44):eadd0828, 2022. kt19 H. G. Hiscock, T. W. Hiscock, D. R. Kattnig, T. Scrivener, A. M. Lewis, D. E. Manolopoulos, and P. J. Hore. Navigating at night: fundamental limits on the sensitivity of radical pair magnetoreception under dim light. Q. Rev. Biophys., 52:e9, 2019. qcRp21 Amit Finkler and Durga Dasari. Quantum sensing and control of spin-state dynamics in the radical-pair mechanism. Phys. Rev. Applied, 15:034066, 2021. nt21 Xu et al. Magnetic sensitivity of cryptochrome 4 from a migratory songbird. Nature, 594(7864):535–540, 2021. expFr Awad H. Al-Mohy and Nicholas J. Higham. Computing the Fréchet derivative of the matrix exponential, with an application to condition number estimation. SIAM J. Matrix Anal. Appl., 30(4):1639–1657, 2009. scipy Virtanen et al. and the SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods, 17:261–272, 2020. Supplemental Material Quantum Control of Radical Pair Dynamics beyond Time-local Optimisation Farhan T. Chowdhury, Matthew C. Denton, Daniel C. Bonser, and Daniel R. 
Kattnig^* Department of Physics and Living Systems Institute, University of Exeter, Stocker Road, Exeter, Devon, EX4 4QD, United Kingdom E-mail: [email protected] We provide further results for the minimisation of the recombination yield of the 5-spin system in the low-magnetic-field regime, including the geomagnetic field. This is followed by additional results for the 5-spin system controlled under 100 μT, with and without a speed limit imposed. To further support the case for global optimisation over time-local methods, which roughly correspond to what is achievable with GRAPE for insufficient block sizes, we present plots for 5-spin systems showing singlet recombination yields as a function of S for yield minimisations using B blocks of N/B steps each. Finally, an additional figure shows the frequency profile and recombination yields realised using l-BFGS-complemented GRAPE for increasing sample intervals for the 18-spin PY/DCB system driven by an RF control field of 0.1 mT.
http://arxiv.org/abs/2306.01832v1
20230602180001
Exploring Proxies for the Supermassive Black Hole Mass Function: Implications for Pulsar Timing Arrays
[ "Joseph Simon" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.HE" ]
http://arxiv.org/abs/2306.05904v1
20230609135837
Lattice calculation of the $D_{s}$ meson radiative form factors over the full kinematical range
[ "R. Frezzotti", "G. Gagliardi", "V. Lubicz", "G. Martinelli", "F. Mazzetti", "C. T. Sachrajda", "F. Sanfilippo", "S. Simula", "N. Tantalo" ]
hep-lat
[ "hep-lat", "hep-ph" ]
http://arxiv.org/abs/2306.05452v1
20230608180000
The Multi-temperature Blackbody of Kilonovae
[ "Albert Sneppen" ]
astro-ph.HE
[ "astro-ph.HE" ]
Albert Sneppen [email protected] 0000-0002-5460-6126]Albert Sneppen Cosmic Dawn Center (DAWN) Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, København Ø 2100, Denmark The early spectra of the kilonova AT2017gfo have a remarkably smooth blackbody continuum, which reveals information on the thermal properties and radioactive heating within the ejecta. However, the widespread use of a single-temperature blackbody to fit kilonova data is theoretically invalid, because 1) the significant travel-time delays for a rapidly cooling surface result in a broad distribution of temperatures and 2) the relativistic Doppler correction varies across different surface elements. Thus, the observed spectrum will be a modified blackbody with a range of temperatures over the surface. In this paper we quantify the impact of these effects and illustrate the typical wavelength-dependent spectral corrections. We apply the multi-temperature blackbody framework to the first epoch X-shooter AT2017gfo spectrum, to deconvolve the underlying physical temperature at the photosphere from the relativistic Doppler shift. We show that cooling and Doppler effect individually results in a variation of temperatures over the photosphere of up to 30%, but in combination these effects nearly cancel and produce the single-temperature blackbody observed. Furthermore, we show that for more rapidly expanding ejecta, v ≈ 0.4-0.5c, one may be able to constrain the cooling rate and the expansion velocity purely from the spectral shape. § INTRODUCTION Early kilonova spectra are often modelled with a single-temperature blackbody continuum as observationally validated in the early spectra taken of the well-studied kilonova AT2017gfo <cit.>. However, even an emitted single-temperature blackbody will not be observed with a single temperature due to the mildly relativistic velocities, the significant time-delays, and the rapid cooling rate of kilonovae <cit.>. This is caused by three competing and complementary effects. Firstly, the light travel-time across the ejecta is several hours, in which time the ejected material itself cools rapidly. Therefore, the nearest front of the ejecta (which is observed at the latest time of the expansion) is cooler than more distant portions of the ejecta. Secondly, the different parts of the ejected material is Doppler boosted by different amounts depending on their varying projected velocity along the line-of-sight. Thirdly, the thermalisation depth is wavelength-dependent, which suggests photons at different wavelengths originate from different radial depths and potentially temperatures. While the wavelength-dependent opacity is well-known and discussed <cit.>, the spectral and statistical significance of the latitude-dependent effects have yet to be quantified. Therefore, in Sec. <ref> we derive the multi-temperature blackbody and illustrate the spectral significance for typical kilonova velocities, time-scales and cooling-rates. In Sec. <ref>, we review and summarize the prior constraints for cooling rates and characteristic velocities for AT2017gfo. In Sec. <ref>, we explore the constraints attainable by modelling and fitting this multi-temperature blackbody for the 1st epoch X-shooter spectrum of AT2017gfo. Lastly, we discuss the applicability and significance of the multi-temperature frameworks for future kilonovae observations and the implications for wavelength-dependent thermalisation depths. 
§ DERIVATION OF THE MODIFIED BLACKBODY §.§ Relativistic Doppler Corrections The effective temperature, T_obs (in the observer's frame of reference), is boosted from the temperature in the emitted frame, T_emitted, by the relativistic Doppler correction, δ(μ): T_obs(μ) = δ(μ) T_emitted(μ) = 1/(γ(1-βμ)) T_emitted(μ). Here, γ is the Lorentz factor, β is the velocity of the expanding photosphere, v_bb, in units of the speed of light, and μ = cos(θ) with θ being the angle between the direction of expansion and the line-of-sight. Naturally, the Doppler correction varies across the different surface-elements. §.§ Temperature-cooling and light-travel time effects The temporal evolution of temperature in a kilonova atmosphere is theoretically set by the competition between adiabatic cooling, radiative cooling and heating from decaying elements. The decays from a large statistical ensemble of isotopes are predicted to provide a heating rate well-approximated by a single powerlaw <cit.>. Indeed, the observed light-curves of AT2017gfo (from 1 to 6 days post-merger) show that the bolometric luminosity is well-described by such a powerlaw-decay, L ∝ t^-1 <cit.>, which has been interpreted as observational evidence of heating <cit.>. Thus, given the blackbody nature of early emission, the standard prescription for the temperature as a function of the time since the merger is the powerlaw relation: T_emitted(t) = T_ref · (t(μ)/t_ref)^-α = T_emitted(μ). Here, T_ref is the reference temperature at some reference time, t_ref, while the time-delay to different surface elements, t(μ), is set by the geometry. Typical estimates for the powerlaw index are α ≈ 0.5 <cit.>. We emphasise that the powerlaw evolution in temperature is a theoretically-motivated prescription, which empirically breaks down in later phases but provides a decent fit across spectra in the photospheric epochs <cit.>. Eq. <ref> yields the emitted temperature, T_emitted(μ), given any powerlaw index α and a set of travel-times to the different surface-elements. The surface of equal arrival time for a spherically expanding photosphere is given by R_ph(μ)/R_ph(μ=1) = (1-β)/(1-βμ) <cit.>. Inserting the identity for the surface of equal arrival time yields the light travel-time difference between the ejecta's nearest front and the surface at μ: t(μ=1) - t(μ) = [R_ph(μ=1) - μ R_ph(μ)]/c = (R_ph(μ=1)/c)(1 - μ(1-β)/(1-βμ)) = β t(μ=1)(1-μ)/(1-βμ). §.§ Receding photosphere A notable systematic effect for this analysis is the sub-linear growth of the photospheric surface. The ejecta approaches homologous expansion, R ∝ t, within a timescale of minutes post-merger <cit.>. However, as the outer layers become increasingly optically thin, the photospheric surface recedes deeper into the ejecta. It is computationally straightforward to include any temporal evolution in velocity within Eqs. <ref> and <ref> given any parameterization of the velocity, v_bb(t) (or equivalently v_bb(μ)). We test a simple linear expansion of the velocity around 1.4 days post-merger, v_bb(t) = v_0 - a·(t - 1.4 days), where a is the recession rate of the photosphere, to model the sensitivity of this analysis to this effect. For AT2017gfo, between the X-shooter spectra at epochs 1 and 2 (1.4-2.4 days post-merger), both the photospheric velocity from the 1 μm P Cygni feature <cit.> and the velocity inferred from the blackbody fits <cit.> decrease from 0.28c to 0.25c. Thus, given the X-shooter velocity constraints, the expected parameter values are around v_0 ≈ 0.28c and a ≈ 0.03 c/day. 
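Before assembling the integrated expression, the interplay of the Doppler correction, the power-law cooling and the light-travel-time delay can be made explicit with a short numerical sketch. The reference temperature below is an assumed, illustrative value; the velocity and cooling index are chosen to resemble the first X-shooter epoch of AT2017gfo discussed later.

import numpy as np

beta, alpha = 0.28, 0.5                       # photospheric velocity (in c) and cooling power-law index
T_ref = 5400.0                                # assumed rest-frame reference temperature in K

mu = np.linspace(beta, 1.0, 200)              # surface elements visible to the observer
gamma = 1.0 / np.sqrt(1.0 - beta**2)
doppler = 1.0 / (gamma * (1.0 - beta * mu))   # relativistic Doppler factor delta(mu)

# light-travel-time delay relative to the nearest front (mu = 1): the limb is seen earlier, hence hotter
delay_frac = beta * (1.0 - mu) / (1.0 - beta * mu)     # [t(1) - t(mu)] / t(1)
t_ratio = 1.0 - delay_frac                             # equals (1-beta)/(1-beta*mu)
T_emitted = T_ref * t_ratio ** (-alpha)                # power-law cooling T ~ t^-alpha
T_obs = doppler * T_emitted                            # observed temperature of each surface element

print(f"Doppler factor range : {doppler.min():.3f} - {doppler.max():.3f}")
print(f"emitted temperature  : {T_emitted.min():.0f} - {T_emitted.max():.0f} K")
print(f"observed temperature : {T_obs.min():.0f} - {T_obs.max():.0f} K")

The two effects act in opposite senses across the surface (the Doppler boost peaks at the front, the cooling correction at the limb), which is the near-cancellation referred to in the abstract.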
For the characteristic light-travel time-scale of several hours, this effect is a noteworthy but relatively minor correction when compared to the other effects discussed (see Fig. <ref>). Note, blackbody fits to earlier spectra (at 0.49 and 0.53 days post-merger) are less constraining, but are still consistent with the velocity inferred 1.4 days post-merger <cit.>. This may indicate the recession-rate is even smaller in the period up to the 1st epoch X-shooter spectrum. §.§ The integrated expression Ultimately, this suggests that the observed temperature is a convolution of cooling temperatures and varying Doppler corrections. To determine the corresponding spectral shape of a multi-temperature blackbody, we must integrate the blackbodies over the emitting area and the solid angle (and in extension over the range of effective temperatures). The specific intensity I_λ(μ) is defined as the energy carried by a light ray from the surface-element at μ per unit of time, wavelength, area and solid angle. Integrating over the area and solid angle yields the specific luminosity <cit.>: L_λ = ∫ I_λ(μ) μ dA dμ dϕ When the radiation is in thermodynamic equilibrium, then the initial specific intensity is given by the Planck function, I_λ(μ)=B_λ(T_ obs(μ)), at the observed temperature, T_ obs(μ) = δ(μ) T_ emitted(μ). It is a mathematical property of a blackbody in bulk motion, that while the relativistic transformations boost the observed luminosity, this is exactly matched by a higher effective temperature <cit.>. That is the relativistic Doppler correction of a blackbody is indistinguishable from a blackbody with a shifted effective temperature. In the following, we deliberate on each of the constituent integrals (ie. the solid angle and the emitting area): Firstly, one can equivalently define the integral over the solid angle from a distant observers point-of-view (where μ≈ 1) as from the integral over the solid-angle of emitted rays at a fixed location of the photosphere. Both perspectives are mathematically identical as each surface-element, dμ dϕ, also corresponds to a specific emitted angle from any fixed point on the surface. Nevertheless, we consider the integral from the latter perspective as it is far more mathematically tractable given Eq. <ref>-<ref>. On the photospheric surface, the integral over the projection μ would in the co-moving frame of the ejecta range from 0 to 1, but the lower limit of the integral would increase to β in an observer's frame due to the aberration of angles (as detailed in <cit.>). As the expression is independent of the azimuthal angle ϕ due to cylindrical symmetry, the integral over this angle is trivial ∫_0^2π dϕ = 2π. Secondly, the integral over the emitting area for a spherical photosphere is ∫_A dA = 4π R_ph^2 (1-β/1-βμ)^2, where the geometrical factor in parenthesis accounts for light arriving from the limb having been emitted relatively earlier than from the front, when the photosphere had a smaller surface area <cit.>. The spherical geometry assumed here is that of the observed photosphere <cit.> which is not necessarily identical to the total opacity. These two may differ as there is a larger column-density for line-of-sight from higher latitudes, θ, which may shift the last-scattering surface outwards. This effect is sensitive to the exact composition, density and opacity profile, with the maximum shift being for ejecta with a relatively uniform radial structure. 
However, typical models suggesting a rapid radial decline in opacity and density with ρ∝ r^-n where n ∈ [3;6] (see for instance <cit.>), which would suggest a 2-3% level difference at the half-light radius. This bias is 0.7-1 dex smaller than the characteristic time-delay, so the observed spherical photosphere used in this analysis is close to the underlying geometry of the ejecta. This near correspondence is observationally corroborated by the consistency in geometry over multiple epochs <cit.>, whereas this limb-projection bias (if significant) in contrast should evolve in a time-dependent manner. Inserting the integral identities in Eq. <ref> yields the functional form for the specific luminosity: L_λ = ∫_0^2π∫_β^1 ∫_A B_λ(T_ obs(μ)) μ dA dμ dϕ = 2π∫_β^1 B_λ(T_ obs(μ) ) 4 π R_ph^2 (1-β/1-βμ)^2 μ dμ = 8 π^2 R_ph^2 ∫_β^1 B_λ(δ(μ) T_ ref(t(μ)/t_ ref)^-α) (1-β/1-βμ)^2 μ dμ = 4 π R_ph^2 π B_λ(T_ obs) f(β, λ) where we have first evaluated the integral over dA and dϕ, then written the full projection-dependent blackbody for each surface element, and finally introduced the wavelength-dependent correction, f(β, λ), to the spectral shape from relativistic and time-delay corrections. If the wavelength-dependence is small, f(β,λ) ≈ f(β), then the luminosity-weighted spectrum remains largely well-described by a single blackbody. One can, without loss of generality, include the time-delay effect of the receding velocity by inserting β(t) in Eq. <ref>. In Fig. <ref>, we illustrate the resulting change in blackbody shape caused by varying the velocities, temperatures or recession-rates of the photospheric surface given the spherically expanding photosphere observed in <cit.>. The characteristic shape is a blackbody with a less prominent peak and an excess of emission in the tails. We note, that to detect varying Doppler corrections from the spectral shape necessitates mildly relativistic velocities. For instance, a velocity of v_bb = 0.3c produces variations in temperature of around 30% over the surface, but only results a bias of a few percent at any optical and near-infrared wavelengths. Additionally, the spectral shape is only sensitive to the spread of effective temperatures, so there is a large degeneracy between varying rates of cooling and varying the Doppler correction. § TEMPERATURES AND VELOCITIES OF AT2017GFO An initial estimate for the temporal evolution of temperature can be found by comparing the spectral energy distribution (SED) across epochs. Fitting best-fit blackbody models to follow-up photometry suggests the effective temperatures from 0.5 to 5.5 days is consistent with a power-law decline in time T_ obs∝ t^-0.54 ± 0.01 <cit.>. Note, this inferred rapid cooling observed in effective temperature is due to both the intrinsic cooling of rest-frame temperature and the bias of decreasing expansion velocities, which lead to progressively less blueshifted ejecta. The photospheric approach is further biased as it precludes any contribution of emission and absorbtion lines which become increasingly biasing at later times. For these reasons, the X-shooter spectra temperatures presented in <cit.> and <cit.> provides the as-of-yet most comprehensive estimate of the evolution of blackbody temperatures for AT2017gfo. Emerging features can be explicitly parameterised and fit, while the rest-frame temperature can be inferred using the observed temperature and velocity-constraints from the 1μm P Cygni feature <cit.>. 
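The integral expression above can be evaluated numerically with a few lines of Python. In the sketch below the parameter values, the wavelength grid and the use of scipy's curve_fit to find the closest single-temperature Planck function are choices made here for illustration only; the printed deviation is a rough proxy for the wavelength-dependent correction f(β,λ).

import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_lam(lam, T):
    """Planck function B_lambda(T) in SI units."""
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def modified_blackbody(lam, beta, alpha, T_ref, n_mu=400):
    """Luminosity-weighted multi-temperature spectrum, up to a constant prefactor."""
    gamma = 1.0 / np.sqrt(1 - beta**2)
    mu = np.linspace(beta, 1.0, n_mu)
    delta = 1.0 / (gamma * (1 - beta * mu))
    t_ratio = 1.0 - beta * (1 - mu) / (1 - beta * mu)           # t(mu) / t(mu=1)
    T_obs = delta * T_ref * t_ratio ** (-alpha)
    weight = ((1 - beta) / (1 - beta * mu)) ** 2 * mu           # area and projection factors
    dmu = mu[1] - mu[0]
    return (planck_lam(lam[:, None], T_obs[None, :]) * weight).sum(axis=1) * dmu

lam = np.linspace(0.35e-6, 2.3e-6, 600)      # roughly the X-shooter wavelength range
beta, alpha, T_ref = 0.28, 0.54, 5400.0      # illustrative values only

spec = modified_blackbody(lam, beta, alpha, T_ref)
p0 = [spec.max() / planck_lam(lam, T_ref).max(), T_ref]
popt, _ = curve_fit(lambda l, A, T: A * planck_lam(l, T), lam, spec, p0=p0)
resid = spec / (popt[0] * planck_lam(lam, popt[1])) - 1.0
print("closest single-temperature blackbody: T = %.0f K" % popt[1])
print("maximum fractional deviation: %.2f %%" % (100 * np.abs(resid).max()))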
Fitting across epochs from 1.4 to 5.4 days after the merger, suggest a less-steep powerlaw with the rest-frame temperature decline being roughly consistent with T_ emitted∝ t^-0.46 ± 0.05. The slower rate of cooling for the the effective temperature is because the contribution of the decreasing Doppler corrections is deconvolved, while the overall constraining power is significantly reduced as these high-quality X-shooter spectra are rarer and taken over a smaller dynamical range in time. Given the characteristic size and time-delays for the Epoch 1, these cooling-rates suggest significant temperatures variations of order 500 K over the observed surface. There are several estimators of the typical velocities in AT2017gfo. Firstly, given an known distance the overall luminosity is set by the cross-sectional emitting area of the blackbody. This area can (using the time since the GW signal) be converted into the typical cross-sectional velocity, which as first computed in <cit.> for epoch 1 is around 0.3c. The most recent and tight analysis which uses the improved peculiar velocity constraints for distance in <cit.>, found the cross-sectional velocity to be v_⊥ = 0.285 ± 0.012 <cit.>. In addition, the identification of a Sr^+ P Cygni lines in <cit.> allows a spectral fit of the velocity of the line, which yields a photospheric velocity of v_ph = 0.278 ± 0.001 c. We note the small statistical uncertainty of the line-velocity estimate does not include the large systematic uncertainties associated with line-blending and reverberation effects. Nevertheless, across these many prior analysis the characteristic velocity suggests the Doppler boost will incoherently shift the effective temperature of different surface elements by up to 30%, which would produce a significantly wider SED than a pure blackbody spectrum. Curiously, the progressively increasing Doppler Correction from the orthogonal to the line-of-sight is largely off-set by the rapid cooling (see Fig. <ref>). That is for the characteristic velocities <cit.> and cooling-rates <cit.> for AT2017gfo the variations with wavelength, f(β,λ), are below 1% over the entire spectral range. This is merely a co-incidence of the characteristic velocities, the time-scales and the rate of cooling, which combine to yield a remarkable coherent observed effective temperature across the observable surface. § FITTING THE SPECTRAL CONTINUUM OF AT2017GFO The fitting framework of Eq. <ref> allows an exploration of the four parameters setting the observed blackbody: the intrinsic temperature, the relativistic Doppler correction, the cooling-rate and the blackbody normalisation. The parameters are fittable for an observed spectrum containing well-constrained spectral lines. The closest data-set within current observations to these requirements is the first epoch X-shooter spectrum of AT2017gfo (see Fig. <ref>). This spectrum has a distinct blackbody peak separate in wavelength from the P Cygni spectral component which in <cit.> is associated with three Sr^+ lines at 10 039, 10 330 and 10 918 Å. We note another potential interpretation is that this feature originates from the Helium 10 833 Å line, which as shown in <cit.> in the limiting case of non-local thermodynamic equilibrium condition and with He masses significantly greater than the fiducial values of their models could reproduce the observed feature. 
Nevertheless, as discussed in <cit.>, non-thermal particles may play a significant role in KNe and the fraction of He produced in neutron star mergers varies greatly between numerical models, so this identification cannot currently be ruled out. For this analysis one can include nuisance parameters to model the 1 μm P Cygni feature and the tentative emission lines at 1.5 μm and 2 μm, or entirely exclude the wavelengths affected by all these lines; the constraints provided are robust and we derive equivalent constraints on the blackbody width regardless of parameterizing or excluding these wavelengths. Regardless, the continuum properties have several interesting features. §.§ The intrinsic temperature The observed temperature, i.e. the combination of the intrinsic temperature, T_emitted, and the relativistic Doppler corrections, can be deconvolved for any velocity, v_bb (see Fig. <ref>). As quantified in Sec. <ref>, for the characteristic cooling-rates and velocities of AT2017gfo the variations in the effective temperature across the surface remain small. In contrast, the temperature in the emitted frame varies significantly across the surface due to light-travel time effects. Naturally, larger characteristic velocities imply lower intrinsic temperatures, but the decline in temperature with velocity is slower than the often used T_ emitted = T_ obs/δ(μ=1), which does not account for the full luminosity-average over the visible surface. We note that, given the strong constraints on the cooling-rate and velocities, one can determine the underlying physical temperature, which is essential for understanding the ejecta, the heating process and the local thermodynamic equilibrium. Combining the velocity and cooling-rate derived from X-shooter spectra <cit.>, we determine the emitted temperature for the ejecta nearest the observer, T_ emitted(μ=1,t = 1.4 days) = 4150 ± 60 K, and for the ejecta furthest from the observer, T_ emitted(μ=β,t = 1.4 days) = 4900 ± 70 K. §.§ The modified blackbody-width The modified blackbody-width can be fit for, which constrains the range of effective temperatures observed in a single epoch. For instance, the observed near-perfect blackbody continuum of AT2017gfo immediately suggests that parts of the parameter-landscape are unlikely. A rapidly expanding atmosphere, v_bb≈ 0.4-0.5c, a constant temperature, or a more rapid cooling than observed for AT2017gfo, would all produce a wider blackbody and thus be disfavored by the best-fit χ^2. In contrast, there are two scenarios which result in small spectral continuum deviations with f(β,λ) ≈ f(β): either 1) the characteristic velocities are small (resulting in shorter timescales for cooling and smaller Doppler corrections), or 2) the velocities and cooling-rates offset each other. In the case of AT2017gfo, the characteristic velocity and cooling-rates inferred from cross-spectra analysis (see Sec. <ref>) are exactly in this latter regime. However, given the diversity of kilonova properties in numerical simulations, specifically in terms of cooling-rates and characteristic velocities <cit.>, future objects may reside in parts of the parameter landscape where these latitudinal effects do not cancel. We illustrate the constraints on the spectral shape in the left panel of Fig. <ref>, showing the difference in goodness-of-fit, Δχ^2(α,β) = χ^2(α,β) - χ^2_ min, i.e. the difference in χ^2 between any combination of fixed parameters and the best fit over the entire parameter landscape.
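A stripped-down version of this grid scan is sketched below. The 'observed' spectrum is mock data standing in for the epoch-1 X-shooter spectrum, the reference temperature is held fixed and the spectral lines are ignored, so the resulting numbers illustrate the Δχ² bookkeeping rather than reproduce the actual fit.

import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_lam(lam, T):
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def model(lam, alpha, beta, T_ref=5400.0, n_mu=200):
    """Multi-temperature blackbody (arbitrary normalisation); T_ref fixed for brevity."""
    gamma = 1.0 / np.sqrt(1 - beta**2)
    mu = np.linspace(beta, 1.0, n_mu)
    T_obs = T_ref * (1 - beta * (1 - mu) / (1 - beta * mu)) ** (-alpha) / (gamma * (1 - beta * mu))
    weight = ((1 - beta) / (1 - beta * mu)) ** 2 * mu
    return (planck_lam(lam[:, None], T_obs[None, :]) * weight).sum(axis=1) * (mu[1] - mu[0])

rng = np.random.default_rng(1)
lam = np.linspace(0.35e-6, 2.3e-6, 400)
truth = model(lam, 0.54, 0.28)                     # mock 'observed' spectrum
sigma = 0.01 * truth
data = truth + sigma * rng.standard_normal(lam.size)

alphas = np.linspace(0.0, 1.0, 21)
betas = np.linspace(0.05, 0.45, 21)
chi2 = np.empty((alphas.size, betas.size))
for i, a in enumerate(alphas):
    for j, b in enumerate(betas):
        m = model(lam, a, b)
        amp = np.sum(data * m / sigma**2) / np.sum(m**2 / sigma**2)   # analytic best amplitude
        chi2[i, j] = np.sum(((data - amp * m) / sigma) ** 2)

dchi2 = chi2 - chi2.min()          # Delta chi^2 relative to the global best fit
k = dchi2.argmin()
print("best grid point: alpha = %.2f, beta = %.2f" % (alphas[k // betas.size], betas[k % betas.size]))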
Due to the high signal-to-noise and more than 46,000 unique wavelengths in the spectrum the increase in χ^2 from modifying the blackbody is statistically significant. For smaller velocities the blackbody shape provides increasingly less stringent constraints. In contrast, with increasing velocity both time-delays and Doppler variations grow, which result progressively larger deviations from a single-temperature blackbody and a smaller region of parameter landscape where the two angular effects cancel. While the statistical constraints in fitting the multi-temperature blackbody are highly significant, we urge caution when interpreting only the statistical uncertainty. Modelling is limited by as-of-yet undiagnosed spectral features as such systematic effects may also create deviations from a simple blackbody and are ill-constrained without a prior model of what spectral features are present. Additionally, as mentioned previously, not only angular but also radial variations in temperature can produce a broadening of the blackbody. If future observations suggest a multi-temperature blackbody it will be ill-constrained from a single spectrum whether this is caused by radial or angular temperature-variation. However, with spectra across multiple epochs constraining the cooling-rate, the latitudinal effects can be isolated as done in this analysis. The single-temperature blackbody observed in AT2017gfo suggests limits on both radial and angular variations in temperature are attainable (see Sec. <ref>). §.§ The normalisation The normalisation is uniquely set by the temperature, the emitting area and the distance to the source as follows from Stefan–Boltzmann law. Therefore given any prior constraints on the emitting area and temperature <cit.>, one can translate any normalisation to an inferred distance. As shown in the right panel of Fig. <ref>, this implies that the best-fit distance varies across the parameter-landscape. Note, one could also define the emitting area directly from the expansion velocity of the blackbody and the time since merger, which would mainly emphasise the generic relationship between emitting area and inferred distance not the subtler dependence on the normalisation parameter. Using a strong prior on the cosmological distance [such as provided by surface brightness fluctuations <cit.>, the fundamental plane <cit.> or the expanding photosphere method <cit.>], one could further constrain the cooling-rate α, the velocity β or T_ emitted. That is that underlying parameters of the blackbody must not only satisfy a goodness-of-fit but also fall within a limited cosmological range in distance. For instance, the parts of the parameter-landscape which yield luminosity distances within 3σ agreement of d_L=44.5 ± 0.6 reported in <cit.> are indicated by the red hatched region. However, given the sizeable statistical and systematic uncertainties in measuring cosmological distances and the relatively small variations in the inferred distance found across the parameter-landscape, invoking a prior on d_L is unlikely to provide tight constraints on other parameters (especially in comparison with the significant variation of χ^2 illustrated in the left panel of Fig. <ref>). § DISCUSSION We have derived and quantified the corrections on the spectral continuum of kilonovae due to relativistic Doppler and time-delay effects. These corrections are essential for determining the intrinsic physical temperature in the ejecta, as shown in this analysis for the epoch 1 X-shooter spectrum. 
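The normalisation-to-distance mapping described above reduces to a few lines once the emitting area is tied to the expansion velocity and the time since merger. In the sketch below the fitted normalisation A_fit = (R_ph/d_L)^2 is a hypothetical number (chosen so that the output lands near the literature distance), while the velocity and epoch follow the values quoted earlier; it is meant only to show the arithmetic, not to re-derive the measurement.

import numpy as np

c = 2.998e8
Mpc = 3.086e22
day = 86400.0

A_fit = 5.8e-23        # hypothetical fitted normalisation, (R_ph / d_L)^2
v_perp = 0.285 * c     # cross-sectional expansion velocity
t = 1.4 * day          # approximate time since merger for epoch 1

R_ph = v_perp * t                       # photospheric radius from free expansion
d_L = R_ph / np.sqrt(A_fit) / Mpc       # luminosity distance implied by the normalisation
print("R_ph = %.2e m, d_L = %.1f Mpc" % (R_ph, d_L))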
More broadly, the multi-temperature blackbody framework provides a new avenue for constraining properties for kilonovae with high SNR spectra spanning ultraviolet, optical and near-infrared wavelengths. These high signal-to-noise spectra covering a broad dynamic range of wavelengths are required to constrain the percent-level continuum variations for the mildly relativistic velocities. Interestingly, while the standard framework of fitting single-temperature blackbodies is at least theoretically invalid for relativistic and/or rapidly cooling ejecta, it still provides a convenient functional approximation and proves remarkably coherent for the characteristic velocities and cooling-rates of AT2017gfo. We note the multi-temperature perspective is interesting in the context of future kilonova detections for probing their potentially diverse properties, where the blackbody width may provide independent and corroborating constraints. Follow-up spectroscopy of future kilonovae will peer into even earlier phases of the expansion, potentially exploring higher velocities, v_bb≈ 0.4c-0.5c, as suggested in some hydrodynamical merger simulations <cit.>. Given the strongly non-linear relation between the expansion velocity and the magnitude of SED variations seen in Fig. <ref>, such observations will significantly deviate from a simple blackbody model. Lastly, we note that the observational practice of fitting a Planck function to kilonova spectra is remarkably accurate when compared to current numerical simulations. Particularly, the wavelength-dependent opacity of the atmosphere and the temperature-gradients within the ejecta may naively imply the need for additional spectral corrections as different radial temperatures introduce an additional broadening of the blackbody. As such, any prescription for the opacity and temperature-gradient can easily be modelled and included within the framework of this analysis (alongside the other parameters investigated). However, modelling is complicated as the opacity is dominated by lines from elements, which currently have limited atomic data and incomplete line-lists <cit.>. The significant wavelength-dependent opacity of kilonova atmospheres suggests different wavelengths probe different radii <cit.>, which, if combined with a radial temperature gradient, would suggest different wavelengths are probing different characteristic temperatures (ie. the observed spectrum is a convolution of radial blackbodies). A convolution of either radial and angular temperature variations will produce a broader blackbody, while the combination of radial and angular variations will be larger than either effect individually. Therefore, the single-temperature nature of the observed blackbody necessitates 1) the uniform effective temperature over the observed latitudes of the photosphere, but it also requires 2) a rapid transition to optical thickness as the temperature at the thermalisation depth must be relatively consistent across wavelengths <cit.>. This analysis illustrates that the angular variations in T_ obs are small for AT2017gfo, but to reproduce the observed single-temperature blackbody also requires a small variation of T_ obs over the radial range of thermalisation depth. Any quantified limits require further research to both constrain the wavelength-dependent thermalisation depth and the nature of the gradient in temperature. Such discoveries may provide additional insights into the origin of the blackbody spectrum and the environment of kilonova atmospheres. 
The author would like to thank Darach Watson, Stuart Sim, Ehud Nakar, Oliver Just and Andreas Flörs for useful discussions and insightful feedback. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. § DATA AVAILABILITY Work in this paper was based on observations made with European Southern Observatory (ESO) telescopes at the Paranal Observatory under programmes 099.D-0382 (principal investigator E. Pian), 099.D-0622 (principal investigator P. D’Avanzo), 099.D-0376 (principal investigator S. J. Smartt) and 099.D-0191 (principal investigator A. Grado). The data are available at http://archive.eso.org.
http://arxiv.org/abs/2306.08047v1
20230613180619
Information compression via hidden subgroup quantum autoencoders
[ "Feiyang Liu", "Kaiming Bian", "Fei Meng", "Wen Zhang", "Oscar Dahlsten" ]
quant-ph
[ "quant-ph" ]
Shenzhen Institute for Quantum Science and Engineering and Department of Physics, SUSTech, Nanshan District, Shenzhen, China. Shenzhen Institute for Quantum Science and Engineering and Department of Physics, SUSTech, Nanshan District, Shenzhen, China. [email protected] Department of Physics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR Shenzhen Institute for Quantum Science and Engineering and Department of Physics, SUSTech, Nanshan District, Shenzhen, China. [email protected] HiSilicon Research, Huawei Technologies Co., Ltd., Shenzhen, China [email protected] Department of Physics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR Shenzhen Institute for Quantum Science and Engineering and Department of Physics, SUSTech, Nanshan District, Shenzhen, China. Institute of Nanoscience and Applications, Southern University of Science and Technology, Shenzhen 518055, China We design a quantum method for non-linear classical information compression. For compressing data obeying symmetries of the so-called hidden subgroup type, we prove an exponential speedup of quantum algorithm in terms of query complexity. We then generalize the method to a variational quantum algorithm that automatically compresses time-series data stored in a database with a priori unknown symmetries of the hidden subgroup type. The automatic compression exploits an encoder that computes the hidden subgroup and a decoder that reconstructs the data using the group structure. The algorithm can thus be viewed as a synthesis of hidden subgroup quantum computing and quantum autoencoders. The output of our algorithm compares favourably with that of a deep classical autoencoder for a tractable illustrative example. Our results show how quantum computers can efficiently compress certain types of data that cannot be efficiently compressible by classical computers. As an additional application, the computational advantage of the quantum compressor over its classical counterpart can be transformed into a quantum advantage for intelligent energy harvesting. Information compression via hidden subgroup quantum autoencoders Oscar Dahlsten July 31, 2023 ================================================================ § INTRODUCTION Compression is the task of reducing data size by appropriate encoding, which increases information storage and transmission efficiency. It is an important technology that has wide applicability. For practical viability, it is important that the coding and decoding process can both be computed efficiently <cit.>. A prominent method of compression is autoencoders <cit.>, artificial neural networks that can automatically learn to compress unlabeled data without prior knowledge of the underlying patterns. Exploiting a bottleneck structure, an autoencoder can automatically extract essential features as compressed data that can be reconstructed to the original data <cit.>. Recently, the quantum version of autoencoders, to our knowledge first proposed independently in Refs. <cit.>, has been applied to encode <cit.>, compress <cit.>, classify and denoise <cit.> quantum data. Experimental implementations <cit.> suggest that quantum autoencoders are a promising tool for quantum machine learning and quantum networks even in the noisy intermediate-scale quantum era. Quantum algorithms are able to extract features in the data that are not efficiently accessible to classical computers. 
For instance, the quantum period finding algorithm <cit.> is exponentially faster than the classical algorithm in identifying the period of a function. More generally, the hidden subgroup problem (HSP) <cit.> represents a broad class of problems (include period finding) that do not have efficient classical algorithms. Whereas there are many types of HSP that have efficient quantum algorithms. Can quantum speedup on HSP be turned into an advantage for information compression? To tackle this question, we combine quantum autoencoders and quantum HSP algorithm and create a concrete algorithm to compress data that admits symmetries of the HSP type. We proved an exponential speedup in query complexity of quantum algorithms in data compression with symmetries of the hidden subgroup type, extending the quantum computational advantage in HSP to data compression. Then we extend this algorithm to a variational quantum auto-encoder (Fig. <ref>) by designing a parameterized quantum circuit for HSP, making the variational algorithm capable of finding the hidden subgroup automatically. This is achieved by establishing a parameterized quantum circuit ansatz for quantum Fourier transforms that cover a wide range of the Abelian HSP case. We give such an algorithm and implement it explicitly on simple compression examples where classical computers are used to simulate (small) quantum computers. Our algorithm opens a new direction for quantum machine learning, where we use an ansatz on the circuit structure, which generalizes known quantum algorithms, to achieve more efficient feature extraction. Our algorithm suggests a promising advantage for quantum computers in compressing sequential data over its classical counterpart. Furthermore, this computational advantage in compression can then be mapped to a thermodynamic advantage of energy harvesting, where an intelligent extractor with access to a quantum computer can effectively extract more work within a limited amount of time. § PRELIMINARIES §.§ Data compression in database. Compression plays a crucial role in optimizing data storage and data analysis by eliminating duplicated data, which otherwise consumes unnecessary resources within a database. A database is maintained by an index structure and physical data stored on the hardware. Modern implementation of a database usually allocates a certain amount of physical memory before any data is stored in it. The size the database takes on the hardware is referred to as its physical size, while the size of data stored inside the database is referred to as logical size. Because physically deleting data is resource-consuming, to save the cost, the deleted data is not physically erased immediately after its deletion, but it will be overwritten when new data comes. This type of deletion strategy is known as logical deletion, which is a standard protocol used in file management systems <cit.>. Here we aim to develop a framework for data compression by logically deleting any duplicated values in a database. More formally, let 𝒬 be a database that returns a value 𝒬(i) upon query on index i. The data 𝒬(i) is said to be a duplicated value if there exists an index j such that 𝒬(i) = 𝒬(j). We want to logically delete the duplicated value 𝒬(i) by labeling the index i as deleted and ready for storing new data. This logical deletion can be accomplished by a characteristic function c that specifies if a position of the database is free for storing new data, c(i) = 1 if i is free to use, 0 if i is occupied. 
With this characteristic function c, the database 𝒬 can be freely modified on the index set F={i|c(i)=1}, which is referred to as the set of free indices, without affecting its data integrity. To retrieve the original data from the database resulting from logical deletion, we also need a query function q that helps us to retrieve the correct data in the modified database 𝒬', 𝒬 (i) = 𝒬'(q(i)) . for any new database state 𝒬' with modifications made on the free indices F. Under some conditions, a classical database can also be queried quantum mechanically<cit.>.For example, a compact disk, which is an optical device that stores classical data, can be queried by quantum lights in the sense that data stored at different indices can be queried simultaneously and a state of superposed data will be returned. In the rest of the paper, we will regard the database 𝒬 as an oracle that can be queried by a quantum computer, which is a unitary that maps |i⟩|0⟩ to |i⟩|𝒬(i)⟩. The quantum computer can prepare a superposition state 1/√(N)∑_i |i⟩|0⟩ to query the database and obtain a superimposed data set 1/√(N)∑_i |i⟩|𝒬(i)⟩ as output, where N is the number of indices in the database. §.§ Time series data generation and compression via symmetry. We consider a database 𝒬 that stores data of discrete time-series (or other sequential data) generated from a source that is of length N. The data can be understood as an ordered sequence {x_0, x_2,...,x_i,...,x_N-1}, where each data point x_i ∈{0,1}^m is a bit string of length m. Here the indices i's are referred to as the values of time and querying the database 𝒬 on i gives the value x_i stored in position i. If each x_i is independently sampled from a uniform distribution, then this sequence cannot be compressed. However, if each sequence of the source is generated by a function f, such that x_i = f(i) for i = 1, ..., N, then each data sequence can in principle be compressed as a description of the corresponding function f. From now on, we refer to this function f as the generating function of the time series and use i as the time variable. In fact, the data stored in the database can be compressed even if the exact expression of the generating function is not fully recognized. Different levels of knowledge of the generating function will produce different levels of compression ratio. For example, consider the data sequence {1,4,9,16,25,16,9,4,1,4,9,16,25,16,...} generated by a function f. On the first level, it is obviously periodic, with f(i) = f(i + 8 k ), for all integer k. Identifying this periodic pattern in the database helps us to reduce its redundancy by only keeping the record of its first period { 1,4,9,16,25,16,9,4 }, and marking all other indices free to use by setting c(i)=1 for all i>8. On the second level, the numbers have reflection symmetry within one period, i.e. f(i) = f(j) whenever the two indices satisfy i+j = 10. This reflection symmetry helps us to further compress the data to {1,4,9,16,25}. On the third level, we identify the exact expression of the generating function and the data can be further compressed to the description of the function f(i)= i^2 for i = 1,2,3,4,5, and all the indices in the database can be marked free to use. But even if we do not get the exact expression of the generating function, the periodic structure and the reflection symmetry in the data can be used to compress the data. This type of compression can be achieved by identifying the symmetry in the data. 
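For the worked example above, the characteristic function c and the query function q can be written down explicitly. The short Python sketch below (indices starting at 1, period 8 plus the reflection about i + j = 10) is a minimal illustration of the bookkeeping, not part of the formal framework developed later.

N = 48
base = [1, 4, 9, 16, 25, 16, 9, 4]                  # one full period of the example sequence
database = {i: base[(i - 1) % 8] for i in range(1, N + 1)}

def q(i):
    """Map any index to the representative index that still stores its value."""
    i0 = (i - 1) % 8 + 1                 # periodicity: f(i) = f(i + 8k)
    return 10 - i0 if i0 > 5 else i0     # reflection:  f(i) = f(j) whenever i + j = 10

def c(i):
    """1 if the slot is free to be overwritten, 0 if it must be kept."""
    return 0 if q(i) == i else 1

kept = [i for i in range(1, N + 1) if c(i) == 0]
assert all(database[i] == database[q(i)] for i in range(1, N + 1))
print("kept indices:", kept)                                  # [1, 2, 3, 4, 5]
print("compression: %d values -> %d values" % (N, len(kept)))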
In this paper, we study data compression via automatic identification of the symmetries of its unknown generating function. Since symmetry is better described by groups, it inspires us to formulate such a data compression task in terms of the HSP, which is introduced in the following section. We take particular interest in those sets where the different f's have the same hidden subgroup that describes the symmetry, e.g. sequences with the same periodicity r in the expression f(i+r)=f(i) , ∀ i but with different function values. For such sources of data, we expect this method to be particularly powerful, as all the data sequences generated can be described by the same symmetry. From now on, we restrain our discussion for time series data of length N=2^n, in order to achieve a simple circuit implementation (details in Appendix <ref>). In this way, any index i can be identified to a binary string 𝐢=i_1i_2...i_n, through a one-to-one correspondence τ: i = τ(𝐢) = 2^n-1 i_1 +2^n-2 i_2 + … + 2 i_n-1 + i_n , where i_k is the k-th bit in the bit string 𝐢. Bold fonts will represent bit strings of the binary representation. Then the generating function f(i) can be naturally identified to f(𝐢), and we will use these two notations interchangeably. §.§ Hidden subgroup problems as data compression. Introduction to the HSP. The HSP is a computational algebraic problem that concerns hidden symmetries in functions. The well-known quantum algorithm to solve this problem has an advantage in query complexity compared with its classical counterparts <cit.>. We will show in the following that solving HSP can also be considered as a way to compress data, since it can identify duplicated data values. Given an Abelian group G and a subgroup H, suppose that a function f: G →{0,1}^m satisfies f(i_1) = f(i_2) i_1-i_2∈ H . We then say that H is a hidden subgroup of f, and f has the symmetry of the hidden subgroup H. The HSP is to find H, given an oracle of f, where the oracle is a black box that returns f(i) upon each query of i. An example of HSP is to find the period of a sequence. For example, {1,4,9,16,25,1,4,9,16,25,1,4,9,16,25} is a sequence of length 15 which has a period of length 5, because f(i) = f(j ) if and only if i - j = 0, 5, or 10. Identifying the correct period is equivalent to identifying the correct hidden subgroup of _15, and in this example, the hidden subgroup is {0, 5, 10}<_15. Efficiently assigning group structures to the set of indices. In order to formulate database compression as an HSP, we have to assign a group structure to the indices of the database that stores the sequence. For simplicity (details in Appendix <ref>), we assume that the data sequence is of length 2^n, where n is the number of bits required to specify an index i. Let us begin with examples and then present the general theory. Suppose that the indices of a sequence take values i=0,1,2,3,...,7; then we can represent the indices by a bit string of length 3, say, 𝐢=000,001,010,...,111, where the bold font is used to represent bit strings. There are at least 3 types of group structures that can be specified: _2 ×_2 ×_2, _4×_2 or _8. One way is to define the group operation between any bit strings as the bit-wise addition modulo 2, such that 011 ⊕ 101 =110, and in this way the group is isomorphic to _2 ×_2 ×_2. A second way is to define the group operation as the standard addition of binary numbers modulo 8, say, 010+011=101, and then the group of binary strings is isomorphic to _8. 
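Before turning to the third, mixed assignment described next, the first two ways of putting a group structure on 3-bit indices can be made concrete in a few lines. The brute-force subgroup search below is for illustration only and is precisely the exponential step that the quantum algorithm of the later sections replaces; the example sequence is an arbitrary choice with a Simon-type symmetry.

n = 3
N = 2 ** n

def op_xor(i, j):       # bit-wise addition modulo 2: group isomorphic to Z_2 x Z_2 x Z_2
    return i ^ j

def op_mod(i, j):       # addition of binary numbers modulo 8: group isomorphic to Z_8
    return (i + j) % N

def hidden_subgroup(f, op):
    """All h with f(i * h) = f(i) for every index i (brute force, for illustration only)."""
    return [h for h in range(N) if all(f[op(i, h)] == f[i] for i in range(N))]

f = [0, 1, 1, 0, 4, 5, 5, 4]    # sequence with the symmetry f(i) = f(i XOR 3)
print("hidden subgroup under bit-wise addition:", hidden_subgroup(f, op_xor))   # [0, 3]
print("hidden subgroup under addition mod 8   :", hidden_subgroup(f, op_mod))   # [0]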
A third way is to divide the bit string into two parts, where the first part consists of 2 bits and the second part consists of only 1 bit. Given two bit strings abc and xyz, define the group operation * as abc * xyz = Append[ab + xy 4 , c⊕ z] , where “Append" is the string operation that appends the second string to the end of the first string. In this way, the group is isomorphic to _4 ×_2, where we divide the bits into two parts and the first part is added using + modulo 4 and the second part is added using bit-wise addition ⊕. A further freedom in assigning group structures is isomorphisms induced by permuting the digits in the bit string; instead of directly defining the group structure to the bit string representation of indices i, we can first apply a bit-permutation and then define the group structure. For example, the group structure _4 can be assigned to the set of indices (bit strings of length 2) via the following one-to-one correspondences (isomorphisms) _4 = { 0, 1, 2, 3 } G_0 = { 00, 01, 10, 11 } G_1 = { 00, 10, 01, 11 } , where G_0 is the canonical way of assigning the group structure and G_1 is assigned similarly but after we permute the first and the second bit. More generally, the group structure is specified in three steps. Suppose we want to assign a group G that is isomorphic to _2^m_1×_2^m_2× ... ×_2^m_q, where m_1,m_2,...,m_q form an integer partition of n such that m_1 +m_2+...+m_q=n. First, we identify the index i to its binary representation 𝐢=τ^-1(i) (defined in Eq.(<ref>)). Second, we permute the bit string 𝐢 by a bit-permutation σ on its binary digits such that σ∈ S_n, and we divide the permuted bit string σ(𝐢) into q parts, each part being of length m_1, m_2, ..., m_q respectively. We denote such a bit string as 𝐢_1𝐢_2𝐢_q, where 𝐢_k is the bit string of the k-th part of the permuted bit string σ(𝐢), and 𝐢_k is identified to a number in _2^m_k via τ. Finally, we define the group G by specifying its group operation *. Given two indices i and j, we get the corresponding 𝐢_1𝐢_2...𝐢_q and 𝐣_1𝐣_2...𝐣_q after applying the first two steps. Then as a part of adding i and j, denoted as i * j, we calculate a third sequence 𝐜_1𝐜_2...𝐜_q by 𝐜_k = τ^-1[τ(𝐢_k) + τ (𝐣_k) 2^m_k] . In the below equation, the group operation * between i and j is defined as i * j = τ(𝐜_1𝐜_2...𝐜_q) . Note that in general, the permutation σ can be chosen from S_2^n, but here we chose the so-called bit-permutation, which is a subset of all the 2^n! permutations. This is because bit-permutation can be easily implemented in quantum circuits; one simply uses SWAP operations to permute the digits. It requires exponential parameters to specify an arbitrary permutation but it takes only polynomial parameters to specify a bit-permutation. This efficiency difference in the number of parameters to specify a group structure inspires us to have the following definition: A group structure is said to be efficient if it is constructed by bit-permutations, instead of arbitrary permutations. In particular, when the permutation σ is identity, we call the corresponding assignment of group structure as the canonical way to assign a group isomorphic to _2^m_1×_2^m_2× ... ×_2^m_q. A more detailed introduction to assigning group structures over the set of indices {i} can be found in Appendix <ref>. The hidden subgroup compression of database. 
Suppose that the generating function f has a hidden subgroup H, with respect to a given group structure G on the set of indices i=0,1,2,...,2^n-1, so |G|=2^n, but that the value f may take on a given coset is not deterministic. A coset c_0 H for c_0∈ G is defined as the set {c_0 + h | h ∈ H}. Then the hidden subgroup H imposes a hidden pattern in the time series data generated by the function f, which is a redundancy that can be eliminated to achieve data compression. We call this kind of compression hidden subgroup compression, defined as follows, We are given a group structure G defined on the set of indices {𝐢} and a function f that hides a subgroup H<G. The task is to compress the database 𝒬_f that stores the value of a function f, 𝒬_f(𝐢) = f(𝐢) by constructing the characteristic function c that labels all free indices which can be overwritten, and a query function q that correctly retrieves the data upon query, 𝒬'_f(q(𝐢))=f(𝐢), for any updated database 𝒬' with data modified on free indices. The HSP database compression can be achieved by finding the generator of the hidden subgroup using any HSP algorithm: both the characteristic function c and the query function q can be efficiently constructed and evaluated using the generator. For example, let us consider Simon's problem <cit.> where the function f is assumed to have the symmetry that f(𝐢) = f (𝐣) if and only if 𝐢 = 𝐣⊕𝐬, where 𝐬 is an unknown secret bit string. This secret key 𝐬 is the generator of the hidden subgroup {0,𝐬}, where the group operation is bit-wise addition ⊕. After obtaining the secret 𝐬 using the HSP algorithm, we can construct the characteristic function c by c(𝐢) = 1 if 𝐢⊕ 𝐬 < 𝐢 , 0 otherwise, where we say 𝐢<𝐣 if and only if τ(𝐢)<τ(𝐣). In other words, for any pair of two indices 𝐢,𝐣 such that f(𝐢) = f(𝐣), we keep the smaller index and set the larger index free. Therefore, we can construct the query function q by querying the smaller index, q(𝐢) = min{𝐢, 𝐢 ⊕ 𝐬} . In this way, we free up half of the capacity of the database, which can be used to store new data. To find the secret key 𝐬, it is proved <cit.> that even the best classical HSP algorithm requires an exponential number of queries to the database 𝒬, but the quantum HSP algorithm requires only O(n) queries, which is exponentially faster. Then the exponential speedup of the quantum HSP algorithm implies an exponential speedup in database compression. More generally, let us analyze the query complexity of database compression for both classical and quantum algorithms and characterize when there will be an exponential speedup for quantum algorithms. For the Abelian hidden subgroup problem, the best known classical algorithm <cit.> has query complexity O(√(|G|/|H|)), while in contrast, the query complexity of the quantum HSP algorithm is O(log^4 |G|), which is always polynomial  <cit.>. This implies the following theorem, telling us when there is an exponential speedup in database compression using quantum algorithm. When the order of the hidden subgroup is not too large such that |G|/|H| = O(|G|), then there is an exponential quantum speedup in terms of query complexity in HSP database compression, compared to the best classical algorithm. In particular, for Simion's problem, |H|=2 and this theorem proves the exponential speedup in compressing the database that has the symmetry of Simon's problem. Compression ratio of HSP database compression. 
As a simple analysis, the least amount of memory bits κ to directly store the function values {f(0), f(1),....,f(|G|-1)} in the database is κ_direct(f) = |G| m = N m bits, where |G| is the cardinality of G, and m is the number of bits needed to specify a time series entry f(i). Exploiting any hidden subgroup structure, however, will reduce the memory required to a compressed amount κ_comp (f) = |G|/|H| m bits. To see this, consider the coset decomposition G = c_0 H ∪ c_1 H ∪ ...∪ c_k H . where c_0, c_1, ..., c_k are representative elements in different cosets. For an Abelian group G, these cosets form a quotient group G/H. The hidden subgroup structure implies that f takes constant values within each coset. Then we only need to store the function values over the representative elements of the quotient group G/H, whose cardinality is |G|/|H|. In other words, the characteristic function c is specified by c(i) = 0, if i ∈{c_0,c_1,...,c_k} 1, otherwise. And the query function q is specified by q(i) = i H , where the H operation is defined such that i = j H if and only if i-j ∈ H. The power and limitation of HSP database compression. Is the HSP structure general enough to be found in most databases? The answer is no. If the database has a HSP structure, then the number of duplicated values must divide the total number of indices N of the database. Then a natural question arises: when can we find an HSP structure in a database? And can any HSP structure be used Now let us discuss when HSP symmetry can be found in databases. For example, let us consider balanced sequences as the other illustrative example of HSP database compression. We, therefore, derive the following proposition: Let 𝒬_l be a database that stores a balanced binary sequence l:={f(0), f(1),....,f(N-1)} of length N, where “balanced" means the sequence has the same number of 0's and 1's. Then there exist a group structure G on the set of indices, under which the sequence stored in the database 𝒬 admits a hidden subgroup structure. This proposition can be proved by construction. Let G'= _N and H'={0,2,4,...,N-2} be a subgroup of G. Obviously, the binary sequence l'={0,1,0,1,0,1,...,0,1} has the hidden subgroup H'<G'. Now, let σ∈ S_N be a permutation that maps this binary sequence l' to l={f(0), f(1),....,f(N-1)}. Then it is easy to verify that l has a hidden subgroup H<G, where H and G are isomorphic to H' and G' via the permutation σ. Does the above proposition imply that the database storing an arbitrary balanced binary sequence can be compressed by its hidden subgroup? The answer is no. To see this, we simply count how many bits must be used to specify the group structure G under which the data admits a symmetry of HSP type. For a binary sequence of length N=2^n, there are N! isomorphisms for a given group G. So the amount of possible group structures is at least N!, and thus specifying one group among all the possibilities takes at least log_2 (N!)≈ N log_2 N bits; while storing directly the binary sequence of length N takes only N bits. Since the characteristic function c and the query function q depends on the specification of the group structure, it generally takes at least N log_2 N bits to store the function c and q. Then the number of bits used to specify the compressed data exceeds the bits to specify the original data and compression does not hold. 
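The coset bookkeeping in the expressions above can be illustrated directly. In the sketch below the group Z_8, the hidden subgroup H = {0, 4} and the stored sequence are arbitrary choices made for demonstration; the representatives, the characteristic function and the query function follow the definitions just given.

N = 8
H = [0, 4]                                   # hidden subgroup of Z_8 (illustrative choice)
add = lambda i, j: (i + j) % N               # group operation of Z_8

def coset(i):
    return frozenset(add(i, h) for h in H)

representatives = sorted({min(coset(i)) for i in range(N)})   # the elements c_0, c_1, ..., c_k

def c(i):                                    # characteristic function: 1 means free to overwrite
    return 0 if i in representatives else 1

def q(i):                                    # query function: the representative of i's coset
    return min(coset(i))

f = [7, 2, 9, 5, 7, 2, 9, 5]                 # a sequence that is constant on each coset of H
assert all(f[q(i)] == f[i] for i in range(N))
print("coset representatives:", representatives)              # [0, 1, 2, 3]
print("stored values: %d of %d (compression factor |H| = %d)" % (len(representatives), N, N // len(representatives)))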
The above calculation suggests that compression of all the balanced binary sequences is not possible unless the data sequence has further restrictions that reduce the number of parameters to specify a group structure G. Based on this reason, we consider those efficient group structures assigned by the bit-permutations (see Definition <ref>). There are only n! isomorphisms induced by bit-permutations, to specify which we only need O(n log_2 n) bits, suggesting a possible large compression ratio for large n. The exact number of bits of the compressed data is discussed in Sec. <ref>. An alternative way of generating and compressing a data sequence is via recurrence relations, which is discussed in Appendix <ref>. §.§ Solving general Abelian hidden subgroup problem via general quantum Fourier transformation. Now we briefly introduce the quantum hidden subgroup algorithm <cit.>. Familiar readers can safely skip this subsection. Given a finite Abelian group G defined on the set of indices of the data sequence and a generating function f that hides the subgroup H<G, we have the following quantum algorithm to find the hidden subgroup H. First, we prepare two registers in |0⟩|0⟩ and then apply the general quantum Fourier transform _G with respect to the group G (see Appendix <ref> for an introduction) on the first register, generating an equal superposition of all bit-strings |0⟩|0⟩1/√(|G|)∑_i∈ G|i⟩|0⟩ . Then apply the quantum oracle U_f to evaluate the function f, 1/√(|G|)∑_i∈ G|i⟩|f(i)⟩ = 1/√(|G/H|)∑_c ∈ G/H |c+H⟩|f(c)⟩ , where |c+H⟩ = 1/√(|H|)∑_h∈ H|c+h⟩ and the equality is due to f being constant in each coset. Measure the second register and condition on the outcome f(c_0) such that the post-measurement state of the first register is |c_0 + H⟩. Next, we apply the quantum Fourier transform _G again on the first register and we have |c_0 + H⟩1/√(|H^⊥|)∑_j ∈ H^⊥χ_j (c_0) |j⟩ , where χ_j(c_0) is the character of c_0 from a representation of the group G labeled by j, and H^⊥ ={j| χ_j(h) = 1 ∀ h ∈ H} is the orthogonal group of H. Measuring the first register in the computational basis and we get a sample j from the orthogonal group H^⊥. Repeat the process for k times and we obtain a set of random elements drawn uniformly from H^⊥. Using this set of sampled elements we can efficiently compute the generator of the group H by a classical algorithm. A detailed introduction to the quantum hidden subgroup algorithm with proofs can be found in Appendix <ref> and Appendix <ref>. The quantum Fourier transform is the fundamental basis for this protocol, and it can be efficiently implemented in quantum circuits (see Appendix <ref>). §.§ Generalized hidden subgroup compression by searching over group structures. Intuitively, it would give us an exponential speedup if we can apply the quantum algorithm solving the HSP shown in Sec.<ref> to compress a general time series data stored in a database. However, to successfully apply the algorithm and compress the data, we have to know the structure of group G; otherwise, we do not know which QFT should be implemented. For example, in Simon's problem, the generating function has a symmetry f(𝐢)=f(𝐣) if and only if there exists a bit string 𝐬 such that 𝐢=𝐣⊕𝐬, where ⊕ is bit-wise addition modulo 2. 
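For small instances the four-step routine just described can be mimicked with a classical statevector calculation. The sketch below uses Z_8 with hidden subgroup H = {0, 4} and an arbitrary example function, with numpy standing in for the quantum device; the sampled indices fall in the orthogonal group H^⊥ = {0, 2, 4, 6}, from which the generator of H would then be recovered classically.

import numpy as np

N = 8
f = [7, 2, 9, 5, 7, 2, 9, 5]                         # f(i) = f(i + 4 mod 8), so H = {0, 4}
omega = np.exp(2j * np.pi / N)
F = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)   # QFT over Z_8

rng = np.random.default_rng(0)
samples = []
for _ in range(20):
    # Steps 1-2: uniform superposition queried through the oracle, then measure the value register.
    value = rng.choice(f)                            # a random outcome f(c_0)
    support = [i for i in range(N) if f[i] == value] # post-measurement coset state |c_0 + H>
    psi = np.zeros(N, dtype=complex)
    psi[support] = 1.0 / np.sqrt(len(support))
    # Step 3: QFT over the group.  Step 4: measure the index register.
    probs = np.abs(F @ psi) ** 2
    samples.append(int(rng.choice(N, p=probs)))

print("sampled elements (a subset of H_perp):", sorted(set(samples)))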
If we are given the time series data that could be either generated by a function with a period r or a function in Simon's problem, then we do not know which quantum circuit should be applied, because the quantum Fourier transform for period finding is __2^n while it is __2^n for solving Simon's problem (see Appendix <ref> for a detailed introduction). In practice, the time series data generally does not come with a pre-defined group structure that is a priori known. If we randomly assign a group structure G to the set of indices of the data sequence, it is highly possible that the patterns in the data sequence f(i) cannot be generated by any hidden subgroup of G and the algorithm simply fails (see Appendix <ref> for an example). Only when the generating function is promised to have a subgroup H with respect to a given group G, and more importantly when the group G must be known to us, can the standard quantum algorithm for HSP be directly used to compress the data. This limits the applicability of using the quantum hidden subgroup algorithm to compress real-world data. To circumvent the above limitation, one way is to modify the hidden subgroup algorithm and make it capable of searching for a suitable group structure to which the compression task admits a hidden subgroup. This motivates us to consider variational quantum circuits that can automatically adjust themselves to solve the problem upon changing their parameters, e.g. a transition from Simon's algorithm to period finding algorithm can be achieved by tuning its parameters. More generally, we can compress more types of time series data if we search over different types of group structure and different ways (isomorphisms) of assigning the group structure over the set of indices. Indeed, the time series data usually does not come with a specified group structure and the compression algorithm should determine which—if any—hidden subgroup symmetry is present. This motivates us to define the generalized hidden subgroup compression problem as follows. Let f be the generating function corresponding to the time series data {f(0),f(1),...,f(2^n-1)} and 𝒬_f a database that stores this time series data. We say that 𝒬_f has a hidden subgroup structure, if there exists an efficient group structure G=({0,1}^n,*) defined on the indices {i} and a subgroup H< G, such that f(𝐢 * 𝐡) = f(𝐢) for any i ∈{0,1,...,2^n-1} and 𝐡∈ H, and whenever 𝐠 is not an element in H, we have f(𝐢) ≠ f(𝐢*𝐠). The generalized hidden subgroup compression is to compress the time series data by finding the group operation * as well as the hidden subgroup H, by querying the database 𝒬_f, with a promise that such a group structure G is assigned efficiently by bit-permutations (see Definition <ref>). It is essential to emphasize that the solution to this generalized hidden subgroup compression may not be unique. Since our goal is to compress the data, we do not necessitate the algorithm to identify all possible solutions; discovering just one should suffice. For instance, consider the time series {0, 1, 2, 3, 0, 1, 2, 3}. This series exhibits multiple hidden subgroup structures, such as the hidden subgroup {0,4 } < ℤ_8. Additionally, it contains the hidden subgroup {(0,0,0),(1, 0, 0)} < ℤ_2^3. In the following section, we summarize our main results on solving this generalized hidden subgroup compression problem via variational quantum circuits. § MAIN RESULTS Our main result is a variational quantum algorithm solving the HSP without a priori knowledge about the group G. 
This is achieved by replacing the quantum Fourier transform in the hidden subgroup algorithm with a parameterized circuit, which can recover QFT over any group G upon a suitable choice of its parameters. We then insert this parametrized hidden subgroup algorithm into an autoencoder structure. An autoencoder has a bottleneck such that the data is compressed through the bottleneck by the encoder and then restored by the decoder, ensuring the compression of the encoder is reversible. During the training, there is an automatic search over suitable parameters such that the output of the decoder is as similar as possible to the input. After the training, the decoder can be removed. We first introduce how to design a parametrized circuit to implement QFT over any group G in Sec.<ref> and Sec.<ref>, and then show the parametrized circuit for HSP in Sec.<ref> and how the autoencoder is constructed to find the correct parameter configuration automatically in Sec. <ref>, which solves the generalized hidden subgroup compression problem. §.§ Parameterized quantum circuits for quantum Fourier transform over different types of groups. In this subsection, we present a parametrized quantum circuit that is capable of implementing the quantum Fourier transform over any type of Abelian group, with suitable choices of the parameters. We will begin with examples and then present the general architecture. Consider the groups G≅_8, _2×_4, _4 ×_2, or _2^3 (detailed definition of G is in Appendix <ref>). As proved in Appendix <ref>, the quantum circuit for the Fourier transform over _8 is @C=.3em @R=0em @!R i_1 2__8 H R_2 R_3 i_2 __8 .3em0em=.3em0em -1 H R_2 i_3 __8 -2 -1 H , where H is the Hadamard gate, and R_k = [ 1 0; 0 e^2 π/2^k ] is a phase rotation gate. The circuit for quantum Fourier transform over _4 ×_2 is @C=.4em @R=0em @!R i_1 2__4 ×_2 H R_2 i_2 __4 ×_2 .3em0em=.3em0em -1 H i_3 __4 ×_2 H . Note that we can also have the quantum Fourier transform over _2 ×_4, which is @C=.4em @R=0em @!R i_1 2__2 ×_4 H i_2 __2 ×_4 .3em0em=.3em0em H R_2 i_3 __2 ×_4 -1 H . And finally, the quantum circuit for the quantum Fourier transform over _2^3 is @C=.5em @R=0em @!R i_1 2__2^3 H i_2 __2^3 .3em0em=.3em0em H i_3 __2^3 H . Then it is clear that the quantum circuit of the Fourier transform over _4 ×_2, _2 ×_4 and _2^3 can be obtained by shutting down certain controlled phase rotation gates R_k in the QFT circuit of _8, followed by some swap operations among the qubits. In fact, three parameters will be sufficient to control the circuit. Denote the parameterized swap operation over all the three qubits as _θ⃗ = _12^θ_1 _23^θ_2 , where _ij^θ is the unitary that swaps qubit i and qubit j when θ =1, and it does nothing when θ = 0. Therefore, the following circuit @C=.3em @R=0em @!R 2_θ⃗ H R_2^θ_1 R_3^θ_3 2_θ⃗ _θ⃗ .1em0em=.1em0em -1 H R_2^θ_2 _θ⃗ _θ⃗ -2 -1 H _θ⃗ can implement the quantum Fourier transform over all considered types of groups by suitable parameters. With θ_1 = θ_2 = θ_3 = 1, it implements the quantum Fourier transform over _8; with θ_1=1, θ_2 =θ_3 = 0, it implements QFT over _4×_2; with θ_1 = θ_3 =0, θ_2 = 1, it implements QFT over _2 ×_4; and finally, with θ_1=θ_2=θ_3=0, it implements the QFT over _2^3. For simplicity, in the main result part we use parameter θ_i to represent the controlling parameter, but in the numerical simulation, θ_i will be replaced with sin^2(θ_i) to limit the value of the switch to be in the range of [0,1]. 
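The switch gates are only specified at the endpoint values θ = 0 and θ = 1. One possible continuous, unitary interpolation, together with the sin² squashing mentioned above, is sketched below; the eigenvalue-power construction of SWAP^θ is an assumption made here for concreteness rather than a prescription taken from the paper.

import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def R_k(k, theta):
    """Controlled phase rotation R_k raised to the power theta (two-qubit matrix)."""
    return np.diag([1, 1, 1, np.exp(2j * np.pi * theta / 2 ** k)])

def swap_theta(theta):
    """SWAP^theta: identity at theta = 0, the SWAP gate at theta = 1, unitary in between."""
    P_sym = (np.eye(4) + SWAP) / 2
    P_anti = (np.eye(4) - SWAP) / 2
    return P_sym + np.exp(1j * np.pi * theta) * P_anti

squash = lambda x: np.sin(x) ** 2          # maps an unconstrained parameter into [0, 1]

print(np.allclose(swap_theta(0.0), np.eye(4)), np.allclose(swap_theta(1.0), SWAP))
print(np.allclose(R_k(2, 0.0), np.eye(4)))
U = swap_theta(squash(0.7)) @ R_k(3, squash(0.7))          # an intermediate switch setting
print(np.allclose(U.conj().T @ U, np.eye(4)))              # still unitary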
Generalizing the above example, we have the following theorem, stating that the parameterized quantum circuit can implement the quantum Fourier transform over any type of Abelian group. The parametrized quantum circuit of n-qubits constructed as follows can implement the quantum Fourier transform over any type of Abelian group _2^m_1×_2^m_2×...×_2^m_q with suitable choices of parameters, where m_1, m_2,...,m_q form an integer partition of n. The circuit is constructed by first applying the Hadamard gate H to every qubit and then consecutively applying V_k^θ⃗:= (R_n-k+1^n → k)^θ_k,n(R_n-k^n-1 → k)^θ_k,n-1...(R_2^k+1 → k)^θ_k,k+1 H_(k) for k=1,...,n-1, where θ_i,j's are the tuning parameters and where R_m^i→ j is the phase rotation operator on the j-th qubit controlled by the i-th qubit. Finally, we apply the parametrized SWAP gates _θ⃗ = _n-1,n^θ_n-1,n _n-2,n-1^θ_n-2,n-1 ... _1,2^θ_1,2. We will denote these parametrized circuits for quantum Fourier transform over different types of groups as _θ⃗, _θ⃗ = _θ⃗ V_n-1^θ⃗ V_n-2^θ⃗ ... V_1^θ⃗ , which has n(n-1)/2 parameters. It corresponds to the following quantum circuit @C=.3em @R=0.2em @!R 4_θ⃗ 4 V_1^θ⃗ 4_θ⃗ _θ⃗ V_1^θ⃗ 3 V_2^θ⃗ _θ⃗ 1em⋮ _θ⃗ 1em⋮ .1em0em=.1em0em 1em⋮ V_1^θ⃗ V_2^θ⃗ ⋯ _θ⃗ 1em⋮ _θ⃗ V_1^θ⃗ V_2^θ⃗ 1 V_n-1^θ⃗ _θ⃗ _θ⃗ V_1^θ⃗ V_2^θ⃗ V_n-1^θ⃗ _θ⃗ . To prove the theorem, it is direct to verify that the implementation of the QFT over _2^m_1×_2^m_2×...×_2^m_q, for t=0,1,..., q-1 can be achieved with parameters θ_k,k+1 = θ_k,k+2= ...= θ_k, s(t+1) = 1 , for k= s(t)+1, s(t)+2,...,s(t+1)-1, where s(t) := ∑_j=1^t m_j and s(0):=0, and we set all other parameters zero. In particular, when the parameters θ_i,j's are all zeros, it performs the QFT over _2^n and when the parameters θ_i,j's are all ones, it performs the QFT over _2^n. More importantly, if by training the circuits we get a solution of the parameters that are 0's and 1's, then the group type can be correspondingly determined. §.§ Parameterized quantum circuits for QFT over different isomorphisms of the same group. Searching over groups of the same type. Searching over all types of groups is not sufficient to solve the generalized hidden subgroup compression, because there are multiple ways of assigning this group structure on the set of indices (examples are provided in Appendix <ref>). We have to search over all kinds of ways (isomorphisms) to assign a group. This requires us to design a parametrized quantum circuit that can implement quantum Fourier transforms over different groups of the same type, which will be presented in this subsection. We start with a simple example and then give a full characterization. To illuminate the difference between the quantum Fourier transform over different groups that are isomorphic to each other, consider two groups G and G' that are both isomorphic to _8, with i = τ(i_1i_2i_3) = 2^2 i_1 + 2 i_2+ i_3 being the isomorphism from i ∈_8 to bit strings i_1i_2i_3 ∈ G and i=τ'(i_1i_2i_3)= 2^2 i_3 + 2 i_2 +i_1 being the isomorphism between _8 and G'. As proved in Appendix <ref>, the quantum circuit for Fourier transform over G is @C=.3em @R=0em @!R i_1 2_G H R_2 R_3 i_2 _G .3em0em=.3em0em -1 H R_2 i_3 _G -2 -1 H . Let the π be a bit-permutation acting on the digits of the bit strings such that π(i_1i_2i_3)=i_3i_2i_1. Then it gives the group isomorphism from G to G'. At the circuit level, π induces a unitary U_π = SWAP_13 that transforms elements in G to elements in G' by swapping the first and the third qubits. 
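The parameter assignment used in the proof translates directly into code. The helper below (with names of our own choosing) lists which switches θ_{k,j} are set to one for a given integer partition of n and reproduces the four three-qubit cases discussed earlier.

def switch_settings(partition):
    """Switches theta_{k, j} set to 1 for the QFT over Z_{2^m_1} x ... x Z_{2^m_q}."""
    s = [0]
    for m in partition:
        s.append(s[-1] + m)                         # s(t) = m_1 + ... + m_t
    on = []
    for t in range(len(partition)):
        for k in range(s[t] + 1, s[t + 1]):         # k = s(t)+1, ..., s(t+1)-1
            on += [(k, j) for j in range(k + 1, s[t + 1] + 1)]
    return on

for p in [(3,), (2, 1), (1, 2), (1, 1, 1)]:
    print(p, "->", switch_settings(p))
# (3,)       -> [(1, 2), (1, 3), (2, 3)]   QFT over Z_8       (all switches on)
# (2, 1)     -> [(1, 2)]                   QFT over Z_4 x Z_2
# (1, 2)     -> [(2, 3)]                   QFT over Z_2 x Z_4
# (1, 1, 1)  -> []                         QFT over Z_2^3     (all switches off)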
Then the Fourier transform over G' can be implemented by _G' = _13 _G _13 , which implies the following quantum circuit, @C=.3em @R=0em @!R i_1 2_G' H R_2 R_3 i_2 _G' .3em0em=.3em0em -1 H R_2 i_3 _G' -2 -1 H . Compared to the QFT over G, the only difference is the permutation of qubits before and after the same phase rotation gates. Now, we give a full characterization of the quantum circuits that implement quantum Fourier transforms over different groups of the same type. Let us fix G_0 as the canonical way of assigning the group _2^m_1×_2^m_2× ... ×_2^m_q. Let σ be an isomorphism from G_0 to a different group G_σ, σ : G_0 → G_σ i_1 i_2...i_n ↦ σ(i_1 i_2...i_n) . In fact, the isomorphism induces a unitary U_σ that maps the basis {|i_1i_2...i_n⟩} to itself via U_σ|i_1i_2...i_n⟩ = |σ(i_1 i_2 ...i_n)⟩ , for all bit strings i_1i_2...i_n. This unitary is well defined since σ is a one-to-one correspondence, meaning that U_σ^-1 U_σ = I and therefore U_σ^-1 = U_σ^†. Then it is easy to verify the following proposition, The quantum Fourier transform over group G_σ≅ G_0, is unitarily conjugate to the quantum Fourier transform over G_0, i.e. _G_σ = U_σ^† _G_0 U_σ , where σ is the isomorphism from G_0 to G_σ and U_σ is the associated unitary defined in Eq.(<ref>). As discussed in Sec.<ref>, there are 2^n ! ways of assigning any given type of group; because an isomorphism can be uniquely specified by a permutation of the 2^n elements of the bit strings of length n. And there are 2^n ! number of such permutations. Mathematically, denote S_2^n as the permutation group of the 2^n bit strings. Then any element of this permutation group corresponds to an isomorphism. Suppose θ⃗ parametrizes the permutation group S_2^n via a function σ, meaning that for any σ_0 ∈ S_2^n, we have a choice of parameters θ_0 such that σ_0 = σ (θ_0). Then we can parametrize the unitary U_σ_0 accordingly and get a parameterized circuit U_θ⃗ that can implement any U_σ. Combining the results of parametrized quantum circuits for QFT over different types of groups shown in Theorem <ref>, we have the following circuits which we denote as that can implement the quantum Fourier transform over any Abelian group G=({0,1}^n,*): @C=.3em @R=0.2em @!R 4UQFT_θ⃗ 4U_θ⃗^† 4QFT_θ⃗ 4U_θ⃗ UQFT_θ⃗ U_θ⃗^† QFT_θ⃗ U_θ⃗ 1em⋮ _θ⃗ 1em⋮ .1em0em=.1em0em 1em⋮ U_θ⃗^† 1em⋮ QFT_θ⃗ 1em⋮ U_θ⃗ 1em⋮ UQFT_θ⃗ U_θ⃗^† QFT_θ⃗ U_θ⃗ UQFT_θ⃗ U_θ⃗^† QFT_θ⃗ U_θ⃗ . However, to parametrize all the 2^n! unitaries, we have to use an exponential number of parameters; as a simple estimation, it costs O(log(2^n!))=O(n 2^n) binary numbers to label one permutation. Therefore, it is computationally intractable to have a parametrized quantum circuit that is capable of implementing all the 2^n! different quantum Fourier transforms. That is the reason why we consider the group structure that can be efficiently assigned by bit-permutations in Sec. <ref>. Searching over all bit-permutations. The family of groups that we aim to cover, is generated by a subset of the permutations of n bits which is commonly referred to as bit-permutations, instead of all the permutations of all the 2^n bit strings. This is inspired by Eq.(<ref>), where the QFT is obtained after applying a SWAP gate on the qubits, which has a simpler structure to implement. Let π∈ S_n be a bit-permutation of the digits in a bit-string of length n, where S_n is the symmetric group of n elements. Acting with this permutation on the group G=({0,1}^n, *) and we get a different group G_π =({0,1}^n, *') that is isomorphic to G. 
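The proposition can be checked numerically for the _8 example above, with the bit-reversal map τ' standing in for the isomorphism σ. The sketch below builds the induced unitary as a permutation matrix, builds _G' directly from its characters, and verifies the conjugation relation; the index conventions used here are our own assumptions.

```python
import numpy as np

N, n = 8, 3
omega = np.exp(2j * np.pi / N)
F = np.array([[omega ** (x * y) for y in range(N)] for x in range(N)]) / np.sqrt(N)

def rev(x):
    # bit reversal of a 3-bit index: tau'(i1 i2 i3) = 4*i3 + 2*i2 + i1
    return int(format(x, f"0{n}b")[::-1], 2)

# unitary induced by the bit-permutation (swapping the first and third qubit)
P = np.zeros((N, N))
for x in range(N):
    P[rev(x), x] = 1

# Fourier transform over G', built directly from its characters chi'_j(i) = omega^(tau'(j) tau'(i))
F_prime = np.array([[omega ** (rev(x) * rev(y)) for y in range(N)]
                    for x in range(N)]) / np.sqrt(N)

# proposition: QFT_{G'} = U_sigma^dagger QFT_G U_sigma  (P is real and self-inverse)
assert np.allclose(F_prime, P.T @ F @ P)
print("conjugation relation verified for the Z_8 bit-reversal example")
```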
On the circuit level, the isomorphism π induces a unitary transformation, U_π|i_1i_2...i_n⟩ = |i_π(1)i_π(2)...i_π(n)⟩ , for any bit string i_1i_2...i_n. A good property of U_π is that it can be implemented by SWAP gates only. Using n(n-1)/2 parameters, we can implement U_π for any π∈ S_n. We will prove the above proposition by construction and the protocol is similar to the bubble sorting algorithm. First, we define the following parameterized gate by SWAP gates between nearest neighbors: _λ⃗_(k) = _k,k+1^λ_k_k-1,k^λ_k-1 ... _1,2^λ_1 , where λ⃗_(k) = (λ_1, λ_2,...,λ_k) encodes the k parameters. After applying a given permutation π∈ S_n, the i-th bit is permuted to the π(i)-th bit. To begin with, we map the π^-1(n)-th bit to the position n via the parametrized _λ⃗_(n-1) with parameters λ_π^-1(n), λ_π^-1(n)+1, ... , λ_n-2, λ_n-1 being 1 and all other parameters being 0. After this operation, we successfully permute the π^-1(n)-th bit to the n-th bit. Suppose these SWAP gates induces a permutation σ on the first n-1 bits, then after applying _λ_(n-1), the first n-1 bits become i_σ(1)i_σ(2)...i_σ(n-1) which we needed to transform to i_π(1)i_π(2)...i_π(n-1). Next, we apply _λ⃗_(n-2) with a similar strategy that maps the bit now in position σ (π^-1 (n-1)) to the (n-1)-th bit. Repeating the process until we put i_π^-1(2) at the second position, resulting in the outcome sequence of i_π(1)i_π(2)...i_π(n-1) we thus complete the implementation of the permutation π. The overall process can be implemented with suitable parameters by the following parameterized gate W_θ⃗, W_θ = _λ⃗_(1) _λ⃗_(2) ... _λ⃗_(n-1) , where θ⃗ = {λ⃗_(1), λ⃗_(2), ..., λ⃗_(n-1)} encodes the total n(n-1)/2 parameters. For example, the permutation π∈ S_4 such that π(i_1i_2i_3i_4) = i_4 i_3 i_1 i_2 can be implemented by the following circuit @C=.5em @R=0em @!R i_1 3W_θ⃗ i_4 i_2 W_θ⃗ i_3 i_3 W_θ⃗ .3em0em=.3em0em i_1 i_4 W_θ⃗ i_2 , with the parameter θ⃗ = {(1),(1,1),(0,1,1)}. Now we have the following parametrized quantum circuit that is able to implement quantum Fourier transform over any types of groups and over any bit-permutations of the same group, @C=.3em @R=0.2em @!R 4WQFT_θ⃗ 4W_θ⃗^† 4QFT_θ⃗ 4W_θ⃗ WQFT_θ⃗ W_θ⃗^† QFT_θ⃗ W_θ⃗ 1em⋮ _θ⃗ 1em⋮ .1em0em=.1em0em 1em⋮ W_θ⃗^† 1em⋮ QFT_θ⃗ 1em⋮ W_θ⃗ 1em⋮ WQFT_θ⃗ W_θ⃗^† QFT_θ⃗ W_θ⃗ WQFT_θ⃗ W_θ⃗^† QFT_θ⃗ W_θ⃗ . Obtaining group structure from parameters. Suppose for a valid setting of the parameters θ⃗, the parametrized quantum circuits implement the quantum Fourier transform over G_σ. Then we are able to find out the group G_σ from the values of the parameters θ⃗. This can be accomplished in two steps. First, we find out the type of the group by the parameters in , and next we find the permutations from the parameters in W_θ⃗. With this group structure at hand, we are able to compute the group operation between any two group elements and are therefore able to find out the hidden subgroup. More importantly, all these calculations can be accomplished by classical algorithms on a classical device. These parameters can be generalized from binary values 0,1 to real values in the interval [0,1]. Extending to the real valued regime is necessary for variational quantum algorithm, because we need gradient descent to find the appropriate parameters. The detail of this real-valued extension is discussed in Sec.<ref>. §.§ Parameterized quantum circuits for hidden subgroup problem with arbitrary group structure. 
Using the parameterized quantum circuit for QFT, we can easily construct the parameterized quantum circuit for HSP. For any group G, there exists a configuration of the parameters, using which this quantum circuit can generate samples of elements in the orthogonal group H^⊥. And we can use the sampled data from this quantum circuit to calculate the hidden subgroup H using classical post-processing. The circuit architecture of HSP_θ⃗ is summarized in the following proposition. Upon suitable configuration of the parameters θ, the following parametrized quantum circuits can solve the HSP over any groups that is assigned efficiently by bit-permutations (see Sec.<ref>), @C=.2em @R=0.2em @!R |i_1⟩ H 6U_f 3W_θ⃗ 3_θ⃗ 3W^†_θ j_1 |i_2⟩ H U_f W_θ⃗ _θ⃗ W^†_θ j_2 ⋯ H U_f W_θ⃗ _θ⃗ W^†_θ ⋯ |i_n⟩ H U_f W_θ⃗ _θ⃗ W^†_θ j_n |0⟩ U_f ⋯ U_f |0⟩ U_f where W_θ⃗ is the parameterized unitary corresponding to bit-permutations. We denote the above parameterized quantum circuit as HSP_θ⃗. With a suitable choice of parameters, it will output samples j_1j_2...j_n ∈ H^⊥ upon each run of the circuit. After many repetitions, we can calculate the hidden subgroup from the samples, using the algorithm shown in Appendix <ref>. §.§ Quantum autoencoder for hidden subgroup compression. Finding suitable parameters using quantum autoencoder. In the previous subsections, we have successfully constructed the parameterized quantum circuit that can implement the quantum Fourier transform over different groups, whenever suitable parameters are taken. These two ingredients are then used to construct the parameterized quantum circuit HSP_θ⃗ that solves the HSP for different groups. But how do we get these parameters θ⃗ in the first palace? Here we adopt the philosophy in variational quantum algorithms where we define a cost function and use gradient descent to find the suitable parameters. To this end, we use the architecture of quantum autoencoders. An autoencoder can be decomposed into two parts, the encoder ℰ and the decoder 𝒟. In general, the bottleneck, which filters sufficient features, in a quantum autoencoder can pass quantum states to the decoder. Since we want compression of classical data onto classical data, the bottleneck should be ensured as classical in order to get the classical compressed data. Thus we assume that the encoder and decoder are connected by a classical bottleneck upon which we make a measurement to get the compressed data. This is illustrated as follows: @C=.3em @R=0.2em @!R 3ℰ / 3𝒟 ℰ 𝒟 1em⋮ ℰ 1em⋮ 1em⋮ 𝒟 1em⋮ ℰ 𝒟 . Note that here the bottleneck can in general have more than one bit. The encoder extracts a classical feature of the time series, and the feature is regarded as the compressed data. Through the bottleneck, these compressed data are then passed to the decoder where the original time series can be reconstructed. Given that quantum circuits have been shown to be exponentially faster than the best known classical algorithm in finding hidden subgroups in input bits, it is plausible to expect a corresponding exponential advantage over classical neural nets in generalized hidden subgroup data compression. Cost function. How to define a cost function that measures the performance of this variational autoencoder? Suppose the reconstructed data is {f̂(0),f̂(1),f̂(2),...,f̂(N-1)}, then the distance between the reconstructed data and the original data can be used as the cost function. 
Here we use the Euclidean distance, obtaining the following expression of the cost function, C(θ) = ∑_i=1[ f(i) - f̂(i)]^2 , where the dependence on the parameters θ⃗ is implicit in f̂. Note that there are exponential terms in this cost function. To make the calculation efficient, we can randomly take polynomial samples to estimate the genuine cost. From now on, we assume that C(θ⃗) is estimated in this way. Minimizing this cost function will give us a quantum circuit that accomplishes the hidden subgroup compression. An important observation is that both the cost function and the parameters take integer values; this makes the problem a combinatorial one where gradient descent cannot be directly applied. Then how can we train the quantum circuit and find the suitable parameters? Here we consider the generalization making both the parameters and cost function real numbers, which is discussed as follows. Real valued parameters and cost function. To avoid ambiguity, we use γ⃗ to denote the generalized, real-valued parameter configuration, and keep using θ⃗ to denote the binary valued parameters configuration. And we suppose that there are q parameters γ_i ∈ [0,1] in each configuration γ⃗. When parameters of the quantum circuit become real numbers γ⃗, strictly speaking, the circuit is not performing quantum Fourier transform. In this case, how should we obtain the group structure and calculate the hidden subgroup? To solve this problem, we have to discretize the real-valued parameters γ⃗ back to the binary-valued ones. We randomly generate binary valued parameters configuration θ⃗ according to the parameters γ_i; specifically, we randomly generate θ_i ∈{0,1} according to the probability distribution ({θ_i = 1}) = γ_i ; ({θ_i = 0}) = 1- γ_i . In this way, the probability of getting a specific parameter configuration θ⃗=(b_1, b_2,...., b_q) is (θ⃗) = ∏_i = 1^q ({θ_i = b_i}) . For each binary configuration θ⃗ generated in this way, we can then use the classical algorithm to obtain the compressed data. In general, this compressed data cannot exactly reconstruct the original data, at the intermediate step in the training process. This gives us a positive, integer-valued cost C(θ⃗). Repeating this process many times, we can efficiently obtain the estimation of the average value of the cost function, 𝔼_γ[C] = ∑_θ⃗∈{0,1}^q(θ⃗) C(θ⃗) . This average cost takes real values, and it can be minimized through gradient descent. Minimizing this average cost function we will get the appropriate parameters that solve the HSP. In the following, we will introduce the detailed structure of the encoder and the decoder, followed by a discussion of how parameters are updated to achieve successful compression. Detailed structure of the encoder. Different from standard autoencoders, where the encoder and the decoder are composed of connected nodes with indefinite weights to be learned in the training process, we impose a special structure to the circuit ansatz to exploit the computational power of quantum circuits in identifying Abelian hidden sub-groups. This leads to a well-structured encoder (and decoder) presented as follows. We use a quantum-classical hybrid structure for the encoder, where the quantum part is used to speed up the Fourier transform and the classical part is used to post-process the data sampled from the quantum circuit to calculate the hidden subgroup. The procedure is summarized in Algorithm <ref> and explained as follows. 
The key idea is to apply the parameterized quantum circuit HSP_γ⃗ many times and then use a classical algorithm to find out the hidden subgroup. Specifically, we first apply the parameterized quantum circuits HSP_γ⃗ for HSP with real valued parameters γ⃗, and obtain a bit string 𝐣=j_1j_2...j_n. Repeat this process K times and then we have K such bit strings {𝐣^(k)}_k=1^K. These bit strings would belong to the orthogonal group H^⊥ if the parameters γ⃗ correctly configure the parameterized circuit HSP_γ⃗ exactly as the circuit solving the HSP of f, which is not the case as long as γ⃗ is not binary numbers. Then we randomly generate a configuration of binary parameters θ⃗ according to the probability distribution defined in Eq.(<ref>). This probability distribution does not guarantee that the parameter configuration generated gives a quantum circuit that solves a hidden subgroup. For example, θ_3 = 1 while θ_1=0 in Eq.(<ref>) does not correspond to any quantum Fourier transform. So we need to check the validity of the parameters θ⃗ generated. For each instance of the parameters, we check if the parameterized circuit HSP_γ⃗ corresponds to a quantum circuit that solves the HSP for some certain group G. If a legitimate parameter configuration θ⃗ is sampled, we then use the classical algorithm (see Appendix <ref>) to solve the corresponding group structure G. With the group structure G specified, we can find out which congruence equations (see Appendix <ref>, Eq.(<ref>)) are used to find out the generator 𝐬 = s_1s_2 ⋯ s_n of the subgroup H. Since the linear congruence equation systems always have solutions, we can always get such a subgroup H<G. However, H is not guaranteed to be the real hidden subgroup of f at this intermediate step. We have to optimize the parameters to increase the probability of finding the correct hidden subgroup of f. This optimization is carried out by passing the results to the decoder, calculating the average cost function and then using gradient descent. In fact, the parameter θ⃗ can be used to describe the group structure. We then encode θ⃗, the generator of H, and the data points (c,f(c)) for c∈ G/H into a classical message σ_classical; this classical message is regarded as the compressed data. Pre-training of the encoder. In practical training, we discovered that finding the generator of the hidden subgroup by directly applying Euclid's algorithm to solve linear congruences (e.g. Eq.(<ref>)) is ineffective. Instead, we employ a pre-training method to make the autoencoder functional. During intermediate training steps, the correct Fourier transformation remains undiscovered, and thus many incorrect bit strings are sampled (all bit strings are possible in general). This results in the congruence equations having only the trivial solution, i.e., all bits being zero. Consequently, the classical post-processing continuously outputs the wrong generator (e.g. the secret key in Simon's algorithm) of the hidden subgroup as s=0. With this incorrect generator, the cost function remains large, and gradient descent often fails to improve the situation, leading to a barren plateau phenomenon with the output stuck at s=0. To circumvent such barren plateaus, we introduce a pre-training technique to assist the encoder in identifying a non-trivial generator s, although this s is not guaranteed to be the correct hidden subgroup's generator. The autoencoder training is thus split into two stages. First, the encoder is pre-trained to output a non-trivial s (i.e., s≠0). 
In the second step, the hidden subgroup's generator s is sent to the decoder to reconstruct the data sequence and be evaluated by the overall cost function shown in Eq. (<ref>). The autoencoder is then trained as a whole to obtain the correct generator s. During each iteration in the overall training, we apply the pre-training technique described below to ensure the encoder outputs a non-trivial generator s≠0. To help the encoder output a non-trivial generator s, we incorporate a new parameter set S⃗, where each S_i takes values in the interval (-0.5,1.5). Each S_i controls the probability of the i-th digit s_i=1 of the generator 𝐬=s_1s_2… s_n as follows: P({s_i=1}) = 0, if S_i<0 1-S_i , if 0⩽ S_i<1 1 if S_i⩾ 1 . The probability for s_i=0 is determined by the total probability equation P({s_i=0}) + P({s_i=1}) =1. Consequently, the probability P(𝐬) of the bit string 𝐬=s_1s_2...s_n can be expressed as a function of the real parameter set S⃗. Now we train both θ⃗ and S⃗ to find a non-trivial generator s. Take Simons' problem as an example (see Appendix <ref> for a detailed introduction to Simon's algorithm). Suppose that the k-th sampled bit string after applying the intermediate HSP algorithm HSP_θ⃗ is j^(k), then we have the following linear congruence relations to solve for the generator s (proof in Appendix <ref>), j^(1)_1 s_1 + j_2^(1) s_2 + ⋯ + j_n^(1) s_n = 0 2 j^(2)_1 s_1 + j_2^(2) s_2 + ⋯ + j_n^(2) s_n = 0 2 ⋮ j^(k)_1 s_1 + j_2^(k) s_2 + ⋯ + j_n^(k) s_n = 0 2 . When 𝐬 is not a solution to the linear congruence system, we can use C_L(𝐬) to quantify the discrepancy and therefore the cost, C_L(𝐬) = ∑_i=1^n (j^(k)_is_i 2) . Suppose the probability distribution of output bit string 𝐣=j_1j_2...j_n is P(𝐣), then we define a cost function C_E for the pretraining of the encoder, C_E = ∑_𝐬∈{0,1}^n P(𝐬) [∑_k =1P(𝐣^(k))C_L(𝐬)]+C(S⃗), where C(S⃗) = ∑_i sin^2 (π S_i) is a penalty term introduced to push each S_i takes value 0 or 1. Minimizing this pretraining cost function C_E and we can get a non-trivial generator s. When the cost value C_E reaches 0, it means we find a plausible generator s, and then it is sent to the decoder for further processing and training. Compression ratio of the encoder. Now let us calculate the size of the compressed data σ_classical and its compression ratio. Note that we do not try to optimize the compression ratio but to demonstrate how the variational compression by hidden subgroup algorithm works. Essentially the compressed data size as we analyzed here can be further reduced. We leave such a possibility for future investigations. According to our circuit architecture shown in Theorem <ref> and Theorem <ref>, it takes n(n-1) bits to specify the parameter configuration θ⃗. Moreover, n bits are needed to specify the generator of H and |G/H| (m+n) bits are used to store the data points (c,f(c)) for each coset c∈ G/H, where m is the number of bits to represent f(c). In total, it takes no more than n^2+|G/H| (m+n) bits for the compressed data, while it takes |G| m bits to store the full sequential data. Since |G|=2^n, we get a compression ratio κ = n^2/2^n + m+n/|H| , which is smaller than 1 for sufficiently large n and |H|. For example, let us consider the subset of all balanced binary sequences of length 2^n that have hidden subgroups over some group structure assigned efficiently by bit-permutations (recall from Sec.<ref> the definition of efficiently-assigned group structure). 
Now the subgroup H has cardinality 2^n-1, which gives us a compression ratio κ = 2 n ^2 + n/2^n. Note that this does not imply that any balanced binary sequence can be compressed by this amount (see the discussion below Proposition. <ref>). Detailed structure of the decoder. The encoder outputs a description of the subgroup H, the group structure G and the values (𝐜,f(𝐜)) for the quotient group 𝐜∈ G/H. Using this information, the decoder can reconstruct the original time series by, i) creating a list of length N, and ii) for each 𝐡∈ H and each 𝐜∈ G/H, store the value f(𝐜) into the position τ(𝐜*𝐡) of the list, where * is the group operation of G that is obtained from the compressed data. A more efficient decoder does not generate the full time series but it outputs f(i) upon a query on i. Full structure of the autoencoder and the description of the training process. Now, we can combine the encoder and the decoder to make an autoencoder that automatically finds the hidden subgroup. Given a parameter configuration γ⃗, we run the encoder once and then pass its output to the decoder to get the reconstructed data. Repeat T=O(n) times and calculate the average cost 𝔼_γ⃗C. Then we slightly change the γ_i to γ_i' and calculate the updated average cost 𝔼_γ⃗'C. Then the partial derivative of the average cost function can be estimated by ∂ 𝔼_γ⃗C /∂ γ_i≈𝔼_γ⃗'C - 𝔼_γ⃗C/γ_i' - γ_i . Consecutively change the value of γ_i and we can calculate the gradient ∇𝔼_γ⃗C = (∂_γ_1 𝔼_γ⃗C,…, ∂_γ_q 𝔼_γ⃗C). If the gradient is smaller than a certain predefined threshold, then the parameter γ⃗ is good enough for the hidden subgroup compression. We just output the classical message from the encoder σ_classical and the problem is solved. Otherwise, we need to update the parameters, according to the following equation γ⃗γ⃗ - β ∇ 𝔼_γ⃗C , where β is the learning rate and a b means assigning the value b to the variable a. The above procedure is summarized in Algorithm <ref> and is numerically tested in Sec.<ref>. An important observation is that this algorithm not only compresses the data; as a by-product, it also solves the hidden subgroup of the generating function f without a priori knowledge of the group structure G. In this sense, Algorithm <ref> is also a variational quantum algorithm solving the generalized HSP. § NUMERICAL RESULTS In this section, we will numerically check how our quantum algorithm performs by using a classical computer to simulate it. First, we will describe how we generate the training data: data strings with a certain symmetry and corresponding group structures. In line with Eq.(<ref>), for the numerical implementation, we can generate the simplest data series based on f(i) = f(j) iff i = j+s, where s is the secret bit string which is a member of a hidden subgroup. In particular, we shall feed our algorithm data that either: (i) is periodic, with a hidden subgroup of _8, (ii) satisfies Simon's symmetry, with a hidden subgroup of _2 ×_2 ×_2, or (iii) has the symmetry of a hidden subgroup of _2×_4. As an example, we now describe the generating process in the 6 qubits case, where the length of the data string should be 2^6/2 = 8. Taking Simon's problem as an example, the data series set could be {[ 0 0 1 1 2 2 3 3,; 0 1 0 1 2 3 2 3,; 0 1 1 0 2 3 3 2,; 0 1 2 3 0 1 2 3,; 0 1 2 3 1 0 3 2,; 0 1 2 3 2 3 0 1,; 0 1 2 3 3 2 1 0 ]} where each row is a time series that we could choose to compress. The (i,j) entry of the table is f_i(t_j) which is binarised in the simulation according to Eq. (<ref>). 
The rows are sorted by the value of s. Consider the first row. In this specific case, the symmetry is the repeating of data in the sense that f_1(1) = f_1(2), f_1(3)=f_1(4), f_1(5)=f_1(6), f_1(7)=f_1(8). In terms of bits, f_1(000) = f_1(000+001), f_1(010)=f_1(010+001), f_1(100)=f_1(100+001), f_1(110)=f_1(110+001), where + denotes bit-wise addition modulo 2 and usually be denoted as ⊕ in this scenario . We could thus find a type of addition operation “+”, which corresponds to the group structure, and secret bit string s (here 001) such that f_1(i)=f_1(j) iff j = i+s. In that case, s is from a hidden subgroup of _2 ×_2 ×_2. The first row is also consistent with s coming from a hidden subgroup of _4 ×_2, such that there is addition modulo 4 on the first two bits and addition modulo 2 on the final bit. There are multiple possible time series for the given hidden subgroup; in this example one can see 4!=24 possibilities. We can randomly choose 19 types of data series to train the circuit and 5 types of data series to test the circuit. In the simulations undertaken here testing the performance on the test data was perfect, since the circuit converged to the correct HSP circuit. Following this approach, we can generate other matrices of training data according to different group structures and secret bit strings. We show in Fig. <ref> and Fig. <ref> how our Algorithm <ref> and Algorithm <ref> perform in terms of compressing the data thus generated, for 6,8 and 10 qubits quantum circuits. Fig. <ref> compares the cost function during training with that of a classical autoencoder. As expected the quantum method manages to learn to compress the time series and the classical autoencoder does not. What needs to be emphasized here is that we treat the task as a compression task, which means we can rank the performance of circuits according to the distance between the input and output data series. Our circuit also works for different tasks, like _8 and _2 ×_2 ×_2. In both of the cases the training successfully converged to the corresponding group structure and sent the expected number through the bottleneck: s for Simons' problem or r for the periodic symmetry case. Compared to the classical autoencoder, we observe an advantage in accuracy, as depicted in Fig.<ref> in these specific case. The classical autoencoder has 8 neurons for input, 8 neurons for output and one 3-bit hidden layer since s is 3 bits; thus it has an 8-3-8 structure. We emphasize that this is an illustrative example chosen to be difficult for classical autoencoders. We do not claim that this HSP quantum autoencoder will in general outperform classical autoencoders. We expect the HSP quantum autoencoder to outperform any classical autoencoder for large scale time-series with HSP symmetries specifically. We extend the size of our quantum circuit to 8 and 10 qubits, generating our data set in a similar way as for the 6 qubits case. Also in the 8 qubits case, our algorithm can reversibly compress different data strings with different secret bit strings and group structures. An example is given in Fig.<ref>. For the 10 qubits case, we only tested the secret bit string s = 10000, due to the time cost of using a classical computer to simulate a quantum one. The compression was successful also for this case, as depicted in Fig.<ref>. Thus, the algorithm successfully compressed the information for both the 8 and 10 qubits cases considered. Suppose there are 2^n indices and each data entry takes m bits to store. 
Then it takes at least 2^n m bits to store this database directly. Our goal is to shrink the database and save storage space by exploiting its symmetries. For example, suppose that a database 𝒬_f stores the values of a function f, such that it returns f(i) upon a query on the index i, and that f obeys Simon's symmetry, f(i) = f(i ⊕ s) for a secret bit string s. Then this database can be reproduced by a smaller database 𝒬' that stores only the values of f on coset representatives: upon a query on i, the compressed database looks up the representative of the coset {i, i ⊕ s} and returns the value stored there. If we are given the secret key s, we can compress the database 𝒬_f directly: we enumerate the cosets and copy one value per coset into the new database, which takes 2^n-1 queries. On the other hand, compressing the database classically without prior knowledge of s also requires O(2^n) queries, simply because the 2^n-1 stored values must be read out at least once. This means that there is no exponential advantage for Simon's problem itself. The situation is more interesting for a balanced database with a general hidden subgroup structure, i.e. f(i) = 0 if i ∈ H and f(i) = 1 otherwise, for some hidden subgroup H<G. Quantum mechanically, we can query the database a polynomial number of times to calculate a generating set of H. We then establish a new database storing f(0) and the values f(c) on the cosets; upon a query on i, we verify whether i lies inside the subgroup H and output f(0) if it does, and f(0) ⊕ 1 otherwise. The open question is whether a classical algorithm can do essentially as well. The general setting is a database of data points containing many duplicated values, where the pattern of the duplication can be complicated; compressing the database reduces the memory cost at the price of a slightly larger computational cost per query. If the duplication pattern is that of a hidden subgroup, the quantum hidden subgroup algorithm can identify it efficiently, and the question is how large the corresponding classical query complexity is. Now let us estimate the scaling of the query complexity. Suppose G = _p_1^m_1× ... ×_p_q^m_q, then the cardinality of G is N = |G| = p_1^m_1 p_2^m_2... p_q^m_q. Since the cyclic group _p_1^m_1 has m_1 + 1 subgroups, there are in total ∏_i=1^q (m_i + 1) subgroups of G of the product form. Then the cardinality of the set of non-trivial subgroups ℋ is |ℋ| = ∏_i=1^q (m_i + 1)-2, which is of O(poly(log |G|)). This means that once the group structure is determined, the number of possible subgroups is small compared to the cardinality of the group. It has been proved <cit.> that the query complexity for a classical algorithm to solve this HSP is, up to constant factors, bounded by min_H ∈ℋmax{log |ℋ|/log(|G|/|H|), √(|G|/|H| log |ℋ|/log(|G|/|H|))} . After identifying the generator of the hidden subgroup, it takes exactly |G|/|H| further queries to fetch the stored values in the database, for both the classical and the quantum algorithm. The query complexity of database compression via the HSP for a classical algorithm is therefore bounded by min_H ∈ℋmax{log |ℋ|/log(|G|/|H|), √(|G|/|H| log |ℋ|/log(|G|/|H|))} + |G|/|H| . Now, if we require the quantum query complexity to be polynomial, then |G|/|H| must be O(poly(n)). However, when |G|/|H| = O(poly(n)), the above bound on the classical query complexity is also polynomial, min_H ∈ℋmax{O(log n)/log O(poly(n)), √(O(poly(n)) O(log n)/log O(poly(n)))} + O(poly(n)) . There cannot, therefore, be an exponential speedup of the quantum algorithm in HSP database compression.
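The following sketch evaluates the classical query bound quoted above for a Simon-like family, |G| = 2^n with a hidden subgroup of index 2. For simplicity the bound is evaluated at the actual hidden subgroup rather than minimised over all candidates, and the subgroup count follows the product formula used in the text; both simplifications, as well as the function names, are assumptions of this illustration.

```python
import math

def subgroup_count(ms):
    # number of product-form subgroups of Z_{p1^m1} x ... x Z_{pq^mq},
    # following the counting used in the text: prod_i (m_i + 1)
    total = 1
    for m in ms:
        total *= (m + 1)
    return total

def classical_compression_queries(G_card, H_card, n_subgroups):
    """Evaluate the classical bound used in the text for a fixed hidden subgroup H:
       max{ log|H_set| / log(|G|/|H|),
            sqrt(|G|/|H| * log|H_set| / log(|G|/|H|)) } + |G|/|H|."""
    index = G_card / H_card
    ratio = math.log(n_subgroups) / math.log(index)
    return max(ratio, math.sqrt(index * ratio)) + index

# Simon-like family: G = Z_2^n, hidden subgroup of index 2 (|H| = 2^(n-1)).
for n in (8, 16, 32, 64):
    G, Hc = 2 ** n, 2 ** (n - 1)
    nontrivial = subgroup_count([1] * n) - 2
    bound = classical_compression_queries(G, Hc, nontrivial)
    print(f"n={n:3d}  |G|/|H|={G // Hc}  classical query bound ~ {bound:.1f}")
# The bound stays polynomial in n whenever |G|/|H| does, which is the observation
# behind the no-exponential-speedup conclusion above.
```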
§ QUANTUM THERMODYNAMICAL ADVANTAGE INDUCED BY QUANTUM COMPUTATIONAL ADVANTAGE IN ENERGY HARVESTING. In this subsection, we discuss the thermodynamic significance of our results on information compression. Quantum advantages for thermodynamical tasks are being sought in coherent accurate information processing <cit.>, in charging time of batteries <cit.>, and in the efficiencies of thermal machines <cit.>. We will argue that the computational advantage of quantum computers results in a thermodynamical advantage in energy harvesting, such that with a quantum computer, it is possible to extract more work from a source in a fixed amount of time. Information can fuel energy harvesting together with heat energy; having the knowledge of the system will enable work extraction that is otherwise impossible. For example, in the context of the famous Szilard engine, if we know whether a single particle gas is in the left chamber or in the right, corresponding to having 1 bit of information, we can extract k_B T ln 2 amount of work by letting the system undergo an isothermal expansion at temperature T, where k_B is the Boltzmann constant. This process is illustrated in Fig. <ref>. Generalizing the example, previous work <cit.> has shown that, the amount of work W extracted from an n-particle Szilard engine is upper bounded by (n-S)k_B T ln 2, where S is the entropy (lack of information) of the system. This result has been generalized to finite-sized and internally correlated systems using the smooth entropy approach <cit.>. Here we use a similar set-up as in Ref. <cit.>, but instead of single molecule gas, we use spins since they are realistic qubit models that are easier to implement <cit.>. Formulating the problem in terms of spins is equivalent to the single molecule gas model from an information-theoretic point of view <cit.>. As depicted in Fig. <ref>, we consider a source that generates pseudo-random spin sequences, where spin up encodes bit 0 and spin down encodes bit 1. Assuming that the marginal distributions of 0 and 1 are uniformly random, then the standard Szilard engine acting on each spin independently cannot extract any work. However, even though each spin is uniformly random, the patterns in the spin sequence can be highly non-trivial and such patterns can be used to extract energy <cit.>. In fact, Bennett <cit.> points out that correlations among spins can be exploited to extract work, even if the marginal distributions of each spin are uniformly random. In the simplest example, if two spins are correlated in such a way that they are pointing to either both up or both down, e.g. 1/2(|00⟩⟨00|+ |11⟩⟨11|), then we can apply a CNOT operation and transform it to 1/2(|0⟩⟨0|+|1⟩⟨1|)⊗|0⟩⟨0|, and can therefore deterministically extract k_B T ln 2 work from the second bit. More generally, we can use a unitary transformation to convert non-uniformly distributed long spin sequences into shorter, uniformly-random spin sequences, with a fixed number of spins appended at the end, as shown in Fig. <ref>. To demonstrate the quantum advantage in energy harvesting, let us consider a work extraction competition for classical and quantum intelligent extractors trying to extract work from a pseudo-random spin source, where the classical extractor can only use classical neural nets or any classical algorithm, but the quantum extractor can access a quantum computer and run our algorithm. 
Specifically, let 𝒫⊂{0,1}^n be the set of all the binary strings with length n that have the same hidden subgroup structure H<G, for a fixed but unknown H and G, as specified in Sec. <ref>. Then for each run of the experiment, the source will, e.g. uniformly randomly, output a sequence in 𝒫. The goal of intelligent extractors is to maximize the work extraction from this sequence within a reasonable amount of time. The competition is divided into two stages. At the first stage, the intelligent extractors are allowed to access the source for a polynomial amount of time and learn the patterns in the sequence generated. In the second stage, within a polynomial amount of time, the extractor will extract work from the source, and the extractor that has the largest average work extraction will win the competition. We argue here that a quantum intelligent extractor will beat its classical counterpart and extract more work, especially when the length of the spin sequence n scales up, turning the quantum computational advantage into a quantum thermodynamical advantage in energy harvesting. Indeed, there is no known polynomial-time classical algorithm that can compress this pseudo-random sequence; otherwise, it will not only imply an efficient classical algorithm for the HSP, but also a polynomial-time statistical witness to differentiate a pseudo-random sequence from a genuinely random sequence, both of which are believed impossible for classical algorithms <cit.>. However, our algorithm gives an efficient way to extract the largest amount of work from this pseudo-random source. In a training process, our variational algorithm can be applied to learn the hidden subgroup structure of the sequences. And after training, we can construct a unitary that maps the pseudo-random sequences of n spins into sequences whose first k spins are uniformly random and the last n-k spins are all pointing up. Then the standard Szilard engine can be applied to the last n-k spins to extract (n-k) k_B T ln 2 amount of work. In this way, the quantum extractor will beat the classical extractor and demonstrate a quantum thermodynamic advantage in energy harvesting. Moreover, using the defining property of free energy as being the maximal extractable work, effectively the quantum agent assigns much higher free energy to the system. § SUMMARY AND OUTLOOK. We present a novel variational quantum algorithm for non-linear information compression, utilizing the hidden subgroup problem (HSP). Our approach involves designing a versatile quantum circuit ansatz that encompasses a broad range of the Abelian HSP scenarios. This ansatz provides multiple options for extracting features from input data, such as identifying the period, which can be selectively passed through a bottleneck or discarded. Furthermore, we leverage the HSP framework to discover functional relations within the data. Although we only demonstrate a simple case in this study, it is worth exploring more complex cases, such as long-term related data, as potential directions for future investigation. In addition to theoretical advancements, we successfully conducted classical simulations of the algorithm using a small number of qubits. These simulations showcased superior performance compared to a classical autoencoder in a simple illustrative example. While these simulations do not establish a general advantage, they solidify the effectiveness of our algorithm and shed light on the types of examples where we anticipate outperforming classical autoencoders. 
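The two-spin example above can be made concrete with a few lines of linear algebra: a CNOT maps the correlated state 1/2(|00⟩⟨00|+|11⟩⟨11|) to a product of a maximally mixed spin and a clean |0⟩ spin, and the bound W ≤ (n−S) k_B T ln 2 then assigns one bit's worth of extractable work. This is only an illustrative sketch; the temperature T = 300 K used in the printout is an arbitrary choice.

```python
import numpy as np

kB, T = 1.380649e-23, 300.0   # J/K, room temperature (values chosen for illustration)

def von_neumann_entropy(rho):
    # entropy in bits
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# two spins, perfectly correlated but individually uniformly random:
# rho = 1/2 (|00><00| + |11><11|)
rho = np.zeros((4, 4))
rho[0, 0] = rho[3, 3] = 0.5

# CNOT (control = first spin, target = second spin)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
rho_after = CNOT @ rho @ CNOT.T

# reduced state of the second spin after the CNOT: it is now the pure state |0><0|,
# so one clean bit has been produced and k_B T ln 2 of work can be extracted from it
rho2 = rho_after.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print("second spin after CNOT:\n", np.round(rho2, 6))

# work bound W <= (n - S) k_B T ln 2 from the text, for n = 2 spins
n, S = 2, von_neumann_entropy(rho)
print("entropy S = %.3f bits, extractable work <= %.3e J" % (S, (n - S) * kB * T * np.log(2)))
```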
Moreover, our algorithm offers an additional application: enhancing the energy extraction capabilities of an intelligent extractor from a pseudo-random source. This transformation enables us to leverage the quantum computational advantage to achieve a quantum thermodynamic advantage in work extraction. Our work links the quantum exponential speed-up to data compression and presents a promising new direction for quantum autoencoders and HSP, paving the way for future investigations in the non-Abelian HSP context. While practical implementations may require simpler gate Ansatzes and the impact of noise should be further explored, our (almost) exponential advantage is based on the widely accepted conjecture of quantum HSP's superiority over classical algorithms. Like with other variational algorithms we have not provided a guarantee the algorithm will in general converge. Perhaps the most important question this work points to, is to what extent HS symmetries are present, exactly or approximately, in data that one wishes to compress. § ACKNOWLEDGEMENTS. We gratefully acknowledge discussions with Shyam Dhamapurkar, Nana Liu, Sirui Ning, Swati Singh, Tian Zhang, and support from HiSilicon. OD acknowledges support from the National Natural Science Foundation of China (Grants No. 12050410246, No.1200509, No.12050410245) and the City University of Hong Kong (Project No. 9610623). § DATA SEQUENCE GENERATED BY RECURRENCE RELATION AND COMPRESSION BY THE RECURRENCE RELATION.  In the main text, we have seen data generated directly by a generating function f. Another way to generate the time series data is via recurrence relations. For example, the arithmetic sequence {1,2,3,4,....} is generated by the recurrence relation x_n = x_n-1+1 with initial value x_1=1 and the geometric sequence {1,2,4,8,16,....} is generated by the recurrence relation x_n = 2 x_n-1 with initial value x_1=1. e.g. the Fibonacci sequence is generated by the second order recurrence relation x_n = x_n-1 + x_n-2 with initial values x_0=0 and x_1 = 1. It is said to be of the second order because the next value is determined by the previous two values, and k-th order recurrence can be defined similarly. The first-order recurrence relation is the simplest because it only depends on the previous value, which is similar to the Markovian condition. We can generate the time series data if there is a recurrence relation, x_i = g (x_i-1) , which is said to be of first order since it only depends on the previous value. More generally, the recurrence relation can depend on more than one previous value, but for simplicity, we only consider the first-order recurrence in this paper. If we can find out the recurrence relation g, then the time series data can be compressed to {x_0, g}, which contains only the initial value x_0 and the description of the function g. Compression via hidden subgroup in the recurrence relation. Similar to hidden subgroup compression defined for the generating function, we can also formulate hidden subgroup compression for recurrence relations. Now we define a group structure G with each data point x_i being the group element. Then the recurrence relation g:G→ G can be regarded as a group function where a hidden subgroup can be defined. We say the recurrence relation admits a hidden subgroup H, if for all x_i, x_i+1 = g(x_i) = g (x_i + h ) , ∀ h ∈ H , where H is a subgroup of G. 
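A toy instance of such a recurrence is sketched below: the group is _2^3 with bit-wise XOR, the subgroup is H = {000, 001}, and the update rule g is a made-up function that depends only on the coset of its argument, so that g(x_i + h) = g(x_i) for all h ∈ H. All concrete choices here are assumptions made for illustration.

```python
# toy recurrence with a hidden subgroup: G = Z_2^3 with bitwise XOR, H = {000, 001}
H = {0b000, 0b001}

def g(x):
    # depends only on the coset of x modulo H (it ignores the last bit),
    # hence g(x ^ h) == g(x) for every h in H
    return ((x >> 1) * 3 + 2) % 8

# check the hidden-subgroup property of the recurrence relation
assert all(g(x ^ h) == g(x) for x in range(8) for h in H)

def series(x0, length=8):
    xs = [x0]
    for _ in range(length - 1):
        xs.append(g(xs[-1]))
    return xs

print(series(0b010))           # S_{x_0}
print(series(0b010 ^ 0b001))   # S_{x_0 + h}: agrees with the line above from the second entry on
```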
Now a particular time series becomes S_x_0={x_0, g(x_0), g(g(x_0)), ...., g^[N-1](x_0)} , which is generated by this recurrence relation with initial value x_0, where g^[N](x_0) means applying g iteratively N times on x_0. The generating function for this time series data is f(i) = g ^[i](x_0). In fact, this generating function does not necessarily have a hidden subgroup, even though it is generated by a recurrence relation that does. When the generating function f has a hidden subgroup, it can be compressed by the hidden subgroup compression; and when it doesn't, the time series data cannot be compressed by the hidden subgroup. However, in either case, the data set of time series generated by different initial values, 𝒮={(x_0, S_x_0)|x_0 ∈ G } can be compressed. Because S_x_0 +h is exactly the same as S_x_0 starting from the second data points g(x_0). We can store S_x_0 with x_0 being a representative in G/H and restore the value S_x_0+h using the hidden subgroup structure, where we regard x_0 as a parameter of the data set and formulate this compression as finding the hidden subgroup over the parameter x_0. The above idea of parametrization can be extended to situations where the time series data {x_0(y),..., x_N-1(y)} is parametrized by a group element y∈ G; this is another situation where we can compress the data set via hidden subgroups in the recurrence relation. Suppose the time series data is generated by a known recurrence relation x_i (y) = g(y, x_i-1) , with y being a parameter. Then different values of y will generate different time series data upon the same initial value x_0, which we denote by S_x_0,y. Suppose that the recurrence relation g has a hidden subgroup H<G over the parameter y, meaning that for any x_i-1, g(y+h,x_i-1) = g(y,x_i-1) , ∀ h ∈ H . Then the data set 𝒮={(y,S_x_0,y)|y ∈ G} can be compressed by finding the hidden subgroup of the recurrence relation, and we only need to store (y, S_x_0,y) for y ∈ G/H and other time series S_x_0, y+h can be restored from S_x_0,y. Furthermore, if y can be solved given the time series {x_0(y),..., x_N-1(y)} and the functional form of g, then the time series S_x_0,y can be compressed as {x_0,y}. This corresponds to the third level compression discussed in Sec. <ref>, where we compress the data by determining the exact functional form of its generation. § ASSIGNING GROUP STRUCTURES TO TIME SERIES BY ABELIAN GROUPS ON BINARY STRINGS. Classification of finite Abelian groups. To incorporate hidden subgroup structures into the time series data {f(0), f(1), f(2),... }, we have to adopt a group structure to a set of indices i=0,1,2,3,..., making it a group G=({i},*) with * being the group operation and i's being the group elements. In fact, the fundamental theorem for finite Abelian groups states that any finite Abelian group is isomorphic to G ≅_p_1^m_1×_p_2^m_2× ... ×_p_q^m_q , where _p is the shorthand notation of the additive integer modulo group /p, p_i's are prime integers and m_i's are positive integers that are both not necessarily distinct. Moreover, we must have |G| = p_1^m_1× p_2^m_2× ...× p_q^m_q , which means p_i's must be factors of the group cardinality |G|. Restricting our discussion to groups whose elements are binary strings. In principle, we can impose any finite Abelian group structure G of the form (<ref>) to the set of indices { i|i=0,1,...,|G|-1}, as long as Eq.(<ref>) holds. However, the implementation of the algorithm at the circuit level makes a difference. 
Quantum algorithms are implemented on quantum circuits, which contain single-qubit and two-qubit unitaries acting on a set of qubit systems. The input and output are encoded in the qubit states, which are essentially bit strings. This implies that only those groups whose cardinality are powers of 2 are the simplest to encode and compute on a quantum circuit; otherwise, procedures of translating group elements into binary numbers and group operations into binary operations must be applied, which will increase the difficulty of the algorithm implementation on the circuit level and introduce unnecessary complexity to our discussions. Our work aims to demonstrate the basic ideas of using a quantum hidden subgroup algorithm to compress time series data, so we focus our discussion on groups that satisfy |G|=2^n for an integer N. Our results can be generalized to arbitrary finite Abelian groups, and we leave such generalization for future study. For simplicity, we assume the time series data is of fixed length 2^n, which means that the time is running from i=0, to i = 2^n-1. This allows an exact identification τ:{0,1}^n →{0,1,2,...,2^n-1} from any binary string of length n to the index i, i = τ (i_1 i_2...i_n-1i_n) = i_1 2^n-1 + i_2 2^n-2 + i_n-1 2 + i_n , where i_q is the q-th digit of the binary string i_1 i_2...i_n. And given the number i, the bit string i_1i_2...i_n = τ^-1(i) can be calculated by division. For example, 0=τ(000), 2=τ(010) and 6=τ(110). With this map τ, we identify the index i in its decimal format to its binary representation i_1i_2...i_n, whenever there is no ambiguity. Now, with |G|=2^n, only groups in the following form G ≅_2^m_1×_2^m_2×...×_2^m_q will be allowed by relation (<ref>), where m_i's are positive integers that are not necessarily distinct and satisfy m_1 + m_2 +...+m_q = n . If G satisfies Eq.(<ref>), we say that G is of the type (2^m_1, 2^m_2,...,2^m_q). Group structure over binary strings. The next question is, how to impose a group structure G to the bit strings {0,1}^n and therefore to all the set of indices {i}? Let us begin with a simple example of assigning the group structure G_0 that is isomorphic to _4 to the set of time values (in its binary representation) 00,01,10,11. Since G_0 is isomorphic to _4, we can specify the group G_0 by establishing a one-to-one correspondence between elements in G_0 to elements in _4, say _4 = { 0, 1, 2, 3 } G_0 = { 00, 01, 10, 11 } , where we map 00 to 0, 01 to 1, 10 to 2 and 11 to 3. However, this is not the only way to establish one-to-one correspondence; other alternative isomorphisms are _4 = { 0, 1, 2, 3 } G_1 = { 01, 10, 11, 00 } G_2 = { 10, 11, 00, 01 } ⋮ . In total there are 4!=24 ways of specifying the one-to-one correspondence and thus assign the group structure _4, among which G_0 is the most canonical one since the correspondence is given by the binary representation τ defined in Eq.(<ref>). Among these correspondences, there is a special class of isomorphisms that can be induced by permutations of the bit strings. For example, let us exchange the first bit and the second bit and then we have the following correspondence _4 = { 0, 1, 2, 3 } G_0 = { 00, 01, 10, 11 } G_3 = { 00, 10, 01, 11 } . More generally, a group structure on the set of indices can be assigned by establishing a one-to-one correspondence between elements of the index (and thus the bit strings of length n) and elements in the group of _2^m_1×_2^m_2×...×_2^m_q. In total, there are 2^n! 
ways of assigning the group structure, which is the number of different one-to-one correspondences. 2^n! is a huge number, and in the following, we will try to characterize a smaller subset of the groups that can be easily implemented on the circuit level. Specifying the group via binary operations. Another way of assigning the group structure on the set of indices is via the group operation. A group is defined by specifying the elements in the group and specifying the operations between any two elements. Because the group elements are fixed to be the bit strings, what is needed to define the group structure is to specify the group operation * between two bit strings. Restricting the length of time series to 2^n and thereby the group G of the form (<ref>) has an advantage; it enables us to implement the group operation in binary circuits (either composed by classical or quantum bits) easier. First of all, we are able to allocate N qubits to encode the group elements in a one-to-one correspondence. Secondly, group actions on the group elements can be easily implemented by bit-wise XOR or AND, or their combinations, with the assistance of ancillas. In fact, bit-wise XOR is equivalent to bit-wise addition modulo 2, which will be denoted by “⊕". For example, bit-wise addition modulo 2 gives 11 ⊕ 01 = 10, while in contrast, the addition in binary numbers modulo 4 gives 11 + 01 =00, which is the binary equivalence of 3+1=0 4. We will denote the standard addition in binary number systems as “+”. Note that the standard addition “+" modulo 2^m reduces to bit-wise addition “⊕" when m=1. Using the bit-wise addition modulo 2 “⊕" and the addition “+" in binary numbers modulo 2^m, we are able to assign the group structure of the form (<ref>) on the set of binary strings for arbitrary m_1,...,m_q. Note this does not mean that all the 2^n! ways of specifying the group structure can be constructed in this way. For example, if the time takes values of i=0,1,2,3 and we want to assign _4 by defining the group operation using only the bit-wise addition, addition “+" modulo 4, or any combination of the two, then we must have 00 isomorphic to 0∈_4, because 00 is the only bit string that does not change the second bit string upon operations of the bit-wise addition, the addition “+" modulo 4, or any of their combinations. This implies that those groups whose identity element is not 00 cannot be assigned in this way, e.g. the group G_1 and G_2 shown in Eq.(<ref>). Then, we will use an example to show the canonical way of assigning the group structure by binary operations. For example, if the time takes i=0,1,2,3,...,7, then we can represent the time by a bit string of length 3, say, 000,001,010,...,111. 3 types of group structures can be defined, i.e. _2 ×_2 ×_2, _4×_2 or _8. One way is to define the group operation as bit-wise addition modulo 2 such that 011 ⊕ 101 =110, and then the group is isomorphic to _2 ×_2 ×_2. A second way to define the addition as the standard addition of binary numbers modulo 8, say, 010+011=101, and then the group of binary strings is isomorphic to _8. A third way is to divide a bit string into two parts, where the first part consists of 2 bits and the second part consists of only 1 bit. Given two bit strings abc and xyz, define the group operation * as abc * xyz = Append[ab + xy 4 , c⊕ z] , where “Append" is the string operation that appends the second string to the end of the first string. 
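As a quick concrete check of this definition, the sketch below implements the operation * on three-bit strings and verifies the identity element, associativity, and the fact that the element 010 generates a cyclic subgroup of order 4, as one expects for _4 ×_2. The helper name op is our own choice.

```python
# group operation "*" for G ≅ Z_4 x Z_2 on 3-bit strings abc:
# add the first two bits as a number mod 4, XOR the last bit
def op(x, y):
    ab, c = int(x[:2], 2), int(x[2])
    xy, z = int(y[:2], 2), int(y[2])
    return format((ab + xy) % 4, "02b") + str(c ^ z)

assert op("011", "101") == "110"        # "01"+"10" = "11" mod 4, and 1 xor 1 = 0

strings = [format(i, "03b") for i in range(8)]
assert all(op("000", s) == s for s in strings)                   # identity element
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in strings for b in strings for c in strings)   # associativity

# the element "010" generates a cyclic subgroup of order 4
sub, x = ["000"], "010"
while x != "000":
    sub.append(x)
    x = op(x, "010")
print(sorted(sub))   # ['000', '010', '100', '110']
```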
In this way, the group is isomorphic to _4 ×_2, where we divide the bits into two parts and the first part is added using + modulo 4 and the second part is added using bit-wise addition ⊕. More generally, a group G=({0,1}^N,*) of the form (<ref>) can be constructed by dividing the bit strings into q parts, and the group operation * is defined by first adding the q parts using the standard addition “+" modulo 2^m_q respectively, and then glueing the resulting p pieces of bit strings together. This definition of group operation is the canonical way of assigning the group structure to the bit strings but it is not the only way. In fact, the way to specify a group G in the form of (<ref>) to the bit strings of length n is not unique, and this is discussed in Sec.<ref>. Having defined the group G=({0,1}^n,*), we can regard the generating function f: {0,1,...,2^n}↦{0,1}^m as a group function, mapping from the group G to bit strings of length m. Here we identify the time i to its binary representation i_1i_2...i_n = τ(i) and thus the group elements i_1i_2...i_n. In this way, the generating function can also be written as f:{0,1}^n →{0,1}^m. In the following sections, we restrict our discussion to the groups in the form of (<ref>), unless otherwise stated. § SEARCHING OVER POSSIBLE ISOMORPHISMS. In constructing variational quantum circuits for HSP, not only the types of group _2^m_1× ...×_2^m_q, but also different isomorphisms of the same group {G|G≅_2^m_1× ...×_2^m_q} should be searched. The reason that we should also search over possible isomorphism is manifested in the following example where we will see that there is a hidden subgroup with respect to group G' but not G, although both of them are isomorphic to _8. Failure of hidden subgroup compression with respect to a fixed group. Consider the time series data {0,0,0,0,1,1,1,1}, which seems to have no hidden subgroup at all. If we are given this sequence and we presume the group is G that is isomorphic to _8 through the canonical isomorphism i = τ(i_1i_2i_3) for i ∈_8 and i_1i_2i_3 ∈ G, then the algorithm fails because the time series does not have any hidden subgroup at all. This can be checked by listing the group G=({0,1}^3,*) and time series data as follows, G = { 000, 001, 010, 011, 100, 101, 110, 111 } . _8 = { 0, 1, 2, 3, 4, 5, 6, 7 } . {f(i)} = {0, 0, 0, 0, 1, 1, 1, 1 } . Here the first row is the bit strings as group elements in G ordered in the increasing time, and the second row is the group elements _8 that is one-to-one corresponding (isomorphic) to the elements in G through the map τ, and finally, the third row is the time series data. It is then clear that the repeating elements correspond to {0,1,2,3}⊂_8, which is not a subgrou, because the only non-trivial subgroups of _8 is either {0,4} or {0,2,4,6}. This suggests that the permuted time series {0,0,0,0,1,1,1,1} cannot be compressed by a hidden subgroup algorithm, regardless of the fact that there is an obvious repeating pattern in the data. Does it mean that this time series data can never be formulated as an HSP? The answer is NO. We will see that it has a hidden subgroup structure with respect to a slightly different group G' that is isomorphic to G. Restoring the hidden subgroup structure by an isomorphism. Now, we will show that the compression of the permuted time series {0,0,0,0,1,1,1,1} can in fact be formulated as an HSP, with only a minor modification to the way of assigning group structure _8 to the binary strings {0,1}^3. 
To see this, let us first review how we assigned the group _8 to the binary strings. In our earlier example, the bit string i_1i_2i_3 is identified to the group elements i∈_8 via the map τ, i = τ (i_1 i_2 i_3) = i_1 2^2 + i_2 2^1 + i_3 . For two bit strings τ^-1(i) and τ^-1(j), the group operation * is defined as τ^-1(i)*τ^-1(j) =τ^-1( i + j 8 ) . This is the canonical way of assigning the group structure _8 to the binary strings, but it is not the only way. Changing the identification map τ will give us a different specification of the group operation, resulting in a group G' that is isomorphic to _8 and therefore to G. For example, consider the map τ' such that i = τ'(i_1 i_2 i_3) = i_3 2^2 +i_2 2 + i_1 , where we swapped the role of the first bit i_1 and the third bit i_3, compared to the definition of τ. Now we specify the group operation *' as τ'^-1(i) *' τ'^-1(j) = τ'^-1(i+j 8) . We emphasise here that no matter how the identification map τ' is changed, it is used to define the group operation, and the way to identify the binary strings i_1i_2i_3 to the set of indices i remains the same; we always have time i=τ(i_1i_2i_3) and thus the time series data remains unaltered. Now, the group G'=({0,1}^3,*') and the time series data is listed as G' = { 000, 001, 010, 011, 100, 101, 110, 111 } . _8 = { 0, 4, 2, 6, 1, 5, 3, 7 } . {f(i)} = {0, 0, 0, 0, 1, 1, 1, 1 } . Then the repeating elements correspond to {0,2,4,6}, which forms a subgroup of _8. This means that the function f now has a hidden subgroup and can be compressed using our algorithm, with the only caveat being that the Fourier transform now should be over G' instead of G. This example suggests that knowing the group type is not enough; we should also search over different assignments of the same type of group to the bit strings. Formulating these together, we have the following definition. § INTRODUCTION TO QUANTUM FOURIER TRANSFORMATION AND ITS APPLICATIONS IN HIDDEN SUBGROUP PROBLEMS. §.§ Period finding and compression via quantum Fourier sampling. It was shown that the quantum Fourier sampling can be used to solve the Abelian hidden subgroup efficiently <cit.>. Here we retrieve several key results, and a more detailed introduction using group representation theory can be found in Appendix <ref>. Interested readers can also refer to <cit.> for a comprehensive review of the development of HSP. We begin with a simple problem and then present the general results. For example, consider the group _8 assigned in the canonical way by identifying the group element i∈_8 to the index i. Suppose the time series data of length 8 is {0,1,0,1,0,1,0,1}, then the period of the generating function f is 2 and the hidden subgroup is H= {0,2,4,6}, the quotient group is G/H = {0,1} and we can compress the time series data to {f(0)=0, f(1)=1}. More generally, impose the group structure G= _N to the time series i= 0, 1,2,...,N -1 with N=2^n and let the generating function f has a period r, meaning that f(i)= f(i +r) , for any time i and i+r N. Furthermore, we assume that the period r divides N, such that the generating function f has a subgroup H = ⟨ r⟩ = {0,r, 2r,... (M-1) r} where ⟨ r⟩ means the group generated by r and M = N/r =|H|. And the cosets are denoted as c_1 + H, c_2 + H, ..., c_r + H, where c_1=0, c_2=1,...,c_r=r-1 are representative elements in different cosets that form the quotient group G/H, G/H = {0, 1, 2,..., r-1} . Exploiting the coset decomposition, we have ∑_i∈ G|i⟩|f(i)⟩ = ∑_c ∈ G/H∑_h ∈ H|c + h⟩|f(i+h)⟩ . 
Our goal is thus to find the period r, so that the time series data can be compressed to {f(0), f(1),...,f(r-1)}. In general, this period r can be found by exploiting the quantum Fourier transform. The quantum Fourier transform for a cyclic group _N is a unitary transformation QFT__N = 1/√(N)∑_i,j ∈ Gχ_j(i) |j⟩⟨i| , where χ_j: _N →ℂ^* is a group homomorphism to the multiplicative group of non-zero complex numbers ℂ^* that is labeled by j and takes the form of χ_j (i) = exp(2 π j i/N) . In particular, the QFT transforms |0⟩ to 1/√(N)∑_j |j⟩, which is an equal superposition of all bit strings |j⟩. This particular transformation can also be achieved by applying the Hadamard gate on each qubit. The quantum algorithm for finding the period r was first invented by Shor <cit.>, and we review it as follows. First we prepare two registers |0⟩|0⟩, each of which consists of n qubits. Apply the QFT on the first register and we have |0⟩|0⟩↦1/√(|G|)∑_i∈ G|i⟩|0⟩ = 1/√(N)∑_i=0^N-1|i⟩|0⟩ . Then apply the quantum oracle U_f: |i⟩|j⟩↦|i⟩|f(i)+j⟩ and we have 1/√(N)∑_i=0^N-1|i⟩|f(i)⟩ =1/√(N)∑_c∈ G/H∑_h∈ H|c +h⟩|f(c)⟩ , where the equality is due to the coset decomposition of the group G shown in Eq.(<ref>). Upon measuring the second register with output f(c_0), the post-measurement state of the first register is 1/√(|H|)∑_h ∈ H|c_0 + h⟩ = 1/√(M)∑_s=0^M-1|c_0 + s r⟩ , where the equality is due to Eq.(<ref>). Apply the quantum Fourier transform again, and we have the state of the first register |ϕ⟩≡1/√(MN)∑_j=0^N-1 e^2 π c_0 j/N∑_s=0^M-1 e^2 π s j/M|j⟩, where we have used the relation N = M r. Because the geometric series ∑_s=0^M-1exp(2 π s j/M) is non-zero only if M divides j, we have |ϕ⟩ = 1/√(r)∑_t = 0^r-1exp(2 π c_0 t M/N)|t M⟩ . Denoting H^⊥ ≡{j∈ G| χ_j (h) =1 for all h ∈ H } = {0, M, ..., (r-1) M} as the orthogonal subgroup of H, we have |ϕ⟩ = 1/√(|H^⊥|)∑_j ∈ H^⊥χ_j (c_0 ) |j⟩ . Now, measure the first register in the computational basis and repeat the above process many times to obtain samples from the set H^⊥={0, M, ..., (r-1) M}. Finally, we can use a classical algorithm to find the greatest common divisor of the sampled data, which gives the value of M with high probability. In fact, it costs only O(n^2) steps for the Euclidean algorithm to find the value of M, where n= log N is the number of bits in N. §.§ Simon's problem and compression via general quantum Fourier transform.  If the group structure of the time series is not _2^n, we should use a more general quantum Fourier transform to find the hidden subgroup. Take Simon's problem <cit.> as an example, where the group structure is G=_2^n=_2 × ...×_2. As discussed in Sec.<ref>, this group G can be assigned to the time series using bit-wise addition modulo 2 “⊕", such that G=({0,1}^n,⊕). In Simon's problem, the generating function f:{0,1}^n→{0,1}^m is promised to have the following symmetry f(i) = f(i ⊕ s), ∀ i ∈{0,1}^n , where s is a fixed bit string to be found. Because s ⊕ s = 0, the hidden subgroup becomes H = {0,s}, and Simon's problem is equivalent to finding the hidden subgroup H. One way to solve Simon's problem is to check all the bit strings, which is computationally intractable. Another way is to use a quantum algorithm, which is exponentially faster. The quantum algorithm is based on the Fourier transform of the group _2 × ...×_2 instead of the cyclic group _2^n. In fact, the quantum Fourier transform for a general finite Abelian group can be defined as follows. Let G≅_N_1×_N_2× ... 
×_N_q; then elements i ∈ G can be identified with a tuple i = (i_(1), i_(2),...,i_(q)), with each entry i_(k)∈_N_k. The quantum Fourier transform for the group G is then defined to be QFT_G = 1/√(|G|)∑_i, j ∈ Gχ_j (i) |j⟩⟨i| , where χ_j: G→ℂ^* is a group homomorphism to the multiplicative group of non-zero complex numbers ℂ^* that is labeled by j and takes the form of χ_j (i) = e^2 π j_(1) i_(1)/N_1 e^2 π j_(2) i_(2)/N_2 ... e^2 π j_(q) i_(q)/N_q . Here χ_j is known as a character of the group G. For a more detailed discussion of characters and group representation theory, please refer to Appendix <ref>. For Simon's problem, the group G≅_2^n and for each bit-string i∈ G, i_(k) is the k-th bit i_k, so we have χ_j(i) = (-1)^j_1 i_1 (-1)^j_2 i_2 ...(-1)^j_n i_n≡ (-1)^j · i , where j_t and i_t are the t-th bits of the bit-strings j and i respectively, and j · i = j_1 i_1 + j_2 i_2 +...+j_n i_n. Then the quantum Fourier transform for Simon's problem is QFT__2^n = 1/√(2^n)∑_j, i∈{0,1}^n (-1)^j · i|j⟩⟨i| . Similar to the algorithm for period finding, we prepare two registers in state |0⟩, each of which consists of n qubits. Then we apply the quantum Fourier transform on the first register to generate an equal superposition of all the bit-strings, |0⟩|0⟩↦1/√(|G|)∑_i ∈ G|i⟩|0⟩ . Next, we apply the quantum oracle U_f to generate an equal superposition of all values of the function f, 1/√(|G|)∑_i ∈ G|i⟩|f(i)⟩ = 1/√(|G|)∑_c∈ G/H (|c⟩ + |c ⊕ s⟩) |f(c)⟩ . Now, we measure the second register and, conditioning on the output f(c_0), we have the post-measurement state of the first register as 1/√(2) (|c_0⟩ + |c_0⊕ s⟩) ≡|c_0 + H⟩ , where |c_0+H⟩ = 1/√(|H|)∑_h∈ H|c_0+h⟩. Then apply the QFT again, and we have |ϕ⟩≡1/√(|G|)∑_j ∈ G(-1)^j · c_0/√(2)[1+(-1)^j· s] |j⟩ . We define the orthogonal group of H as H^⊥≡{j| (-1)^j· s = 1}= {j|χ_j(i) = 1 ∀ i ∈ H } , which contains exactly the elements that have non-zero amplitudes in |ϕ⟩. This means |ϕ⟩ = 1/√(|H^⊥|)∑_j ∈ H^⊥ (-1)^j · c_0|j⟩ = 1/√(|H^⊥|)∑_j ∈ H^⊥χ_j(c_0) |j⟩ . Measuring the first register in the computational basis, we get a bit-string j ∈ H^⊥. Repeating the above process k times, we get a set of random bit-strings {j^(1),j^(2),...,j^(k)} drawn uniformly from H^⊥. Now we should solve for s using this sample of bit-strings in H^⊥. Because 1+(-1)^j· s is non-zero if and only if j · s = 0 mod 2, we can solve the linear congruence equations j^(1)_1 s_1 + j_2^(1) s_2 + ⋯ + j_n^(1) s_n = 0 mod 2 j^(2)_1 s_1 + j_2^(2) s_2 + ⋯ + j_n^(2) s_n = 0 mod 2 ⋮ j^(k)_1 s_1 + j_2^(k) s_2 + ⋯ + j_n^(k) s_n = 0 mod 2 to get the string s by obtaining each of its digits s_t, resulting in the solution of Simon's problem. More generally, Ref. <cit.> gives a detailed introduction on how to construct the linear congruence equations for all kinds of finite Abelian groups. §.§ Efficient circuit implementation of quantum Fourier transform. In this subsection, we review the results on implementing the general quantum Fourier transform at the circuit level. Note that it is straightforward to verify that QFT__2^m_1× ... ×_2^m_q = QFT__2^m_1⊗ ...⊗ QFT__2^m_q . So we only need to consider the implementation of the Fourier transform over cyclic groups. In fact, it is shown <cit.> that the quantum Fourier transform QFT__AB over the cyclic group _AB with coprime A,B can be implemented by QFT__A and QFT__B. This means that it is sufficient to consider the circuit construction of the quantum Fourier transform over _2^n and over _N with N being an odd number. 
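The tensor-product factorization stated above is easy to confirm numerically; the following sketch (ours, using numpy purely for illustration) checks that the QFT over _4 ×_2 equals the tensor product of the QFTs over the two factors.

```python
import numpy as np

def qft(N):
    """Matrix QFT over the cyclic group Z_N."""
    w = np.exp(2j * np.pi / N)
    return np.array([[w ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

N1, N2 = 4, 2                                       # Z_4 x Z_2 on three bits
F_product = np.kron(qft(N1), qft(N2))               # QFT_{Z_4} (x) QFT_{Z_2}

def char(j1, j2, i1, i2):                           # character chi_j(i) of the product group
    return np.exp(2j * np.pi * (j1 * i1 / N1 + j2 * i2 / N2))

F_direct = np.array([[char(j // N2, j % N2, i // N2, i % N2)
                      for i in range(N1 * N2)] for j in range(N1 * N2)]) / np.sqrt(N1 * N2)

assert np.allclose(F_product, F_direct)
print("QFT over Z_4 x Z_2 equals the tensor product of the factor QFTs")
```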
Since we only consider groups whose cardinality is 2^n, we review the circuit construction for QFT__2^n as follows; detailed discussions of the QFT over _N with odd N can be found in <cit.>. It can be proved <cit.> that QFT__2^n|j_1 j_2 ...j_n⟩ = 1/√(2^n)( |0⟩ + e^2π 0.j_n|1⟩) ... ( |0⟩ + e^2π 0.j_1...j_n|1⟩) , where 0.j_1 j_2 ...j_n is the shorthand notation for j_1 2^-1 + j_2 2^-2 +...+j_n2^-n. Now, we construct the quantum circuit that generates the phase e^2π 0.j_1j_2...j_n. Let H_(a) =1/√(2)[ 1 1; 1 -1 ] be the Hadamard gate applied to the a-th qubit (corresponding to j_a in the bit string |j_1j_2...j_n⟩) and let R_k = [ 1 0; 0 exp(2 π/2^k ) ] be a phase rotation operator. Now, let R_k^a→ b denote the two-qubit operation of applying the phase rotation R_k on the b-th qubit controlled by the a-th qubit, i.e. R_k is applied to qubit b when qubit a is in state |1⟩ and nothing is applied when qubit a is in state |0⟩. Then, it is straightforward to verify that R_n^n → 1...R_3^3→ 1 R_2^2→ 1H_(1)|j_1...j_n⟩ = 1/√(2)(|0⟩ + e^2 π 0.j_1 j_2 ...j_n|1⟩) ⊗|j_2 j_3 ...j_n⟩ , and similarly R_n-1^n → 2...R_3^4→ 2 R_2^3→ 2 H_(2)|j_1 j_2...j_n⟩ = |j_1⟩⊗1/√(2)(|0⟩ + e^2 π 0.j_2 ...j_n|1⟩) ⊗|j_3 ...j_n⟩ . Now, consecutively applying R_n-k+1^n → k...R_2^k+1 → k H_(k) for k=1,...,n, we have |j_1 j_2 ...j_n⟩↦1/√(2^n)( |0⟩ + e^2π 0.j_1...j_n|1⟩) ... ( |0⟩ + e^2π 0.j_n|1⟩) , which is similar to the desired quantum Fourier transformation shown in Eq.(<ref>) but with reversed qubit order. Finally, we complete the quantum Fourier transform by reversing the qubit order, using approximately ⌊ n/2⌋ SWAP operations. The total number of one-qubit and two-qubit operations used to complete the quantum Fourier transform is O(n^2) = O(log^2 N), which is very efficient. As an example, the quantum circuit for the Fourier transform over _8 applies H, a controlled-R_2, and a controlled-R_3 to qubit i_1, then H and a controlled-R_2 to qubit i_2, and finally H to qubit i_3 (circuit diagram omitted here). § INTRODUCTION TO HIDDEN SUBGROUP PROBLEM AND REPRESENTATION THEORY OF ABELIAN GROUPS. The Fourier sampling method is used to solve the HSP over Abelian groups, while it does not work for non-Abelian groups in general <cit.>. However, there are certain special cases of non-Abelian groups for which Fourier sampling can be applied effectively. For example, Fourier sampling can work for the non-Abelian HSP when the subgroup is normal, or when the group G is the dihedral group <cit.>. Moreover, Ref. <cit.> discusses using Fourier sampling as a subroutine for groups that are close to an Abelian group. There is thus potential to extend our variational method to certain types of non-Abelian groups. For this purpose, we give here a more formal and general discussion of the definition of the general HSP and the general form of Fourier sampling. In general, an HSP is about finding a hidden structure in a group G. The hidden structure depends on the function f via the following definition. Given a group G, a set X, and a group function f: G→ X, a subgroup H is called a hidden subgroup if it satisfies f(g_1) = f(g_2) if and only if g_1H = g_2H, where the g_iH are cosets of H. The hidden subgroup problem is to determine H given access to the function f as a black box. The definition here is a generalization of the Abelian HSP. Take the period finding problem as an example: the group G has the elements of ℤ, and the group operation is ordinary addition +. Thus, the function f is a group function which maps (ℤ, +) to X= ℤ, i.e. f: ℤ→ℤ. 
The hidden subgroup of interest H={0,r,2r,...} is generated by r, i.e. H={nr}_n=0,1,...,∞. The HSP is equivalent to finding r in this example, given black-box access to f. This section is organized into three subsections. Firstly, in section <ref>, we introduce group representation theory, a mathematical tool that proves to be highly useful in solving the HSP. Secondly, in section <ref>, we present a generalized version of Fourier sampling using the language of group representation theory. Lastly, in section <ref>, we discuss the classical post-processing algorithm for the outcomes of Fourier sampling, also generalized in the language of group representation theory. By combining these three subsections, we extend the description and treatment of the HSP from the Abelian case toward the general, non-Abelian setting. This provides a framework to explore how our scheme can be extended to the non-Abelian case, as well as to evaluate its performance in this context. §.§ Mathematical Preliminary: Group representations and characters The quantum period finding algorithm <cit.> finds this hidden subgroup, which means it solves an HSP. Other HSPs have been solved using similar quantum techniques over finite Abelian groups, while solving the HSP over non-Abelian groups still remains an open question. This paper will mainly focus on the finite Abelian case. The standard method for solving the finite Abelian HSP is called quantum Fourier sampling. Before we begin describing Fourier sampling, certain necessary representation theory preliminaries will be introduced. A group homomorphism ρ : G → GL(V) is called a group representation of the group G, where GL(V) means the general linear group of invertible matrices acting on the vector space V with matrix multiplication as the group operation. Let (ρ, V) be a representation of G, and consider a subspace U⊂ V. U is called a G-invariant subspace of V if for all elements g∈ G and all vectors u∈ U, ρ(g)u∈ U . Let (ρ, V) be a representation of the group G. ρ is defined as an irreducible representation (irrep) if the only G-invariant subspaces of V are V and {0}. For an Abelian group, the representation spaces of all the non-equivalent irreducible representations are one-dimensional, i.e. the invariant vector space has only one basis vector. The irreps can then be labelled by a group element, ρ_h with h∈ G. The following process shows how this is done. The finite Abelian group G is isomorphic to a direct product of finite cyclic groups ℤ_p_i, G≃ℤ_p_1×ℤ_p_2×⋯×ℤ_p_s. The irreps of the cyclic group ℤ_p are ρ_t(q) = ω_p^qt, where ω_p = e^2π i/p, and t is an integer. Let the group element g∈ G be g = a_1^q_1 a_2^q_2⋯ a_s^q_s, where ⟨ a_i ⟩≃ℤ_p_i, q_i∈ℤ, and ⟨ a_i ⟩ is the group generated by a_i. Similarly, denote h∈ G as h = a_1^t_1 a_2^t_2⋯ a_s^t_s. Then, all non-equivalent irreps of finite Abelian groups are ρ_h(g) := ω_p^hg:= ω_p_1^t_1q_1ω_p_2^t_2q_2⋯ω_p_s^t_sq_s . Equation <ref> gives the precise expression of all non-equivalent irreps over finite Abelian groups. These irreps form a set of basis functions because they are orthogonal to each other <cit.>. The inner product of two group functions f_1, f_2 : G →ℂ is defined by ⟨ f_1, f_2 ⟩_G = 1/|G|∑_g∈ Gf_1(g) f_2^*(g), where f^*(g) denotes the complex conjugate of f(g). Considering irreps ρ_h(.) as such functions, we have the following theorem. The irreps of finite Abelian groups are orthogonal: ⟨ρ_h, ρ_h'⟩_G = δ_h,h' . Details of the theorem and its proof can be found, e.g., in Ref. <cit.>. 
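A quick numerical sanity check of the orthogonality theorem (our illustration, not part of the original text) for the cyclic group ℤ_6:

```python
import numpy as np

p = 6
g = np.arange(p)
rho = lambda h: np.exp(2j * np.pi * h * g / p)      # rho_h(g) = omega_p^{hg}

# Gram matrix of the irreps under <f1, f2>_G = (1/|G|) sum_g f1(g) f2^*(g)
gram = np.array([[np.mean(rho(h) * rho(k).conj()) for k in range(p)] for h in range(p)])
assert np.allclose(gram, np.eye(p))
print("the irreps of Z_6 are orthonormal under <.,.>_G")
```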
Due to the orthogonality relation of Abelian representations, the representations ρ_h can serve as basis functions for group functions f: G→ℂ. In particular, a function f can be expanded in terms of representations <cit.> as f(g) = ∑_h∈ G⟨ f,ρ_h ⟩_Gρ_h(g) . The Fourier transform is a basis transformation that extracts the coefficients with respect to these basis functions. For functions over Abelian groups, the basis functions can be taken to be the irreps ρ_h(g). Thus, the Fourier transform over an Abelian group can be defined from this expansion. The Fourier transform of a function f:G→ℂ over a finite Abelian group is defined as f̂(h) := ⟨ f,ρ_h⟩_G = 1/√(|G|)∑_g∈ G f(g) ρ_h(g) . Equation <ref> leads to the quantum Fourier transform (QFT) over finite Abelian groups. However, it is still some distance away from the familiar FT. The next few paragraphs will show that the FT given by Eq. (<ref>) is consistent with the QFT we are familiar with, QFT|i⟩ = 1/√(|G|)∑_j ρ_j(i)|j⟩ . Denote the computational basis as |i⟩ = ∑_j δ_i(j)v⃗_j, where δ_i(j) is the delta function and {v⃗_j} is a set of basis vectors. The Fourier transform of |i⟩ is FT|i⟩ = FT∑_j = 0^|G|-1δ_i(j)v⃗_j = ∑_j = 0^|G|-1δ̂_i(j) v⃗_j = 1/√(|G|)∑_j∑_kδ_i(k) ρ_j(k)v⃗_j = 1/√(|G|)∑_jρ_j(i)v⃗_j = 1/√(|G|)∑_jρ_j(i) |j⟩. Eq. (<ref>) explains why the quantum FT takes the form in Eq. (<ref>): by considering a state |ψ⟩ as a discrete function ϕ: G→ℂ, an FT acting on the state |ψ⟩ can be regarded as acting on the function ϕ, where ϕ gives the coefficients of the state in the computational basis. §.§ Fourier sampling The above gives the matrix form of the FT, which also defines the quantum (unitary) Fourier transform (QFT) in Eq. (<ref>). With the definition of the general QFT over a finite Abelian group G, we can now describe the Fourier sampling (FS) <cit.> algorithm that solves the HSP over G. The process of FS is described in Algorithm <ref> and shown in Fig. <ref>. Firstly, use Hadamard gates to get the superposition state ∑_x∈ G|x,0⟩. Secondly, apply the oracle U_f, which satisfies U_f|x,0⟩ = |x,f(x)⟩, to get the quantum data table ∑_x|x,f(x)⟩. Finally, read out the second quantum register and do a QFT on the first register. The function f is constant on cosets of the hidden subgroup, which is specifically expressed as f(g_1) = f(g_2) iff g_1H= g_2H; thus g_1,g_2∈ cH, where c is a group element that labels the coset cH = g_1H = g_2H. Therefore, the state in the first register is 1/√(|H|)∑_h∈ H|ch⟩. The measurement results after the QFT are labels of irreps of the group G, which include the information about the hidden subgroup. The whole Fourier sampling process is described in Alg. <ref>. The state 1/√(|H|)∑_h∈ H|ch⟩ actually contains the information about H, but we cannot directly extract the information by measuring it. After measuring the second register of ∑_x|x,f(x)⟩, a coset representative c is chosen at random, yielding the state 1/√(|H|)∑_h∈ H|ch⟩. Then, if we directly measure the first register, it will randomly collapse to some |ch⟩. Combining the two processes, the probability to get any state |g⟩ is uniform, Prob(|g⟩) = 1/|G|. Thus, directly sampling the first quantum register is useless. However, there is a way to remove the influence of the random coset representative c, which is called shifting in Jozsa's analysis <cit.>. As mentioned in Equation <ref>, the QFT of this state is QFT1/√(|H|)∑_h∈ H|ch⟩ = 1/√(|G||H|)∑_h∈ H∑_g∈ Gρ_g (ch)|g⟩. 
Because the representations can be chosen to be unitary <cit.>, multiplying by the phase ρ_g(c) does not change the norm. This fact gives us the technique to calculate the probability of the state |g⟩, | ∑_h ρ_g(ch) |^2 = | ρ_g(c)∑_h ρ_g(h) |^2 = | ∑_h ρ_g(h) |^2 . After applying the QFT, the information about H can be extracted by measuring the first register. The measurement yields a |g⟩. Then Equation <ref> is used to compute the representation ρ_g. This process samples a representation that satisfies H⊂(ρ), where (ρ) is the kernel of the representation ρ, (ρ) := {k | ρ(k) = 1} if ρ is a representation of an Abelian group. Here is the reason. The probability distribution of the sampled representations is determined by the amplitude |∑_hρ_g(ch)|^2. ∑_hρ_g(ch) = ∑_texp(2π i∑_j (q_jt_j/p_j)), where t is the parameter that specifies h = ∏ a_i^t_i. Because H<G, H = ⟨ a_i^r_i⟩, Equation <ref> is non-zero only when p_j | q_jr_j. In other words, the representations that can be sampled by Algorithm <ref> satisfy ρ_g(h) = 1 for all h∈ H, which means H⊂(ρ). Looping Algorithm <ref> m times, m representations can be sampled. The set of sampled representations gives the hidden subgroup H via a classical algorithm. §.§ Classical algorithm There is also classical post-processing. A set of sampled representations reflects a lot of information about the hidden subgroup. The following algorithm <cit.> requires only m = 4log_2|G| loops to recover the hidden subgroup H with probability at least 1-2exp(-log_2|G|/8). The idea of the classical post-processing algorithm is to utilize the common kernel of the sampled irreps. A subgroup that contains the hidden subgroup H is obtained by calculating the kernel of each irrep sampled by Alg. <ref>, because they satisfy H⊂(ρ). By repeatedly intersecting those subgroups, the intersection converges exponentially fast to their common part, which is exactly what we want, the hidden subgroup H. The whole algorithm is shown in Alg. <ref>. The computational complexity of FS is mainly caused by the QFT over the group, while the QFT over Abelian groups has been efficiently constructed. Kitaev proposed a method to construct the QFT over ℤ_p for any p ∈ℤ^+ <cit.>, and Hales and Hallgren improved it in 2000 <cit.>. The most famous classical algorithm for computing the Fourier transform is the fast Fourier transform, which is based on a decomposition of the group structure into a chain of subgroups. In fact, there are several more general approaches to the classical algorithm for the Fourier transform on a finite group, which are based on a subgroup tower <cit.>. The subgroup tower is a subgroup chain G = H_t>H_t-1>⋯ > H_0 = {0}, where H_i+1>H_i means H_i is a subgroup of H_i+1. For example, when G = ℤ_2^n, one can take t=n and H_i ≃ℤ_2^i. From Eq. (<ref>), we can see that the classical approach to calculating the FT is based on the following recursion. f̂(j) = 1/√(2^n)∑_k = 0^2^n-1 f(k)ρ_j(k) = 1/√(2) [ ∑_k = 0^2^n-1-1f(2k)ω^2jk_2^n/√(2^n-1) + ω_2^n^j∑_k = 0^2^n-1-1f(2k+1)ω^2jk_2^n/√(2^n-1)] = 1/√(2)[ f̂_1 (j) + ω_2^n^j f̂_2(j)]  . Here f_1(k) = f(2k) is f restricted to the subgroup of even elements, which is isomorphic to ℤ_2^n-1, so f̂_1 (j) is the FT over the subgroup ℤ_2^n-1 (and similarly for f_2(k) = f(2k+1)). Suppose the FT over ℤ_2^n takes time T(2^n) to compute; then Equation <ref> implies that T(2^n) = 2T(2^n-1)+𝒪(2^n) in the classical case, which gives T(2^n) = 𝒪(n2^n). In the quantum case, a single application of the FT over the subgroup suffices to compute all the f̂(j) in Equation <ref>, so T(2^n) = T(2^n-1)+𝒪(n). 
That gives the computation time of the QFT over ℤ_2^n as T(2^n) = 𝒪(n^2). Ref. <cit.> explicitly gave the quantum circuit of the QFT based on this subgroup tower idea, and also extended it to some special kinds of non-Abelian groups. With these efficient constructions of the QFT over Abelian groups, the HSP over Abelian groups can be solved efficiently.
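As a closing illustration of the full pipeline described in this section (a purely classical simulation we add here; it is not part of the original text), the following sketch emulates Abelian Fourier sampling for G = ℤ_16 with hidden subgroup H = ⟨4⟩ = {0,4,8,12} and then recovers H by intersecting the kernels of the sampled irrep labels.

```python
import numpy as np

N, rng = 16, np.random.default_rng(3)
H = np.array([0, 4, 8, 12])                   # hidden subgroup <4> in Z_16
f = np.arange(N) % 4                          # constant exactly on the cosets of H

F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)   # QFT over Z_N

def fourier_sample():
    c0 = rng.integers(N)                      # emulate measuring the second register
    coset = np.where(f == f[c0])[0]
    state = np.zeros(N, dtype=complex)
    state[coset] = 1 / np.sqrt(len(coset))    # post-measurement coset state
    probs = np.abs(F @ state) ** 2            # supported on H_perp = {0, 4, 8, 12}
    return int(rng.choice(N, p=probs / probs.sum()))

estimate = set(range(N))                      # classical post-processing: intersect kernels
for _ in range(4 * int(np.log2(N))):          # m = 4 log2|G| samples, as in the text
    g = fourier_sample()
    estimate &= {k for k in range(N) if g * k % N == 0}   # ker(rho_g) for Z_N
print("recovered hidden subgroup:", sorted(estimate))      # expected: [0, 4, 8, 12]
```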
http://arxiv.org/abs/2306.04585v1
20230605123944
RTAEval: A framework for evaluating runtime assurance logic
[ "Kristina Miller", "Christopher K. Zeitler", "William Shen", "Mahesh Viswanathan", "Sayan Mitra" ]
cs.LO
[ "cs.LO" ]
K. Miller et al. University of Illinois Urbana Champaign, Champaign IL 61820, USA Rational CyPhy, Inc., Urbana IL 62802 RTAEval : A framework for evaluating runtime assurance logic Kristina Miller 1 Christopher K. Zeitler 2 William Shen 1 Mahesh Viswanathan 1 Sayan Mitra 1 Received 21 February, 2023; accepted 5 June, 2023 =============================================================================================== Runtime assurance (RTA) addresses the problem of keeping an autonomous system safe while using an untrusted (or experimental) controller. This can be done via logic that explicitly switches between the untrusted controller and a safety controller, or logic that filters the input provided by the untrusted controller. While several tools implement specific instances of RTAs, there is currently no framework for evaluating different approaches. Given the importance of the RTA problem in building safe autonomous systems, an evaluation tool is needed. In this paper, we present RTAEval, a low-code framework that can be used to quickly evaluate different RTA logics for different types of agents in a variety of scenarios. RTAEval is designed to quickly create scenarios, run different RTA logics, and collect data that can be used to evaluate and visualize performance. In this paper, we describe different components of RTAEval and show how it can be used to create and evaluate scenarios involving multiple aircraft models. § INTRODUCTION As autonomous systems are deployed in the real world, their safe operation is becoming critical in a number of domains such as aerospace, manufacturing, and transportation. The need for safety is often at odds with the need to experiment with, and therefore deploy, new untrusted technologies in the public sphere. For example, experimental controllers created using reinforcement learning can provide better performance in simulations and controlled environments, but assuring safety in real world circumstances is currently beyond our capabilities for such controllers. Runtime assurance (RTA) <cit.> addresses this tension. The idea is to introduce a decision module that somehow chooses between a well-tested Safety controller and the experimental, Untrusted controller, assuring safety of the overall system while also allowing experimentation with the new untrusted technology where and when possible. Specific RTA technologies are being researched and tested for aircraft engine control <cit.>, air-traffic management <cit.>, and satellite rendezvous and proximity operations <cit.>. The Simplex architecture <cit.> first proposed this idea in a form that is recognizable as RTA. Since then, the central problem of designing a decision module that chooses between the different controllers has been addressed in a number of works such as SimplexGen <cit.>, Black-Box Simplex <cit.>, and SOTER <cit.>. The two main approaches for building the decision module are based on (a) an RTASwitch, which chooses one of the controllers using the current state, or (b) an RTAFilter, which blends the outputs from the two controllers to create the final output. In creating an RTASwitch, the decision can be based on forward-simulation of the current state <cit.>, model-based <cit.> and model-free forward reachability <cit.>, or model-based backward reachability <cit.>. 
The most common filtering method is Active Set Invariance Filtering (ASIF) <cit.>, wherein a control barrier function is used to blend the control inputs from the safety and untrusted controllers such that the system remains safe with respect to the control barrier functions <cit.>. While these design methods for the decision module have evolved quickly, a software framework for evaluating the different techniques has been missing. In this paper, we propose such a flexible, low-code framework called (Figure <ref>). The framework consists of a module for defining scenarios, possibly involving multiple agents; a module for executing the defined scenario with suitable RTASwitches and RTAFilters; and a module for collecting and visualizing execution data. RTAEval allows different agent dynamics, decision modules, and metrics to be plugged-in with a few lines of code. In creating RTAEval, we have defined standardized interfaces between the agent simulator, the decision module (RTA), and data collection. In Section <ref>, we give an overview of . In Section <ref>, we discuss how scenarios are defined, and, in Section <ref>, we discuss how the user should provide decision modules (also called the RTALogic). In Section <ref>, we discuss data collection, evaluation, and visualization. Finally, in Section <ref>, we show a variety of examples implemented in . A tool suite for this framework can be found at <https://github.com/RationalCyPhy/RTAEval>. § OVERVIEW OF THE FRAMEWORK The three main components of are (a) the scenario definition, (b) the scenario execution, and (c) the data collection, evaluation, and visualization module (See Figure <ref>). A scenario is defined by the agent and its low-level controller, the unsafe sets, the untrusted and safety controllers, the time horizon for analysis, and the initial conditions. Given this scenario definition, the scenario is executed iteratively over the specified time horizon. During each iteration of the closed-loop execution of the RTA-enabled autonomous system, the current state of the agent and the sets of unsafe states are collected. This observed state information is given to both the untrusted and safety controller, which each compute control commands. Both of these commands are evaluated by the user-provided decision module (i.e., RTA logic), which computes and returns the actual command to be used by the agent. The agent then updates its state, and the computation moves to the next iteration. While the execution proceeds, data – such as the RTA computational performance, controller commands, agent states, and observed state information of the unsafe sets – is collected via the data collection module. At the end of an execution, this data is evaluated to summarize the overall performance of the RTA. This summary includes computation time of the RTA logic, untrusted versus safety controller usage, and the agent's distance from the unsafe set. We also provide a visualization of this data. A low-code tool suite of the framework is written in Python, and it is flexible in that it allows for a wide variety of simulators and can be generalized to scenarios where multiple agents are running a variety of different RTA modules. Simple Python implementations of vehicle models (some of which we provide in ) can be incorporated directly. However, users can incorporate new agent models within as long as the agent has a function that defines the dynamics and low-level controller of the agent and returns the state of the agent at the next time step. 
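For concreteness, here is a minimal sketch of the kind of user-supplied agent model described above; the class and method names are our own assumptions and may differ from RTAEval's actual interface.

```python
import numpy as np

class DoubleIntegratorAgent:
    """Toy agent model: an identifier, a state, and a step function returning the next state."""
    def __init__(self, agent_id, state, dt=0.05):
        self.id = agent_id
        self.state = np.asarray(state, dtype=float)   # [position, velocity]
        self.dt = dt

    def step(self, mode):
        """Advance one time step under the given control command (mode) and return the new state."""
        accel = {"accel": 1.0, "brake": -1.0, "coast": 0.0}[mode]   # toy low-level controller
        p, v = self.state
        self.state = np.array([p + self.dt * v, v + self.dt * accel])
        return self.state

agent = DoubleIntegratorAgent("car0", [0.0, 10.0])
print(agent.step("accel"))   # e.g. [0.5, 10.05]
```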
A full example of such an agent model is provided in Example <ref> and Figure <ref>. The safety and untrusted controllers should also be encoded in , which simply takes in the command (or mode) to be used over the next time step. Higher fidelity simulators such as CARLA <cit.> and AirSim <cit.> can also be used in place of for the execution block. The observed state information would need to be provided to our data collection, evaluation, and visualization tool in the format seen in Figure <ref>. §.§ Scenario definition and execution A scenario is defined by the agent, unsafe sets, safety and untrusted controllers, initial conditions, and time horizon T > 0. The simulation state at time t ∈ [0,T] consists of the agent state, the unsafe set definition, and the control command at time t. The agent has an identifier, a state, and some function that takes in some control command at time t and outputs the system state at time t+1. The unsafe sets are the set of states that the system must avoid over the execution of the scenario. We say that the agent is safe if it is outside the unsafe set. The safety and untrusted controllers compute control commands for the system, which are then filtered through the RTA logic, as discussed further in Section <ref>. The initial conditions define the simulation state at time 0. Then, given a scenario with some time horizon T and an RTA logic, an execution of the scenario is a sequence of time-stamped simulation states over [0,T]. Note that, while we define an execution as a discrete time sequence of simulation states, the actual or real-world execution of the scenario may be in continuous time; thus, we simply sample the simulation states at a predefined interval. We call the part of the execution that contains only the sequence of agent states the agent state trace. Similarly, we call the part of the execution that only contains the sequence of control commands the mode trace and the part that only contains the sequence of unsafe set states the unsafe set state trace. (Figure: Execution structure required by the evaluation and visualization.) In order for our evaluation and visualization to work, the execution must be given to the data collection as a dictionary, the structure of which is shown in Figure <ref>. Here, there are three levels of dictionaries. The highest level dictionary has the keys and , which point to dictionaries containing the state and mode traces of the agents and state traces of the unsafe sets respectively. The second level of dictionaries has keys that correspond to different agents and unsafe sets. We call these keys the agent and unsafe set IDs. Each agent ID points to a dictionary containing the state and mode traces of that agent. The state trace is a list of time-stamped agent states, and the mode trace is a sequential list of control commands. Each unsafe set ID points to a dictionary containing the set type and state trace of that unsafe set. The set type is a string that tells what type of set that particular unsafe set is. Currently, RTAEval supports the following set types: point, ball, hyperrectangle, and polytope. Each set has a definition that, together with the type, defines the set of states contained within the unsafe set. Then, the state trace for an unsafe set is a sequence of time-stamped definitions of the set. 
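A hypothetical instance of this dictionary is sketched below; the top-level key names are elided in the text, so the names used here ("agents", "unsafe_sets", "state_trace", and so on) and all numeric values are placeholders of ours, not RTAEval's actual schema.

```python
execution = {
    "agents": {
        "follower": {                                      # agent ID
            "state_trace": [[0.00, 0.0, 10.0],             # [time, position, velocity]
                            [0.05, 0.5, 10.0]],
            "mode_trace": ["untrusted", "untrusted"],      # control command per time step
        },
    },
    "unsafe_sets": {
        "leader_zone": {                                   # unsafe set ID
            "type": "ball",                                # one of: point, ball, hyperrectangle, polytope
            "state_trace": [[0.00, [30.0, 10.0], 5.0],     # [time, center, radius]
                            [0.05, [30.5, 10.0], 5.0]],
        },
    },
}
```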
Consider the following adaptive cruise control (ACC) scenario shown in Figure <ref> as a running example: An agent with state x = [p, v]^⊤ has dynamics given by f(x, m) = [ 0 1; 0 0 ] x + [ 0; 1 ] a(x, m), where a(x, m) = g_(x) if m = and a(x, m) = g_(x) if m =. The agent tries to follow at distance d > 0 behind a leader moving at constant speed . The position of the leader at time t is given by p_(t). Then, the untrusted controller g_ and safety controller g_ are given by g_(x) = k_1 ((p_L(t) - d) - p) + k_2 ( - v) and g_(x) = a_max (p_L(t) - p) > d -a_max else, where k_1 > 0 and k_2 > 0. The function is a composition of the untrusted controller, the safety controller, and the dynamics function of the system. A collision between the agent and leader occurs if p_(t) - p(t)≤ c, c < d. There is then an unsafe set centered on the leader agent, and it is defined by = { [p, v, t]^⊤∈× | p_(t) - p ≤ c }. The function then takes in the current state of the simulator and creates the unsafe set centered on the leader. The initial conditions for this scenario are then the initial agent state x_0, the initial leader state x_0, and the time horizon T > 0. (Figure: Example visualization of the scenario defined in Example <ref>. The leader is shown in black, the follower is shown in orange, and the unsafe region is shown in red.) This scenario is shown in our low code framework in Figures <ref> and <ref>. The dynamics of the agent are defined in in lines 11-26 of Figure <ref>. The proportional controller is defined in lines 1-4 and the bang-bang controller is defined in lines 6-9. This is all contained within a class . In Figure <ref>, we set up the scenario. In lines 2-5, we define the goal point for the agent. In lines 7-15, we create the agent, the leader, and the unsafe set. Finally, in line 18, we initialize the scenario to be executed; in lines 21-26, we add the agents and unsafe sets to the scenario; and in lines 29-30, we set up the scenario parameters. §.§ RTA logics We provide an RTA base class that can be used in . The user must provide the RTA logic to be evaluated. This logic takes in an observed state and outputs the control command to be used by the plant. This observed state information has to be provided in the format shown in Figure <ref> for data collection, evaluation, and visualization to work. The RTA base class is shown in Figure <ref>. We provide the functions and . Users must provide the switching logic as . When creating RTA, the user can decide to use our data collection by running in . This will create a data collection object called , which saves the data used for our evaluation (see Section <ref>). The switch is performed in , which also stores the current perceived state of the simulator from the point of view of the agent, as well as the time to compute the switch. The user provided switching logic takes in the current state of the simulator and returns the mode that the agent should operate in. To create different logics, the user must create an RTA class derived from the RTA base class, which implements the function . An example of this is given in Example <ref>. An example of a simple RTA switching logic can be seen in Figure <ref>. This is a simulation-based switching logic that was designed for the adaptive cruise control introduced in Example <ref>. Here, the future states of the simulator are predicted over some time horizon T and saved as in line 2. 
We then check over this predicted trajectory to see if the agent ever enters the unsafe set in lines 3-11. If it does, then the safety controller is used, and if it does not, then the untrusted controller is used. Once is created, we add it to a new class called and use it to create an RTA object called for . We can then change line 22 in Figure <ref> to . This will associate with and run the RTA switching logic every time the state of is updated. §.§ Data collection, evaluation, and visualization We now discuss the data collection, evaluation, and visualization tool which is provided as a part of . This tool is a class that has some collection functions and post-processing functions. To use the data collection and evaluation functionalities provided, the user must add the line when creating the RTA object. Data collection occurs via the functions and . Here, collects the simulation traces, and collects the time it takes for the RTA module to compute a switch. An example of how the data collection can be incorporated in the RTA module is shown in Figure <ref>. The traces are collected and stored as a dictionary of the form shown in Figure <ref>. Once the data has been collected over a scenario, we can use them to evaluate the performance of the RTA over a scenario. Examples of the data evaluation, as well as screenshots from our simulator are shown in Section <ref>. A summary of the RTA's performance in the scenario can be quickly given by running . The main metrics that we study are the following: Computation time gives the running time of each time it is invoked. We provide the average, minimum, and maximum times to compute the switch. Distance from unsafe set is the distance between the ego agent and the unsafe sets. We also allow the user to find the distance from other agents in the scenario. Time to collision (TTC) is the time until collision between the ego agent and the other agents if none of them change their current trajectories. Finally, we also provide information on the percent controller usage, which is the proportion of time each controller is used over the course of the scenario. We also provide information on the number of times a switch occurs in a scenario. Example results are shown in Section <ref>. § EXAMPLES In this section, we present some examples using our provided suite of tools for . We evaluate two different decision module logics: and . is the simulation based switching logic introduced in Example <ref>. is similar to but uses reachable sets that contain all possible trajectories of the agent as the basis of the switching logic. We evaluate these RTAs in 1-, 2-, and 3-dimensional scenarios with varying numbers of agents. These scenarios are described in more detail in Table <ref>. Here, the workspace denotes the dimensions of the physical space that the systems live in. Note that, while all the examples presented have some physical representation, this is not a necessary requirement of the tool. We also provide pointers to where the dynamics of the agents can be found, as well as the untrusted and safety controllers used. Visualizations of the scenarios can be seen in Figures <ref> and <ref>. Each of these scenarios is executed using , and the three RTA logics are created for them. Data is collected over the scenario lengths in Table <ref>. Note that the scenario length is the simulation time for the scenario and not the real time needed to run the scenario. 
We run these scenarios with varying numbers of agents and present the running time of the scenario execution and evaluations in Table <ref>. The simulation time step is set to 0.05 for all scenarios. Here, exec time is the time it takes to run the scenario, RTA comp time is the average time it takes for the user-provided RTA logic to make a decision, % RTA comp is the percentage of the exec time that is taken by the RTA decision module, and eval time is the time it takes to get a full summary of how the RTA performs for each agent. The evaluation summary includes the average decision module computation time, controller usage, distance from the unsafe sets and other agents, and time to collision with the unsafe sets and other agents. We note that a majority of the run time for the scenario execution is due to the RTA logic computation time and not our tool. Additionally, while the run time of the evaluation is affected by the number of agents in the scenario, it is mostly affected by the set type of the unsafe set, where the polytope in the scenario causes the biggest slow down in evaluation time. The summary of an RTA's performance is given in a text file from which visualizations like the one in Figure <ref> can be easily created. In addition to the computation time, distance from the unsafe sets, distance from the other agents, and controller usage, the minimum times to collision (TTC) for the unsafe sets and other agents are also reported. The summary information is saved in such a way that users can pull up snapshots of the scenario at any point in time. This means that the user can examine the state of the scenario that caused an unwanted result. Such functionality aids in the rapid prototyping of RTA technologies and logics. § CONCLUSION We presented the RTAEval suite of Python-based tools for evaluating different runtime assurance (RTA) logics. Different RTA switching logics can be quickly coded in , and we demonstrate its functionality in rapid prototyping of RTA logics on a variety of examples. can be used in multi-agent scenarios and scenarios with perception models. Interesting next steps might include extending the functionality of to filtering methods such as ASIF and scenarios that involve effects of proximity-based communication.
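To give a flavour of the workflow evaluated above without reproducing the paper's figures, the following self-contained Python sketch (ours; all names, gains, and initial conditions are illustrative assumptions, not RTAEval's API) wires together the ACC running example, a simulation-based switching RTA, and a toy evaluation summary reporting controller usage and the minimum gap to the leader.

```python
import numpy as np

DT, T_END, HORIZON = 0.05, 20.0, 2.0
K1, K2, A_MAX = 1.0, 2.0, 3.0            # proportional gains and acceleration bound (assumed)
D, C = 10.0, 4.0                         # desired gap d and collision gap c (assumed)
V_LEAD, P_LEAD0 = 15.0, 40.0             # constant leader speed and initial leader position (assumed)

def leader_pos(t):
    return P_LEAD0 + V_LEAD * t

def g_untrusted(x, t):                   # proportional controller from the ACC running example
    p, v = x
    return K1 * ((leader_pos(t) - D) - p) + K2 * (V_LEAD - v)

def g_safety(x, t):                      # bang-bang controller from the ACC running example
    p, v = x
    return A_MAX if leader_pos(t) - p > D else -A_MAX

def step(x, t, mode):                    # double-integrator dynamics, forward-Euler discretization
    a = g_untrusted(x, t) if mode == "untrusted" else g_safety(x, t)
    return np.array([x[0] + DT * x[1], x[1] + DT * a])

def unsafe(x, t):                        # collision when the gap to the leader drops below c
    return leader_pos(t) - x[0] <= C

def rta_switch(x, t):                    # simulation-based switching logic
    xp, tp = x.copy(), t
    for _ in range(int(HORIZON / DT)):
        xp, tp = step(xp, tp, "untrusted"), tp + DT   # predict under the untrusted controller
        if unsafe(xp, tp):
            return "safety"
    return "untrusted"

x, t, modes, gaps = np.array([0.0, 25.0]), 0.0, [], []
while t < T_END:
    mode = rta_switch(x, t)
    x, t = step(x, t, mode), t + DT
    modes.append(mode)
    gaps.append(leader_pos(t) - x[0])

usage = {m: round(100 * modes.count(m) / len(modes), 1) for m in set(modes)}
print("controller usage (%):", usage)
print("minimum gap to leader:", round(min(gaps), 2), "(collision gap c =", C, ")")
```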
http://arxiv.org/abs/2306.01838v2
20230602180030
Existence of length minimizers in homotopy classes of Lipschitz paths in $\mathbb{H}^1$
[ "Daniel Perry" ]
math.MG
[ "math.MG" ]
Existence of length minimizers in homotopy classes of Lipschitz paths in ℍ^1 Daniel Perry July 31, 2023 ============================================================================ We show that for any purely 2-unrectifiable metric space M, for example the Heisenberg group ^1 equipped with the metric, every homotopy class [α] of Lipschitz paths contains a length minimizing representative α that is unique up to reparametrization. The length minimizer α is the core of the homotopy class [α] in the sense that the image of α is a subset of the image of any path contained in [α]. Furthermore, the existence of length minimizers guarantees that only the trivial class in the first Lipschitz homotopy group of M with a base point can be represented by a loop within each neighborhood of the base point. The results detailed here will be used to define and prove properties of a universal Lipschitz path space over ^1 in a subsequent paper. Key words and phrases. Heisenberg group, contact manifolds, unrectifiability, geometric measure theory, sub-Riemannian manifolds, metric trees Mathematical Reviews subject classification. Primary: 53C17, 28A75 ; Secondary: 57K33, 54E35 Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant Number DMS 1641020. The author was also supported by NSF awards 1507704 and 1812055 and ARAF awards in 2022 and 2023. § INTRODUCTION In this paper, we prove the following theorem. Let M be a purely 2-unrectifiable metric space, for example the Heisenberg group ^̋1 endowed with the metric. For any homotopy class [α] of Lipschitz paths in M, there exists a length minimizing Lipschitz path α∈[α] where M(α)=inf{M(α) | α∈[α]}. Moreover, for any representative α∈[α] in the class, (α)⊂(α). A length minimizer α∈[α] can be thus thought of as the core of the homotopy class [α] where the extraneous branches of the paths in the class have been pruned. An immediate consequence is that for every point in a purely 2-unrectifiable metric space, only the trivial class in the first Lipschitz homotopy group can be represented by a loop within each neighborhood of the point (Corollary <ref>). Studying metric spaces, in particular Heisenberg groups endowed with a metric, through Lipschitz homotopies was introduced in <cit.> with the definition Lipschitz homotopy groups. Since, Lipschitz homotopy groups have been calculated for various sub-Riemannian manifolds in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. For an overview of sub-Riemannian geometry, see <cit.>. Results in <cit.>, <cit.>, and <cit.> concerning the Lipschitz homotopy groups of the Heisenberg group ^̋1 and contact 3-manifolds rely heavily on these sub-Riemannian manifolds endowed with the metric being purely 2-unrectifiable metric spaces in the sense of <cit.>. As is shown in <cit.>, the Heisenberg group ^̋1 is purely 2-unrectifiable and, as is shown in <cit.>, any contact 3-manifold endowed with a sub-Riemannian metric is purely 2-unrectifiable. We likewise make significant use of results of purely 2-unrectifiable metric spaces to conclude the existence of length minimizers in homotopy classes. The key step to the proof of Theorem <ref> is, given a Lipschitz path α in a homotopy class [α], determining a sequence of Lipschitz paths that are uniformly Lipschitz homotopic to α and whose lengths converge to the infimum. Once such a sequence is obtained, Arzelà-Ascoli theorem yields a length minimizing path α together with a homotopy to α. 
To find this sequence of paths, we apply a lemma (Lemma <ref>) which, given any homotopy from α to any Lipschitz path β, constructs a homotopy with controlled Lipschitz constant from α to a shorter path β'. The proof of Lemma <ref> relies on a result of Wenger and Young in <cit.> concerning a factorization of Lipschitz maps with purely 2-unrectifiable target through a metric tree. Their result is stated in Theorem <ref>. We will utilize Theorem <ref> in a future paper where we will define a universal Lipschitz path space over ^̋1 and prove requisite properties. In particular, Corollary <ref> reports a necessary property to define the metric structure on the universal Lipschitz path space. Also, the existence of well-behaved homotopies shown in Lemma <ref> will be a useful tool to show that the universal Lipschitz path space satisfies the unique path lifting property. Acknowledgment. The author wishes to thank Chris Gartland for his invaluable input on this paper, in particular his suggestion to use Arzelà-Ascoli theorem. In addition to Gartland, the author would also like to thank Fedya Manin, David Ayala, Lukas Geyer, and Carl Olimb for their input, thoughts, and support as this paper came together. § BACKGROUND §.§ Homotopy, geodesics, and metric trees Throughout this paper, I=[0,1] is the closed interval endowed with the Euclidean metric. All paths will have domain I. For a metric space M and a path α:I→ M, the length of the path α is denoted M(α). For metric spaces A and M, the Lipschitz constant of a Lipschitz function f:A→ M will be denoted by (f). As we proceed, we will endow I× I with the L^1 metric: for (s,t),(s',t')∈ I× I, d^1((s,t),(s',t'))=|s-s'|+|t-t'|. The metric d^1 is Lipschitz equivalent to the Euclidean metric on I× I. Let M be a metric space. Two Lipschitz paths α,β:I⟶ M are homotopic rel endpoints if the initial points α(0)=β(0) and end points α(1)=β(1) of the paths agree and there exists a Lipschitz map H:I× I→ M such that H|_I×{0}=α, H|_I×{1}=β, H|_{0}× I=α(0), and H|_{1}× I=α(1). The map H is a homotopy from α to β. For a Lipschitz path α, the class of all Lipschitz paths homotopic rel endpoints to α is denoted [α] and is referred to as the homotopy class of α. The homotopy classes of loops based at a point x_0∈ M are the elements of the first Lipschitz homotopy group 1(M,x_0) of the metric space M. For the complete definition of Lipschitz homotopy groups and the initial study of 1(^̋1), see <cit.>. Another example of studying first Lipschitz homotopy groups of purely 2-unrectifiable metric spaces can be found in <cit.> where contact 3-manifolds are considered. Let (M,d) be a metric space. Let x,y∈ M and let α: I→ M be a path from α(0)=x to α(1)=y. The path α is arc length parametrized if for any t,t'∈ I, M(α|_[t,t'])=M(α) |t'-t|. The path α is a shortest path from x to y if M(α)=d(x,y). The path α is a geodesic from x to y if for any t,t'∈ I, d(α(t),α(t'))=d(x,y) |t'-t|. Every geodesic is a shortest path between its endpoints and is arc length parametrized. Thus, every geodesic is Lipschitz with Lipschitz constant equal to its length. We will primarily be discussing geodesics with reference to metric trees. Metric trees were originally introduced in <cit.>. For a selection of results concerning metric trees, see <cit.>, <cit.>, and <cit.>. A non-empty metric space T is a metric tree if for any x,x'∈ T, there exists a unique arc joining x and x' and there is a geodesic γ from x to x'. 
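As a concrete instance of this definition (a standard example we add here for illustration; it is not used elsewhere in the paper), the tripod obtained by gluing three unit intervals at a common endpoint is a metric tree:

```latex
T \;=\; \bigl([0,1]\times\{1,2,3\}\bigr)\big/\bigl((0,1)\sim(0,2)\sim(0,3)\bigr),
\qquad
d\bigl((s,i),(t,j)\bigr) \;=\;
\begin{cases}
|s-t|, & i=j,\\
s+t,   & i\neq j.
\end{cases}
```

Any two points of T are joined by a unique arc, passing through the branch point when they lie on different legs, and that arc, parametrized proportionally to arc length, is a geodesic.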
A subset T'⊂ T of a metric tree is a subtree if T' is a metric tree with reference to the metric on T restricted to T'. §.§ Wenger and Young's factorization through a metric tree We make significant use of the work of Wenger and Young in <cit.> to show the existence of a length minimizing representative, in particular the following factorization: Let A be a quasi-convex metric space with quasi-convexity constant C and with 1(A)=0. Let furthermore M be a purely 2-unrectifiable metric space. Then every Lipschitz map f:A→ M factors through a metric tree T, A[rr, "f"] [dr, "ψ"'] M, T [ur, "φ"'] where (ψ)=C(f) and (φ)=1. We include some details of their work presently. In the proof of Theorem <ref> in <cit.>, Wenger and Young define the following pseudo-metric on A: d_f(a,a')inf{M(f∘ c) | c is a Lipschitz path in A from a to a'} where a,a'∈ A. The metric tree is then defined as a quotient space T A/∼, where the equivalence relation is given by a∼ a' if and only if d_f(a,a')=0. The metric on T is then d_T([a],[a']) d_f(a,a'). The map ψ is the quotient map, ψ(a)=[a]. The original function f is constant on equivalence classes. As such, the map φ([a])=f(a) is well-defined. § LENGTH MINIMIZERS OF HOMOTOPY CLASSES IN PURELY 2-UNRECTIFIABLE METRIC SPACES §.§ Building a desirable homotopy For the remainder of the paper, let M be a purely 2-unrectifiable metric space with metric d. Let α and β be Lipschitz homotopic paths. Moreover, assume that (α)=M(α) via possible reparametrization. We use Theorem <ref> and the definition of the metric tree to fashion a desirable Lipschitz homotopy from the path α to a Lipschitz path β' whose length is less than or equal to the length of β and whose Lipschitz constant is bounded by the length of α. Furthermore, the desirable homotopy will have Lipschitz constant equal to the length of α. This homotopy will be used to show the existence of a length minimizer in each homotopy class in a purely 2-unrectifiable metric space. Let H:I× I→ M be a homotopy from α to β. So, H|_I×{0}=α and H|_I×{1}=β. Since I× I is a geodesic space and Lipschitz simply connected, Theorem <ref> guarantees the Lipschitz map H factors through a metric tree T: I×I[rr, "H"] [dr, "ψ"'] M, T [ur, "φ"'] where (ψ)=H and (φ)=1. Note that, since H|_{0}× I=α(0) and H|_{1}× I=α(1), the restricted maps ψ|_{0}× I=ψ(0,0) and ψ|_{1}× I=ψ(1,0) are constant. Though the map ψ is (H)-Lipschitz, the restriction of the map ψ to I×{0} is at most M(α)-Lipschitz: (ψ|_I×{0})≤M(α). Let t,t'∈ I where t<t'. Using the definition of the metric on T, d_T(ψ(t,0),ψ(t',0)) = d_T([(t,0)],[(t',0)]) = d_H((t,0),(t',0)) = inf{M(H∘ c) | c is a path from (t,0) to (t',0)}. Now, selecting the inclusion c=(𝕀,0):[t,t']↪ I× I which is a Lipschitz path from (t,0) to (t',0), yields that d_T(ψ(t,0),ψ(t',0))≤M(H∘(𝕀,0):[t,t']↪ M)=M(α|_[t,t']). Since α is M(α)-Lipschitz, we have the following string of inequalities: d_T(ψ(t,0),ψ(t',0)) ≤ M(α|_[t,t']) ≤ (α) |t-t'| = M(α) |t-t'|. When defining the new homotopy, our focus in the metric tree T will be T'(ψ|_I×{0}), the image of the restriction in Lemma <ref>, which is a subtree of T. Note that every element of the subtree T' can be written as [(t,0)] for some t∈ I. The subtree T' has finite diameter bounded by the length of the path α, as is now shown. (T')≤M(α). (T') = sup_[(t,0)],[(t',0)]∈ T' d_T([(t,0)],[(t',0)]) = sup_t,t'∈ I d_H((t,0),(t',0)) = sup_t,t'∈ I inf_c M(H∘ c) ≤ sup_t,t'∈ I M(H∘(𝕀,0):[t,t']→ M) = sup_t,t'∈ I M(α|_[t,t']) = M(α). 
Now, let γ:I→ T be the geodesic in T from ψ(0,0) to ψ(1,0). Since γ is a geodesic, for any t,t'∈ I, d_T(γ(t), γ(t'))=d_T(ψ(0,0), ψ(1,0)) |t-t'|. Define a new path β':I→ M by β'(t)=φ∘γ(t). As will now be shown, the length of β' is bounded above by the length of the path β, the image of β' is a subset of the image of α, and the Lipschitz constant for β' is bounded above by the length of the initial path α. M(β')≤M(β). If ψ(0,0)=ψ(1,0), then the geodesic γ is a constant path, as is the path β'. The desired result is then immediate. Assume ψ(0,0)≠ψ(1,0). Then the geodesic γ joining these distinct points is non-constant and injective. Let 0=t_0<t_1<t_2<…<t_n+1=1 be a partition of the interval I. Then, as will be argued below, there is a partition 0=t_0^*<t_1^*<t_2^*<…<t_n+1^*=1 such that β'(t_i)=β(t_i^*). Now γ is a geodesic from ψ(0,0)=ψ(0,1) to ψ(1,0)=ψ(1,1) and the map ψ|_I×{1} is a path in the metric tree T with the same initial and terminal points as γ. Thus, the image of γ is a subset of the image of ψ|_I×{1}. Furthermore, for each i=1,…,n, there is a time t_i^*∈ I such that ψ(t_i^*,1)=γ(t_i) and ψ(t,1)≠γ(t_i) for all t>t_i^*, that is, t_i^* is the last time the path ψ|_I×{1} visits the point γ(t_i). Thus, β'(t_i)=φ(γ(t_i))=φ(ψ(t_i^*,1))=β(t_i^*). Let i<j. Suppose t_i^*≥ t_j^*. If t_i^*=t_j^*, then γ(t_i)=γ(t_j), contradicting that the geodesic γ is injective. Assume t_i^*>t_j^*. Then, the resticted path ψ|_[t_i^*,1]×{1} begins at γ(t_i) and ends at γ(1) and therefore travels through the point γ(t_j), contradicting that t_j^* is the last time that ψ|_I×{1} visits that point γ(t_j). Therefore, t_i^*<t_j^* for all i<j. We thus have attained the desired partition. So, for each partition 0=t_0<t_1<t_2<…<t_n+1=1, there exists a partition 0=t_0^*<t_1^*<t_2^*<…<t_n+1^*=1 such that ∑_i=0^n+1d(β'(t_i),β'(t_i+1))=∑_i=0^n+1d(β(t_i^*),β(t_i+1^*)) Taking supremum over all partitions 0=t_0<t_1<t_2<…<t_n+1=1, we arrive at M(β')≤M(β). (β')⊂(α). The geodesic γ is a path from ψ(0,0) to ψ(1,0), as is the map ψ|_I×{0}. Since γ is a geodesic in the metric tree T, the image of the geodesic is a subset of the image of any path with the same initial and terminal points. Thus, (γ)⊂(ψ|_I×{0}). Therefore, (β')=(φ∘γ)⊂(φ∘ψ|_I×{0})=(α). (β')≤M(α). Let t, t'∈ I. Using that (φ)=1 as well as (<ref>) and Lemma <ref>, we have the following inequalities: d(β'(t), β'(t')) = d(φ(γ(t)), φ(γ(t'))) ≤ d_T(γ(t)), γ(t')) = d_T(ψ(0,0), ψ(1,0))| t-t'| ≤ (T')|t-t'| ≤ M(α)|t-t'|. We will now construct a homotopy H' from the initial path α to the new path β' which has Lipschitz constant (H')=M(α). Let t∈ I. There is a geodesic g_t:I→ T from g_t(0)=ψ(t,0) to g_t(1)=γ(t) where, for all s, s'∈ I, d_T(g_t(s), g_t(s')) = d_T(ψ(t,0), γ(t))|s-s'|. Since points ψ(0,0)=γ(0) are equal, the geodesic g_0(s)=ψ(0,0) is constant. Similarly, the geodesic g_1(s)=ψ(1,0) is constant. Thus, the function g:I× I→ T given by g(t,s) g_t(s) is a homotopy from path ψ|_I×{0} to geodesic γ provided g is Lipschitz. We show that g is a Lipschitz map with Lipschitz constant bounded by M(α) and that the image of g is a subset of the image (ψ|_I×{0}). (g)≤M(α). Let (t,s), (t',s')∈ I× I. First, consider d_T(g_t(s), g_t'(s)). Fix t and t'. As s∈ I varies, D(s) d_T(g_t(s), g_t'(s)) is a function from I to . By properties of metric trees, there exists s_0∈ I such that the restriction D|_[0,s_0] is decreasing and the restriction D|_[s_0,1] is increasing. Thus, the maximum of the function D occurs when s=0 or s=1. 
Now, by Lemma <ref>, D(0)   =   d_T(g_t(0), g_t'(0)) = d_T(ψ(t,0), ψ(t',0)) ≤ M(α)|t-t'|. Also, by (<ref>) and Lemma <ref>, D(1)   =   d_T(g_t(1), g_t'(1)) = d_T(γ(t), γ(t')) = d_T(γ(0), γ(1))|t-t'| ≤ (T')|t-t'| ≤ M(α)|t-t'|. Therefore, for any s∈ I, d_T(g_t(s), g_t'(s)) = D(s) ≤M(α)|t-t'|. Now, consider the value d_T(g_t(s), g_t(s')). Since γ is a geodesic from ψ(0,0) to ψ(1,0) and T'=(ψ|_I×{0})⊂ T is a subtree containing these points, γ(t)∈ T'. Thus, by (<ref>) and Lemma <ref>, d_T(g_t(s), g_t(s')) = d_T(ψ(t,0), γ(t))|s-s'| ≤ (T')|s-s'| ≤ M(α)|s-s'|. Therefore, we conclude that d_T(g_t(s), g_t'(s')) ≤ d_T(g_t(s), g_t'(s)) + d_T(g_t'(s), g_t'(s')) ≤ M(α)|t-t'| + M(α)|s-s'| = M(α) d^1((t,s), (t',s')). (g)⊂(ψ|_I×{0}). Let (t,s)∈ I× I. The path g_t is a geodesic from ψ(t,0) to γ(t). Since γ is a geodesic from ψ(0,0) to ψ(1,0) and T'=(ψ|_I×{0}) is a subtree containing these points, γ(t)∈ T'. Thus, since g_t is a geodesic between points ψ(t,0), γ(t)∈ T' and T' is a subtree, g_t(s)∈ T' for all s∈ I. Thus, (g)⊂(ψ|_I×{0}). We are now ready to define the new homotopy H':I× I→ M by H'(t,s)φ∘ g(t,s). The function H' is indeed a homotopy from α to β' as [ H'(t,0) = φ(g_t(0)) = φ(ψ(t,0)) = α(t),; H'(t,1) = φ(g_t(1)) = φ(γ(t)) = β'(t),; H'(0,s) = φ(g_0(s)) = φ(ψ(0,0)) = α(0),; H'(1,s) = φ(g_1(s)) = φ(ψ(1,0)) = α(1). ] Moreover, since (φ)=1 and, by Lemma <ref>, for (s,t), (s',t')∈ I× I, d(H'(t,s), H'(t',s')) = d(φ(g(t,s)), φ(g(t',s'))) ≤ d_T(g(t,s), g(t',s')) ≤ M(α) d^1((t,s), (t',s')). Therefore, (H')≤M(α). In fact, since (H|_I×{0})=(α)=M(α), we have that (H')=M(α). Also, an immediate consequence of Lemma <ref> is that (H')⊂(α). We have thus defined a Lipschitz homotopy with all of the desired properties, which are collected in the following lemma. Let M be a purely 2-unrectifiable space. Given Lipschitz paths α:I→ M and β:I→ M that are homotopic rel endpoints, there exists a Lipschitz map H':I× I→ M and a Lipschitz path β':I→ M such that: * the map H' is a homotopy from α to β', * (H')=M(α), * (H')⊂(α), * (β')≤M(α), * (β')⊂(α), and * M(β')≤M(β). §.§ Finding the length minimizer of a homotopy class We now prove the primary result of the paper: the existence of a length minimizer in any homotopy class of paths in a purely 2-unrectifiable metric space. We use Lemma <ref> to fashion a sequence of Lipschitz paths in a given homotopy class, as well as associated homotopies, that have a uniform bound on their Lipschitz constants and then apply Arzelà-Ascoli theorem to find the length minimizer. Let M be a purely 2-unrectifiable metric space. For any homotopy class [α] of Lipschitz paths in M, there exists a length minimizing Lipschitz path α∈[α] where M(α)=inf{M(α) | α∈[α]}. Let M be a purely 2-unrectifiable metric space. Let [α] be a homotopy class of Lipschitz paths and define inf{M(α) | α∈[α]} to be the infimum of all lengths of paths in [α]. Select a Lipschitz path α_1∈[α] such that M(α_1)≤+1. Via possible reparametrization, the Lipschitz constant of α_1 can be assumed to be equal to its length: (α_1)=M(α_1). Note also that the image of α_1 is compact. For each subsequent natural number n, let α_n∈[α] be a Lipschitz path such that M(α_n)≤+1/n. Furthermore, via Lemma <ref>, we can assume that (α_n)≤M(α_1) and (α_n)⊂(α_1). Additionally, there is a homotopy H_n:I× I→ M from α_1 to α_n such that (H_n)≤M(α_1) and (H_n)⊂(α_1). Now, for any n∈, (α_n)≤+1. 
Since the images of the paths in the sequence (α_n) are subsets of the compact set (α_1), by Arzelà-Ascoli theorem, there exists a subsequence (α_n_k) that uniformly converges to a Lipschitz path α. By lower semi-continuity of the length measure, M(α)≤lim inf_kM(α_n_k). In fact, due to how the sequence (α_n) was selected, M(α)≤. We now want to show that α∈[α]. Associated to the subsequence (α_n_k), there is a sequence of homotopies (H_n_k) such that (H_n_k)≤+1 for each homotopy in the sequence. Since (H_n_k)⊂(α_1) for each n_k and (α_1) is compact, by Arzelà-Ascoli theorem, there exists a subsequence (H_n_k_j) that converges uniformly to a Lipschitz map H:I× I→ M. Now, H|_I×{0}=α_1 since H_n_k_j|_I×{0}=α_1 for all n_k_j. Also, since the paths H_n_k_j|_I×{1}=α_n_k_j converge uniformly to α, then H|_I×{1}=α. So, the map H is a homotopy from α_1 to α. Therefore, α∈[α] and thus M(α)=. §.§ Consequences of the existence of a length minimizer A length minimizer α∈[α] can be thought of as the core of the homotopy class [α] where the extraneous branches of the paths in the class have been pruned in the sense that the image of α is a subset of the image of any path contained in [α], as is now shown. A consequence of Theorem <ref> is that a length minimzer for a homotopy class is unique up to reparametrization. Let M be a purely 2-unrectifiable metric space and let [α] be a homotopy class of Lipschitz paths in M with length minimizer α∈[α]. Additionally, assume that the length minimzer α is arc length parametrized. Let α:I→ M be a Lipschitz path that is homotopic rel endpoints to α. Then the Lipschitz path α' produced by Lemma <ref> is equal to the length minimizer α. Furthermore, the image of a length minimizer α is a subset of the image of α, that is, (α)⊂(α). Let H:I× I→ M be a Lipschitz homotopy from α to α. By Theorem <ref>, the map H factors through a metric tree T: I×I[rr, "H"] [dr, "ψ"'] M. T [ur, "φ"'] We now show that the path ψ|_I×{1} in the metric tree T is the geodesic from ψ(0,1) to ψ(1,1). Let t,t'∈ I where t<t' and let c be a Lipschitz path in I× I from (t,1) to (t',1). Since c is homotopic to the inclusion (𝕀,1):[t,t']↪ I× I, the paths H∘ c and H∘(𝕀,1)=α|_[t,t'] are homotopic. Since α is the length minimizer for [α], the restriction α|_[t,t'] is also a length minimizer in its homotopy class. Thus, M(α|_[t,t'])≤M(H∘ c). Therefore, by the definition of the metric on T, d_T(ψ(t,1),ψ(t',1))=M(α|_[t,t']) and in particular, d_T(ψ(0,1),ψ(1,1))=M(α). Now, since α is arc length parametrized, d_T(ψ(t,1),ψ(t',1)) = M(α|_[t,t']) = M(α) |t'-t| = d_T(ψ(0,1),ψ(1,1)) |t'-t|. Therefore, the path ψ|_I×{1} is indeed the geodesic from ψ(0,1) to ψ(1,1). From the argument of Lemma <ref>, the path α' is equal to the geodesic in T from ψ(0,0)=ψ(0,1) to ψ(1,0)=ψ(1,1) post-composed by φ. As the geodesic in discussion is ψ|_I×{1}, we have that for all t∈ I, α(t)=φ∘ψ(t,1)=α'(t). That (α)⊂(α) follows quickly from α factoring through a geodesic segment. Indeed, for t∈ I, the point ψ(t,1)∈ T is in the geodesic segment (ψ|_I×{1}) connecting ψ(0,0)=ψ(0,1) to ψ(1,0)=ψ(1,1). As the image (ψ|_I×{0})⊂ T is a subtree containing these points, the geodesic segement (ψ|_I×{1})⊂(ψ|_I×{0}) is a subset of the subtree. Hence, there exists t'∈ I such that ψ(t,1)=ψ(t',0). Therefore, α(t)=φ∘ψ(t,1)=φ∘ψ(t',0)=α(t'). Thus, (α)⊂(α) as desired. 
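As a purely computational aside (this illustration is ours and plays no role in the argument above), the sup-over-partitions length M(·) used repeatedly in the preceding proofs can be approximated numerically for a concrete Lipschitz path in the Euclidean plane. The sketch below assumes a path given as a function on [0,1]; the function names are ours. The triangle inequality makes the polygonal sums monotone under refinement of the partition, which is the mechanism behind taking suprema in comparisons such as M(β') ≤ M(β).

```python
# Minimal illustration (ours, not from the paper): length of a path as the
# supremum of polygonal sums over partitions of [0, 1].
import math

def polygonal_sum(path, partition):
    """Sum of distances d(path(t_i), path(t_{i+1})) over a partition of [0, 1]."""
    pts = [path(t) for t in partition]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def length_estimate(path, n):
    """Polygonal sum over the uniform partition with n subintervals.
    By the triangle inequality the sum can only increase when a partition is
    refined, so fine partitions approximate the length M(path) from below."""
    return polygonal_sum(path, [i / n for i in range(n + 1)])

# Example: a quarter circle of radius 1, whose length is pi/2.
quarter_circle = lambda t: (math.cos(math.pi * t / 2), math.sin(math.pi * t / 2))
print(length_estimate(quarter_circle, 4), length_estimate(quarter_circle, 1000))
```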
Of note, in the proof of Theorem <ref> we have shown that given an arc length parametrized length minimizer and any homotopy of the length minimizer, the length minimizer factors through a geodesic segment in the metric tree generated by the homotopy via Theorem <ref>. An immediate consequence of Theorem <ref> is that for every point in a purely 2-unrectifiable metric space, only the trivial class in the first Lipschitz homotopy group can be represented by a loop within each neighborhood of the point, as is now shown. Let M be a purely 2-unrectifiable metric space and let x∈ M. Let [α]∈1(M,x) be a homotopy class of loops based at x such that for every open neighborhood U⊂ M of the point x, there exists a Lipschitz loop α_U∈[α] based at x whose image is contained in U. Then [α] is the trivial homotopy class, [α]=[x]. Let [α]∈1(M,x) be a homotopy class of loops based at x such that for every open neighborhood U⊂ M of the point x, there exists a Lipschitz loop α_U∈[α] based at x whose image is contained in U. Then, by Theorem <ref>, [α] has a length minimizer α and, by Theorem <ref>, the image of the length minimizer α is a subset of every neighborhood U of x. Therefore, α is the constant loop at x and thus [α]=[x]. In the language of <cit.>, every point in a purely 2-unrectifiable metric space is non-singular. The harmonic archipelago is an instructive example of a space wherein not every point is non-singular. See Example 1.1 in <cit.>. In order to define the lifted metric on the universal Lipschitz path space, it is necessary that for every point in the underlying metric space, only the trivial class in the first Lipschitz homotopy group can be represented by a loop within each neighborhood of the point. As such, Corollary <ref> is necessary when discussing the universal Lipschitz path space of a purely 2-unrectifiable metric space such as the Heisenberg group ^̋1.
http://arxiv.org/abs/2306.02988v1
20230605160052
Scaling limits of planar maps under the Smith embedding
[ "Federico Bertacco", "Ewain Gwynne", "Scott Sheffield" ]
math.PR
[ "math.PR", "math-ph", "math.MP" ]
The Smith embedding of a finite planar map with two marked vertices, possibly with conductances on the edges, is a way of representing the map as a tiling of a finite cylinder by rectangles. In this embedding, each edge of the planar map corresponds to a rectangle, and each vertex corresponds to a horizontal segment. Given a sequence of finite planar maps embedded in an infinite cylinder, such that the random walk on both the map and its planar dual converges to Brownian motion modulo time change, we prove that the a priori embedding is close to an affine transformation of the Smith embedding at large scales. By applying this result, we prove that the Smith embeddings of mated-CRT maps with the sphere topology converge to γ-Liouville quantum gravity (γ-LQG). § INTRODUCTION §.§ Motivation Over the past few decades, there has been a large amount of interest in the study of random planar maps, i.e., graphs embedded in the plane viewed modulo orientation-preserving homeomorphisms. Since the foundational work of Polyakov in the context of bosonic string theory <cit.>, it has been believed that various types of random planar maps converge, in various topologies, to limiting random surfaces called Liouville quantum gravity (LQG) surfaces. The rigorous mathematical study of LQG has been explored, e.g., in works by Duplantier and Sheffield <cit.> and Rhodes and Vargas <cit.>.
Roughly speaking, LQG surfaces can be thought of as random two-dimensional Riemannian manifolds parameterized by a fixed underlying Riemann surface, indexed by a parameter γ∈ (0,2]. These surfaces are too rough to be Riemannian manifolds in the literal sense, but one can still define, e.g., the associated volume form (area measure) and distance function (metric) via regularization procedures <cit.>. Many properties of the γ-LQG area measure are well-known <cit.>. One way of formulating the convergence of random planar maps toward LQG surfaces is to consider so-called discrete conformal embeddings of the random planar maps. Here, a discrete conformal embedding refers to a particular way of “drawing” the map in the plane, which is in some sense a discrete analog of the Riemann mapping. Suppose we have a random planar map with n vertices, along with a discrete conformal embedding of the map that maps each vertex to a point in . This embedding creates a natural measure on the plane, with each vertex given a mass of 1/n. In many settings, it is natural to conjecture that as n tends to infinity, the measure should converge weakly to the γ-LQG area measure, with the parameter γ depending on the particular planar map model under consideration. Additionally, the random walk on the embedded map is expected to converge in law to two-dimensional Brownian motion modulo time parameterization (more precisely, the parameterized walk should converge to the so-called Liouville Brownian motion <cit.>). Several precise scaling limit conjectures for random planar maps toward LQG surfaces were formulated, e.g., in <cit.>. However, this very general convergence ansatz has only been rigorously proven in a few specific settings (see below). One of the challenges in formulating a general scaling limit result for the embedding of random planar maps is the existence of numerous discrete conformal embeddings that could be regarded as natural in some sense. We collect here some of the most commonly employed discrete conformal embeddings. * The circle packing (see <cit.> for a review), which represents the map as the tangency graph of a collection of non-overlapping circles[We refer to <cit.> for a proof of the fact that the circle packing for lattice approximations of planar domains gives an approximation of the Riemann mapping.]. * The Smith embedding (a.k.a. rectangle packing), which will be the focus of the present paper, was introduced by Brooks, Smith, Stone, and Tutte in <cit.>. It is another popular method of embedding planar graphs, and it is defined by means of a rectangle tiling of either a cylinder or a rectangle, and in which vertices of the planar map correspond to horizontal segments in the Smith embedding, and edges of the planar map correspond to rectangles in the Smith embedding[We refer to <cit.> for a proof of the fact that the Smith embedding for fine-mesh lattice graphs gives an approximation of the Riemann mapping.]. Several papers have studied properties of the Smith embedding of planar maps <cit.>. * Other examples of discrete conformal embeddings include the Tutte embedding <cit.>, the Cardy–Smirnov embedding <cit.>, and the Riemann uniformization embedding, obtained by viewing the planar map as a piecewise flat two-dimensional Riemannian manifold where the faces are identified with unit side length polygons. Some cases of the aforementioned conjecture, that LQG describes the scaling limit of random planar maps under discrete conformal embeddings, have been proven. 
For example, in <cit.>, Gwynne, Miller, and Sheffield established the convergence to γ-LQG under the Tutte embedding for a one-parameter family of random planar maps defined using pairs of correlated Brownian motions, known as the mated-CRT maps (see below for a definition of this family of random planar maps). Moreover, in <cit.>, the same authors proved that the Tutte embedding of the Poisson Voronoi tessellation of the Brownian disk converges to √(8/3)-LQG. In <cit.>, Holden and Sun proved that the scaling limit of uniformly sampled triangulations under the Cardy–Smirnov embedding converges to √(8/3)-LQG. Finally, let us also mention that in <cit.>, the authors studied the circle-packing of the mated-CRT map and showed that there are no macroscopic circles in the circle packing of this random planar map. Roughly speaking, the main goal of this paper is to provide a general convergence result for the Smith embedding of planar maps, which works whenever random walk on both the map and its dual approximate Brownian motion. §.§ Main result The main result of this paper concerns the scaling limit of general (random) planar maps under the Smith embedding. More precisely, we will show that for a sequence of finite planar maps satisfying an invariance principle assumption both on the map and on its dual, the a priori embedding is close to an affine transformation of the Smith embedding at large scales. We will then apply this result to prove the convergence of the Smith embeddings of mated-CRT maps to γ-LQG. One advantage of the version of the Smith embedding considered in this paper is that its definition is particularly natural for random planar maps without boundary. This is in contrast to other embeddings under which random planar maps have been shown to converge to LQG (such as the Tutte embedding <cit.> and the Cardy–Smirnov embedding <cit.>) which are most naturally defined for planar maps with boundary. Throughout the paper, we always write weighted planar map for a planar map with edge conductances. Moreover, throughout the article, we allow all of our planar maps to have multiple edges and self-loops. In order to state our main theorem, we need to introduce some notation. Given a planar graph , we denote the sets of vertices, edges, and faces of by , , and , respectively. We consider a doubly marked finite weighted planar map (, c, v_0, v_1), where v_0, v_1 ∈ are the two marked vertices, and c={c_e}_e∈ is a collection of positive unoriented weights called conductances. We assume that we are given a proper embedding of the map in the infinite cylinder _2π := /2π× in the sense of the following definition. An embedding of the quadruplet (, c, v_0, v_1) in the infinite cylinder _2π is said to be proper if: * the edges in are continuous and do not cross; * the graph is connected; * the two marked vertices v_0 and v_1 are mapped to -∞ and +∞, respectively, and they do not lie on the boundary of the same face. We observe that, if (, c, v_0, v_1) is properly embedded in _2π, then the set of vertices is contained in _2π; each edge in is a curve in _2π that does not cross any other edge, except possibly at its endpoints; and each face in is a connected component in _2π of the complement of the embedded graph . Since the two marked vertices are mapped to ±∞, this implies that there is an infinite face at each end of the cylinder _2π. In what follows, we use the convention to identify each vertex in with its a priori embedding, i.e., if x ∈ then we view x as a point in _2 π. 
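Since vertices are identified with points of the cylinder _2π, and the main theorem below is stated in terms of the Euclidean distance on _2π, the following small sketch records one concrete way such a distance can be computed. This is purely illustrative and is not taken from the paper; the function and variable names are ours, and a point of the cylinder is stored as a pair (width coordinate modulo 2π, height coordinate).

```python
# Illustrative only (our names): points of the infinite cylinder C_{2*pi} are
# stored as (theta, h) with theta taken modulo 2*pi.  The distance below is the
# length of the shortest path on the flat cylinder, combining the wrap-around
# angular gap with the vertical gap.
import math

def cylinder_dist(p, q, circumference=2 * math.pi):
    (t1, h1), (t2, h2) = p, q
    dt = abs((t1 - t2) % circumference)
    dt = min(dt, circumference - dt)   # wrap-around in the width coordinate
    return math.hypot(dt, h1 - h2)

# Example: two points on opposite sides of the seam theta = 0 are close.
print(cylinder_dist((0.1, 0.0), (2 * math.pi - 0.1, 0.5)))
```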
Furthermore, for a set K⊂_2π, we write (K) := ł{ x∈ : x∈ K }̊ . We denote by (, ĉ) the weighted dual planar graph associated to (, c), where the conductance ĉ_ê of a dual edge ê ∈ is equal to the resistance of the corresponding primal edge e ∈, i.e., we set ĉ_ê := 1/c_e. We assume that (, ĉ) is properly embedded in the infinite cylinder _2π in the sense of the following definition. An embedding of the weighted dual planar graph (, ĉ) associated to (, c) in the infinite cylinder _2π is said to be proper if: * every vertex of is contained in a face of ; * every edge e of is crossed by a single edge ê of which joins the two faces incident to e. If an edge e ∈ is oriented, the orientation of the corresponding dual edge ê∈ can be obtained by rotating e counter-clockwise. As for the primal graph, given a set K ⊂_2π, we write (K) := ł{x̂∈ : x̂∈ K }̊ . §.§.§ Smith embedding We are now ready to provide a somewhat informal description of the Smith embedding of a given doubly marked finite weighted planar map (, c, v_0, v_1). The precise definition will be given in Section <ref>. As mentioned earlier, the Smith embedding of a planar map was first introduced by Brooks, Smith, Stone, and Tutte in <cit.>, and later generalized to infinite planar graphs by Benjamini and Schramm in <cit.>. The Smith embedding of the quadruplet (, c, v_0, v_1) is constructed by means of a tiling by rectangles of a finite cylinder _η := /η× [0, 1], where η is a positive number which depends on (,c) to be specified later. Each vertex x ∈ is represented by a horizontal line segment _x, each edge e ∈ by a rectangle _e, and each dual vertex x̂∈ by a vertical line segment _x̂. In particular, since each edge e ∈ corresponds to a rectangle in the tiling, we need to specify four coordinates for each edge. This is done by means of the voltage function : → [0, 1] and its discrete harmonic conjugate : →/η. The function is the unique function on which is discrete harmonic on ∖{v_0,v_1} (with respect to the conductances c) with boundary conditions given by (v_0) = 0 and (v_1) = 1, i.e., (x) = _xł(X hits v_1 before v_0)̊, ∀ x ∈, where _x denotes the law of random walk X^x on (,c) started from x. We refer to Subsection <ref> for more details. The function is the function on that satisfies the discrete Cauchy–Riemann equation associated to , i.e., it is the function on the set of dual vertices whose difference at the endpoints of each edge of is equal to the difference of at the endpoints of the corresponding primal edge times its conductance. As we will see in Subsection <ref>, the function is only defined modulo η and modulo an additive constant that can be fixed by imposing that is equal to zero on a chosen dual vertex. In particular, the choice of the additive constant of fixes the rotation of the cylinder in which the tiling takes place. Now, we can specify the various objects involved in the definition of the Smith embedding. * For each edge e ∈, the rectangle _e corresponds to the rectangle on _η such that the height coordinates of the top and bottom sides are given by the values of at the endpoints of e, and the width coordinates of the left and right sides of _e are given by the values of at the endpoints of the corresponding dual edge ê. * For each vertex x ∈, the horizontal segment _x corresponds to the maximal horizontal segment which lies on the boundaries of all the rectangles corresponding to the edges incident to x. 
* For each dual vertex x̂∈, the vertical segment _x̂ corresponds to the maximal vertical segment which is tangent with all rectangles corresponding to primal edges surrounding x̂. We call the map : ∪∪→_η such that (e) := _e, (x) := _x, and (x̂) := _x̂ the tiling map associated to the quadruplet (, c, v_0, v_1). We refer to Figure <ref> for a diagrammatic illustration of the tiling map associated to a given quadruplet (, c, v_0, v_1). We define the Smith embedding associated to the quadruplet (, c, v_0, v_1) as the function from to /η× [0, 1] given by (x) := (_x), ∀ x ∈, where (_x) corresponds to the middle point of the horizontal line segment _x[This definition of the Smith embedding is somewhat arbitrarily. Indeed, for each x ∈, one can define (x) to be any arbitrary point inside the horizontal segment _x.]. We refer to Subsection <ref> for precise definitions. §.§.§ Assumptions and statement of the main result To state our main result, we need to consider a sequence of doubly marked finite weighted planar maps ł{(^n, c^n, v_0^n, v_1^n)}̊_n ∈, and the sequence of associated weighted dual planar graphs {(^n, ĉ^n)}_n ∈. We make the following assumptions. * (Cylindrical embedding) For each n ∈, the quadruplet (^n, c^n, v_0^n, v_1^n) is properly embedded in the infinite cylinder _2π in the sense of Definition <ref>. Furthermore, the associated weighted dual planar graph (^n, ĉ^n) is also properly embedded in _2π in the sense of Definition <ref>. * (Invariance principle on the primal graphs) For each n ∈, view the embedded random walk on (^n, c^n), stopped when it hits either v_0^n or v_1^n, as a continuous curve in _2π obtained by piecewise linear interpolation at constant speed. For each compact subset K ⊂_2π and for any z ∈ K, the law of the random walk on (^n, c^n) started from the vertex x_z^n ∈^n nearest to z weakly converges as n →∞ to the law of the Brownian motion on _2π started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈ K. * (Invariance principle on the dual graphs) For each n ∈, view the embedded random walk on (^n, ĉ^n) as a continuous curve in _2π obtained by piecewise linear interpolation at constant speed. For each compact subset K ⊂_2π and for any z ∈ K, the law of the random walk on (^n, ĉ^n) started from the vertex x̂_z^n ∈^n nearest to z weakly converges as n →∞ to the law of the Brownian motion on _2π started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈ K. In what follows, given a point x ∈_2π, we write (x) ∈ [0, 2π) for its horizontal coordinate and (x) ∈ for its height coordinate. Similarly, if x ∈_η, then (x) ∈ [0, η) denotes its horizontal coordinate and (x) ∈ [0, 1] denotes its height coordinate. We are now ready to state our main theorem. Consider a sequence {(^n, c^n, v_0^n, v_1^n)}_n ∈ of doubly marked finite weighted planar maps and let {(^n, ĉ^n)}_n ∈ be the sequence of associated weighted dual planar graphs. Assume that assumptions <ref>, <ref>, <ref> are satisfied. For each n ∈, let _n : ^n →_η_n be the Smith embedding associated with the quadruplet (^n, c^n, v_0^n, v_1^n) as specified in (<ref>). 
There exist sequences {c^_n}_n ∈, {b^_n}_n ∈, {b^_n}_n ∈⊂ such that, if we let T_n : _η_n→_2π be the affine transformation of the form (T_n x) := (2π/η_n(x) + b_n^) 2π and (T_n x) := c_n^(x) + b^_n, ∀ x ∈_η_n, then, for all compact sets K ⊂_2π, it holds that lim_n →∞sup_x ∈^n(K)_̣2πł(T_n _n(x), x)̊ = 0, where _̣2π denotes the Euclidean distance on the cylinder _2π. Theorem <ref> tells us that in order to say that the Smith embedding of (^n, c^n, v_0^n, v_1^n) is close to a given a priori embedding (up to translation and scaling), we only need to know a certain invariance principle for random walk under the a priori embedding. This result is in some ways not surprising, since it is natural to expect that if a simple random walk (and its dual) approximate Brownian motion, then discrete harmonic functions (and their conjugate duals) should approximate continuum harmonic functions. However, showing that this is actually true in the limit, and that the convergence is to the right continuum harmonic functions, will require some new coupling tricks and some careful boundary behavior estimates, which we hope may prove useful in other settings as well. More precisely, the particular statement we obtain is far from obvious a priori, for two main reasons. * Our hypotheses only concern the macroscopic behavior of the random walk on (^n,c^n) in the bulk of the cylinder. We do not need any hypotheses about how the random walk behaves when it gets close to the marked vertices v_0^n and v_1^n. This may seem surprising at first glance since one could worry, e.g., that the structure of (^n,c^n) in small neighborhoods of v_0^n and v_1^n makes it much easier for random walk to hit v_0^n than for it to hit v_1^n, and so the height coordinate function ^n is close to zero on all of ^n. What allows us to get around this is the scaling and translation sequences {c^_n}_n ∈ and {b^_n}_n ∈. We refer to Subsection <ref> for more details. * The width coordinate ^n is discrete harmonic but does not admit a simple direct description in terms of the random walk on (^n,ĉ^n). For this reason, a fair amount of work is required to get from the invariance principle for this random walk to a convergence statement for ^n. We refer to Subsection <ref> for more details. We remark that the Smith embedding can be very far from the identity near the ends of the cylinder: what is interesting, and perhaps surprising, is the generality in which we can show that the “bad behavior” gets “smoothed out” in the middle of the cylinder. This is apparent in the simulations presented in Figure <ref>. As we will discuss in the next section, one application of Theorem <ref> is the convergence of the mated-CRT map with the sphere topology to LQG under the Smith embedding. More generally, Theorem <ref> reduces the problem of proving the convergence to LQG under the Smith embeddings for other types of random planar maps to the problem of finding some a priori embeddings of the map and its dual under which the counting measure on vertices converges to the LQG measure and the random walk on the map converges to Brownian motion modulo time parameterization. §.§ Application to the mated-CRT map Mated-CRT maps are a one-parameter family of random planar maps constructed and studied, e.g., in <cit.>. The mated-CRT maps are parameterized by a real parameter γ∈ (0, 2) and are in the universality class of γ-LQG. In this paper, we will be interested in mated-CRT maps with the sphere topology. 
For each n ∈ and γ∈ (0, 2), the n-mated-CRT map with the sphere topology is the random planar triangulation ^n with vertex set given by ^n := 1/nℤ∩ (0, 1], and an edge set defined by means of a condition involving a pair of linear Brownian motions. More precisely, consider a two-dimensional Brownian motion (L, R) with covariance matrix given by Var(L_t) = Var(R_t) = |t|, Cov(L_t, R_t) = -cos(πγ^2/4)|t|, and conditioned to stay in the first quadrant for one unit of time and end up at (0, 0), i.e., (L, R) is an excursion. Then, two vertices x_1, x_2 ∈^n are connected by an edge if and only if either max{inf_t ∈ [x_1 - 1/n, x_1] L_t, inf_t ∈ [x_2 - 1/n, x_2] L_t } ≤inf_t ∈ [x_1, x_2 - 1/n] L_t, or max{inf_t ∈ [x_1 - 1/n, x_1] R_t, inf_t ∈ [x_2 - 1/n, x_2] R_t } ≤inf_t ∈ [x_1, x_2 - 1/n] R_t. The vertices in ^n are connected by two edges if |x_1 - x_2| ≠ 1/n and both the conditions in (<ref>) hold. We observe that the condition for L in (<ref>) is equivalent to the existence of a horizontal line segment below the graph of L whose endpoints are of the form (t_1,L_t_1) and (t_2,L_t_2) for t_1 ∈ [x_1 - 1/n, x_1] and t_2 ∈ [x_2 - 1/n,x_2], and similarly for R. This allows us to give an equivalent, more geometric, version of the definition of ^n. In particular, this procedure assigns a natural planar map structure to the mated-CRT map ^n, under which it is a triangulation. We refer to Figure <ref> for a diagrammatic explanation of this procedure. In <cit.>, Gwynne, Miller, and Sheffield proved that the Tutte embeddings of mated-CRT maps with the disk topology converge to γ-LQG. Thanks to our main theorem, we can prove an analogous result for the Smith embeddings of mated-CRT maps with the sphere topology. More precisely, for each n ∈, pick two marked vertices v_0^n, v_1^n ∈^n. Then, we can conformally map the sphere into the infinite cylinder _2π so that the marked points are mapped to ±∞, and (^n, v_0^n, v_1^n)[Here, each edge in ^n has unit conductance and so we do not specify the sequence of weights c as in the general case.] is properly embedded in _2π. Fix γ∈ (0, 2) and let {(^n, v_0^n, v_1^n)}_n ∈ be the sequence of doubly marked n-mated-CRT maps with the sphere topology embedded in _2π as specified above. There exists a sequence of random affine transformations {T_n}_n ∈ from _η_n to _2π of the form specified in the statement of Theorem <ref> such that, if we let μ_n be the push-forward with respect to the mapping z ↦ T_n z of the counting measure on the set _n(^n) scaled by 1/n, then we have the following convergences in probability as n →∞. * On each compact subset of _2π, the measure μ_n weakly converges to the γ-LQG measure associated to a doubly marked unit area quantum sphere parameterized by _2 π in such a way that its marked points are at ±∞, as defined in <cit.>. * On each compact subset of _2π, the image under the mapping z ↦ T_n z of the space-filling path on the Smith-embedded mated-CRT map on _η_n obtained from the left-right ordering of the vertices converges uniformly with respect to the two-point compactification topology on the cylinder to space-filling SLE_κ on _2π, with κ = 16/γ^2, parameterized by γ-LQG mass. * For z ∈_2π, let x_z^n ∈^n be the vertex nearest to z.
The conditional law given ^n of the image under the mapping z ↦ T_n z of the simple random walk on the Smith-embedded mated-CRT map started from ^n(x_z^n) and stopped when it hits one of the horizontal segments associated to the marked vertices weakly converges to the law of Brownian motion on _2 π started from z, modulo time parameterization and uniformly over all z in a compact subset of _2π. To conclude, let us point out that item <ref> of Theorem <ref> solves <cit.> for the case of mated-CRT maps. §.§ Outline Most of the paper is dedicated to proving Theorem <ref>, and it is organized as follows. In the first part of Section <ref>, we provide some background material on weighted planar graphs and the theory of electrical networks. We then move on to the precise construction of the tiling map and the definition of the Smith embedding in Subsection <ref>. As mentioned earlier, this is achieved by introducing two harmonic maps: one on the planar map itself and one on the associated dual planar map. In Section <ref>, we state and prove several properties of the Smith embedding. The most significant result of this section is Lemma <ref>, which heuristically states that the conditional expected horizontal winding of the Smith-embedded random walk given the vertical coordinate of the walk is equal to zero. This property plays a key role in proving our main theorem. In order to prove this intermediate result, we rely on Lemma <ref>, which essentially states that the conditional probability, given the vertical coordinate of the walk, that Smith-embedded random walk hits a certain horizontal line segment is proportional to its width. We point out that a similar result, but without conditioning on the vertical component of the walk, has been obtained by Georgakopoulos <cit.> in the setting of infinite weighted planar graphs. Section <ref> is the core of this article and contains the proof of Theorem <ref>, which can be divided into two main blocks: in Subsection <ref> we study the height coordinate function, and in Subsection <ref> we study the width coordinate function. Specifically, the main result of Subsection <ref> is Proposition <ref>, which roughly states that the height coordinate of the a priori embedding is asymptotically close to an affine transformation of the height coordinate of the Smith embedding. Similarly, the main result of Subsection <ref> is Proposition <ref>, which states the analogous fact for the width coordinate. We refer to Subsections <ref> and <ref> for the proof outlines of the height and width coordinate results, respectively. Finally, in Subsection <ref>, we show how to combine the results for the height coordinate and width coordinate to prove Theorem <ref>. In Section <ref>, we provide a brief introduction to the relationship between mated-CRT maps and LQG. We then demonstrate in Subsection <ref> that this family of random planar maps satisfies the assumptions of our main result. Specifically, in Subsection <ref>, we apply our result to show that the scaling limit of mated-CRT maps is γ-LQG, thereby proving Theorem <ref>. F.B. is grateful to the Royal Society for financial support through Prof. M. Hairer's research professorship grant RP\R1\191065. E.G. was partially supported by a Clay research fellowship. Part of this work was carried out during the Probability and Mathematical Physics ICM satellite conference at Helsinki in Summer 2022. We thank the organizers of this conference for their hospitality. 
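Before moving on to the background material, we include a schematic illustration of the adjacency rule used to define the mated-CRT map in the previous subsection. The sketch below is ours and is deliberately simplified: the Brownian pair is not conditioned to be an excursion in the first quadrant (the conditioning is what produces the sphere topology, but it is not needed to illustrate the rule itself), the infima are taken over a finite discretization, and all names are hypothetical.

```python
# Rough illustrative sketch (ours): adjacency condition of the mated-CRT map,
# using an *unconditioned* correlated Brownian pair as a stand-in for the
# excursion (L, R), and m sample points per interval of length 1/n.
import numpy as np

def sample_paths(n, m, gamma, rng):
    steps = n * m
    cov = np.array([[1.0, -np.cos(np.pi * gamma**2 / 4)],
                    [-np.cos(np.pi * gamma**2 / 4), 1.0]]) / steps
    inc = rng.multivariate_normal([0.0, 0.0], cov, size=steps)
    LR = np.vstack([[0.0, 0.0], np.cumsum(inc, axis=0)])
    return LR[:, 0], LR[:, 1]          # L and R sampled at times k/(n*m)

def adjacent(k1, k2, path, m):
    """Adjacency rule for vertices x_i = i/n with k1 < k2, for one of the two
    coordinates: the larger of the infima over the two cells [x_i - 1/n, x_i]
    must not exceed the infimum over the middle interval [x_{k1}, x_{k2} - 1/n]."""
    cell = lambda k: path[(k - 1) * m: k * m + 1]
    middle = path[k1 * m: (k2 - 1) * m + 1]
    return max(cell(k1).min(), cell(k2).min()) <= middle.min()

n, m, gamma = 50, 20, np.sqrt(2.0)
rng = np.random.default_rng(0)
L, R = sample_paths(n, m, gamma, rng)
edges = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
         if adjacent(i, j, L, m) or adjacent(i, j, R, m)]
print(len(edges))
```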
§ BACKGROUND AND SETUP §.§ Basic definitions §.§.§ Basic notation We write for the set of positive integers and _0 for the set of non-negative integers. Given n ∈ and j ∈_0, we let [n] := {1, …, n} and [n]_j := {j, …, n}, Furthermore, for n ∈ and j ∈_0, we write [x_n] to denote the collection of objects {x_1, …, x_n} and [x_n]_j to denote the collection of objects {x_j, …, x_n}. If a and b are two real numbers, we write a ≲ b if there is a constant C > 0, independent of the values of a or b and certain other parameters of interest, such that a ≤ C b, and we highlight the dependence of the implicit constants when necessary. If a and b are two real numbers depending on a variable x, we write a = o_x(b) if a/b tends to 0 as x →∞. We use the convention to identify ^2 with . In particular, given a point x ∈^2, we write (x) (resp. (x)) for its horizontal (resp. vertical) coordinate. §.§.§ Metric on curves modulo time parameterization For T_1, T_2 > 0, let P_1 : [0,T_1] →^2 and P_2 : [0,T_2] →^2 be two continuous curves defined on possibly different time intervals. We define ł(P_1,P_2)̊ := inf_ϕsup_t∈ [0,T_1]ł|P_1(t) - P_2(ϕ(t))|̊, where the infimum is taken over all increasing homeomorphisms ϕ : [0,T_1] → [0,T_2]. It is known that induces a complete metric on the set of curves viewed modulo time parameterization (see <cit.>). For curves defined for infinite time, it is convenient to have a local variant of the metric . Assume that P_1 : [0,∞) →^2 and P_2 : [0,∞) →^2 are two such curves. Then, for r > 0, let T_1,r (resp. T_2,r) be the first exit time of P_1 (resp. P_2) from the ball B(0, r) centred at 0 with radius r, or 0 if the curve starts outside B(0, r). We define ł(P_1,P_2)̊ := ∫_1^∞ e^-rł(1 ∧(P_1|_[0,T_1,r] , P_2|_[0,T_2,r])̊)ṛ. Moreover, we observe that given a sequence {P_n}_n ∈ of continuous curves defined for infinite time, then lim_n →∞(P_n , P) = 0, if and only if, for Lebesgue almost every r > 0, P_n stopped at its first exit time from B(0, r) converges to P stopped at its first exit time from B(0, r) with respect to the metric (<ref>). In the remaining part of the article, we also need to consider curves taking values in the infinite cylinder _2 π. We equip the spaces specified above, but with _2π in place of ^2, with the same metrics. It will be clear from the context whether the metric under consideration refers to curves in ^2 or in _2π. §.§.§ Graph notation Given a finite planar graph , besides the notation related to specified in the introduction, we need to introduce some further nomenclature. In particular, in what follows, we use e ∈ to denote both oriented and unoriented edges. An oriented edge e ∈ is oriented from its tail e^- to its head e^+. Furthermore, given a vertex x ∈, we write (x) for the set of vertices y adjacent to x, i.e., such that there exists an edge connecting x to y. For a vertex x ∈, we denote by (x) the set of edges in incident to x. For a fixed orientation of the edges in (x), we let ^↓(x) (resp. ^↑(x)) be the set of edges in (x) with heads (resp. tails) equal to x. Similar notation will also be used for the dual planar graph . Metric graph. We will need to consider the metric space associated to a planar graph which can be canonically built as follows. For each edge e ∈, we choose an arbitrary orientation of e and we let I_e be an isometric copy of the real unit interval [0, 1]. 
We define the metric space associated with to be the quotient of ∪_e ∈ I_e where we identify the endpoints of I_e with the vertices e^- and e^+, and we equip it with the natural path metric ^̣. More precisely, for two points x, y lying on an edge of , we define ^̣(x, y) to be the Euclidean distance between x and y. For points x, y lying on different edges, we use the metric given by the length of the shortest path between the two points, where distances are measured along the edges using the Euclidean distance. We can also define the dual metric graph , and the associated metric ^̣, in a similar way. §.§ Universal cover The concept of universal cover of a graph will play an important role in our analysis. If is a graph embedded in the infinite cylinder _2π, then there is a canonical way to define its lift ^† to the universal covering space of _2π. More precisely, consider the universal cover (^2, σ_2π) of _2π, where the covering map σ_2π : ^2 →_2π is defined by σ_2 π(t, x) := ł(e^i t, x)̊ , ∀ (t, x) ∈^2. Then, the lifted graph ^† can be constructed by taking every lift of every vertex and every edge of in _2π to the covering space ^2. We denote by ^† and ^† the set of vertices and edges of the lifted graph ^†, respectively. Moreover, we can also construct the lift of the dual graph to the universal covering space ^2 in a similar way, and we denote it by ^†. We adopt the following notational convention: if x ∈ is a vertex, then we denote by ∈^† a lift of x; if e ∈ is an edge, then we denote by ∈^† a lift of e; if x̂∈ is a dual vertex, then we denote by ∈^† a lift of x̂. Moreover, if (, c) is a finite weighted planar graph embedded in _2π, we can naturally assign to each lifted edge the conductance c^†_ := c_e, and we denote by (^†, c^†) the lifted weighted graph. By definition, the lifted graph ^† is periodic in the sense that if _1, _2 ∈^† are two points in ^2 such that (_1) = (_2) and |(_1) - (_2)| ∈_0, then σ_2π(_1) = σ_2π(_2). Finally, for a set K ⊂^2, we write ^†(K) := ł{∈^† : ∈ K}̊, ^†(K) := ł{∈^† : ∈ K}̊. Before proceeding, we recall the following simple result. A oriented path in is a collection of oriented edges e_1 ⋯ e_n in such that e_j^+ = e_j+1^-, for all j ∈ [n-1]. Furthermore, if also e_n^+ = e_1^-, then e_1 ⋯ e_n is called a oriented loop. Let e_1 ⋯ e_n be a oriented path in . Let _1 be a lift of e_1 to the lifted graph ^†, then there exists a unique path _1 ⋯_n in ^† such that _j is a lift of e_j, for all j ∈ [n]_2. The main advantage of working in the universal cover of the cylinder is that we can keep track of the winding of paths. Let 0 ≤ t_1 < t_2, consider a path P: [t_1, t_2] →_2 π, and let :[t_1, t_2]→^2 be a lift of P to the universal cover. We define the winding of P by letting _2 π(P) := ((t_2)) - ((t_1))/2 π . We say that P winds around the cylinder if |_2 π(P)| ≥ 1. We say that P does a noncontractible loop around the cylinder if there exist times t_1 ≤ s_1 < s_2 ≤ t_2 such that P|_[s_1, s_2] winds around the cylinder and P(s_1) = P (s_2). §.§ Random walks and electrical networks In this subsection, we briefly recall the main concepts in the theory of electrical networks and we refer to <cit.> for a complete introduction. Let (, c, v_0, v_1) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder _2π in the sense of Definition <ref> . The conductance of a vertex x ∈ is denoted by π(x) and it is defined to be the sum of the conductances of all the edges incident to x, i.e., π(x) := ∑_e ∈(x) c_e , ∀ x ∈ . Random walk. 
The random walk on (, c) is the discrete time Markov chain X = {X_n}_n ∈_0 with state space such that, for all n ∈_0, ł(X_n+1 = y | X_n = x)̊ = c_xy/π(x) , y ∈(x), 0 , otherwise. Given a vertex x ∈, we write _x and _x for the law and expectation of X started from x. Moreover, we may write X^x in order to emphasize that the random walk X is started from the vertex x ∈. With a slight abuse of notation, we will also denote with X = {X_t}_t≥0 the continuous time version of the random walk, where the continuous path is generated by piecewise linear interpolation at constant speed. If the conductance on every edge of the graph is equal to one, we call the random walk in this case simple random walk. We emphasize that, given a random walk X on (, c), we can canonically lift it to the lifted weighted planar graph (^†, c^†), and we denote the resulting walk by . If X^x is started from a point x ∈, then we need to specify the lift ∈σ_2π^-1(x) of x from which the lifted walk ^ is started from. Similar notation will be also adopted for the random walk on the dual graph. Estimate on the total variation distance. We now state and prove an elementary lemma for general weighted planar graphs which allows to compare the total variation distance of the exit positions from a set for two random walks started from two distinct points. Let (, c) be a finite weighted planar graph and let W ⊂. For x ∈, let X^x be the random walk on (, c) started from x and let τ_x be the first time that X^x hits W. Then, for x, y ∈∖ W, it holds that ł(X^x_τ_x, X^y_τ_y)̊≤ł(X^x|_[0, τ_x] does not disconnect y from W)̊, where denotes the total variation distance. The proof is a variant of <cit.> with the difference that one should consider a weighted spanning tree instead of a uniform spanning tree of the finite weighted planar graph (, c). For the reader's convenience, we gather here a proof. The lemma is a consequence of Wilson's algorithm. Consider the weighted spanning tree of the finite weighted planar graph (, c), where all vertices of W are wired to a single point. We recall that the weighted spanning tree is chosen randomly from among all the spanning trees with probability proportional to the product of the conductances along the edges of the tree. For x∈, let L^x be the unique path in from x to W. For a path P in , write LE(P) for its chronological loop erasure. By Wilson's algorithm (see <cit.>), we can generate the union L^x∪ L^y by using the following procedure. * Run X^y until time τ_y and generate the loop erasure LE(X^y|_[0,τ^y]). * Conditional on X^y|_[0,τ_y], run X^x until the first time τ̃_x that it hits either LE(X^y|_[0,τ_y]) or W. * Set L^x ∪ L^y = LE(X_y|_[0,τ_y]) ∪LE(X^x|_[0,τ̃_x]). Note that L_y = LE(X^y|_[0,τ_y]) in the above procedure. Interchanging the roles of x and y in the above procedure shows that L^x and LE(X^x|_[0,τ_x]) have the same distribution. When constructing L^x∪ L^y as described above, the points at which L^x and L^y hit W coincide if X^x hits LE(X^y|_[0,τ_y]) before reaching W. In particular, this occurs when X^x|_[0, τ_x] disconnects y from W. Thus, there is a coupling between LE(X^x|_[0,τ_x]) and LE(X^y|_[0,τ_y]), where the probability that these two loop erasures hit W at the same point is at least _x(X|_[0, τ_x] disconnects y from W). Now, by observing that X^x_τ_x corresponds to the point at which LE(X^x|_[0,τ_x]) first hit W, and similarly for y in place of x, we obtain the desired result. Electrical network. 
There is an extremely useful correspondence between random walks and Kirchhoff's theory of electric networks. Let (, c, v_0, v_1) be as above and suppose that every edge e ∈ is made of conducting wires with conductance equals to c_e. Connect a battery between v_1 and v_0 so that the voltage at v_1 is equal to one and the voltage at v_0 is equal to zero. Then certain currents will flow along the edges of the graph establishing the voltage at each vertex x ∈∖{v_0, v_1}. An immediate consequence of physical laws is that the voltage function is harmonic on except at v_0 and v_1. More formally, we have the following definition. The voltage function associated to the quadruplet (, c, v_0, v_1) is the unique function : → [0,1] such that (v_0) = 0, (v_1) = 1, and (x) = 1/π(x)∑_y ∈(x) c_xy(y) , ∀ x ∈∖{v_0, v_1}. In view of the role that will play in the construction of the Smith embedding, we will also call the height coordinate function. Given an edge e ∈, we say that e is harmonically oriented if (e^+) ≥(e^-). In what follows, unless otherwise specified, we always consider the harmonic orientation of the edges in . It is a remarkable fact that the voltage function admits a representation in terms of a random walk X on (, c). More precisely, if for all v ∈, we define τ_v to be the first hitting time of v from X, then one can easily check that (x) = _xł(τ_v_1 < τ_v_0)̊, ∀ x ∈ . Moreover, since the voltage function is harmonic on ∖{v_0, v_1}, if X^x is a random walk on (, c) started from x ∈ and killed upon reaching the set {v_0, v_1}, then the process (X^x) is a martingale with respect to the filtration generated by X^x. We note that we can canonically lift the voltage function to the lifted weighted graph (^†, c^†) by setting ^† : ^†→ [0, 1] as follows ^†() : = (σ_2 π()) , ∀∈^†. Note that we can naturally extend the definition of the voltage function to a function on the metric graph associated to , i.e., we can define the function : → [0, 1]. This extension can be done by linearly interpolating the values at the endpoints of every edge e ∈. More precisely, if x ∈ is a point lying on the harmonically oriented edge e ∈, then we set (x):=((e^+)-(e^-))^̣(e^-, x) + (e^-) . The flow induced by the voltage function. We finish this subsection by introducing the flow across oriented edges induced by the voltage function . We denote such a flow by ∇: → and we define it as follows ∇(e) := c_e ł((e^+) - (e^-))̊, ∀ e ∈ . The flow ∇ satisfies the following well-known properties (see <cit.>). * (Antisymmetry) For every oriented edge e∈, it holds that ∇(-e) = - ∇(e), where -e stands for the edge e endowed with opposite orientation. * (Kirchhoff's node law) For all x ∈∖{v_0, v_1}, it holds that ∑_e ∈(x)∇(e) = 0, where here the orientation of each e ∈(x) is fixed by letting e^- = x. * (Kirchhoff's cycle law) For every directed cycle e_1 ⋯ e_n, it holds that ∑_i = 1^n1/c_e_i∇(e_i) = 0 . We denote the strength of the flow ∇ induced by by setting η := ∑_e ∈^↑(v_0)∇ (e), where ^↑(v_0) denotes the set of harmonically oriented edges in with tails equal to v_0. Furthermore, thanks to the harmonicity of , a simple computation yields that η = ∑_e ∈^↓(v_1)∇(e), where ^↓(v_1) denotes the set of harmonically oriented edges in with heads equal to v_1. 
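The voltage function of Definition <ref> is the solution of a small linear system, so it is straightforward to compute on any concrete example. The sketch below is our illustration: the graph, conductances, and all names are hypothetical. It solves the discrete harmonic equation with boundary values 0 at v_0 and 1 at v_1, then evaluates the flow ∇ along each edge and the strength η; by harmonicity, the flow out of v_0 agrees with the flow into v_1.

```python
# Illustrative sketch (ours): the voltage function h of a doubly marked weighted
# graph, computed by solving the discrete harmonic equation with h(v0)=0 and
# h(v1)=1, together with the induced flow and its strength eta.
import numpy as np

# Hypothetical small example: 5 vertices, v0 = 0, v1 = 4, edges (x, y, conductance).
edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 1.0), (1, 3, 1.5),
         (2, 3, 1.0), (2, 4, 0.5), (3, 4, 2.0)]
n, v0, v1 = 5, 0, 4

# Weighted graph Laplacian: L[x, x] = pi(x), L[x, y] = -c_xy for x != y.
Lap = np.zeros((n, n))
for x, y, c in edges:
    Lap[x, y] -= c
    Lap[y, x] -= c
    Lap[x, x] += c
    Lap[y, y] += c

# Harmonicity at the interior vertices, with the boundary values plugged in.
interior = [x for x in range(n) if x not in (v0, v1)]
h = np.zeros(n)
h[v1] = 1.0
A = Lap[np.ix_(interior, interior)]
b = -Lap[np.ix_(interior, [v1])].ravel()   # contribution of h(v1) = 1, h(v0) = 0
h[interior] = np.linalg.solve(A, b)

# Flow through each edge (harmonic orientation) and the strength of the flow:
# the total flow leaving v0 equals the total flow entering v1.
flow = {(x, y): c * (h[y] - h[x]) for x, y, c in edges}
eta_out = sum(c * abs(h[x] - h[y]) for x, y, c in edges if v0 in (x, y))
eta_in = sum(c * abs(h[x] - h[y]) for x, y, c in edges if v1 in (x, y))
print(h, eta_out, eta_in)   # eta_out == eta_in up to rounding
```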
§.§ Discrete harmonic conjugate Let (, c, v_0, v_1) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder _2π according to Definition <ref>, and let (, ĉ) be the associated weighted dual planar graph properly embedded in _2π according to Definition <ref>. Moreover, let : → [0, 1] be the voltage function associated with (, c, v_0, v_1) as defined in Definition <ref>. We want to define the discrete harmonic conjugate function of , i.e., the function defined on the set of dual vertices that satisfies the discrete Cauchy–Riemann equation. More formally, for every directed edge e ∈ and its corresponding oriented dual edge ê∈, the function should satisfy the following identity ∇(ê) := ĉ_êł((ê^+) - (ê^-))̊ = (e^+) - (e^-), where we recall that ĉ_ê = 1/c_e. To precisely define the function specified above, it will be more convenient to work with the lifted weighted graph (^†, c^†) and its dual (^†, ĉ^†). More precisely, we consider the lifted voltage function ^† : ^†→ [0, 1]. We fix an arbitrary vertex _0 ∈^† on the lifted dual graph, and for every ∈^† we consider a directed path of lifted dual edges _1 ⋯_n connecting _0 to . We recall that we are assuming that the two marked vertices v_0 and v_1 do not lie on the boundary of the same face (see Definition <ref>). In particular, this implies that there is a path of dual edges in that disconnects v_0 from v_1. Hence, the lifted dual graph ^† is connected and we can always find a path connecting _0 to any ∈^†. We define the function ^†:^†→ by setting 𝔴^†() : = ∑_j=1^n∇^†(_j), ∀∈^†. where _j ∈^† is the oriented primal edge associated to _j. We call the function ^† defined in this way the lifted discrete harmonic conjugate function associated to ^† with base vertex _0. The following lemma guarantees that ^† is actually well-defined. For all ∈^†, the value ^†() defined in (<ref>) does not depend on the choice of the directed path from _0 to . Moreover, for any _1, _2 ∈^† such that σ_2π(_1) = σ_2π(_2), the following relation holds ^†(_1) - ^†(_2)/η = (_1) - (_2)/2 π, where we recall that η denotes the strength of the flow induced by as defined in (<ref>). For the first part of the lemma, the proof is similar to that of <cit.>. In particular, it is sufficient to prove that for any oriented loop _1 ⋯_n in ^†, it holds that ∑_j = 1^nł(^†(^+_j) - ^†(^-_j))̊= 0 . The key observation in <cit.> is that every oriented loop _1 ⋯_n in ^† can be written as the disjoint union of simple closed loops and of paths of length two consisting of a single dual edge traversed in both directions. Here, by a simple closed path we mean that _j^+≠_k^+ for distinct j, k ∈ [n] when n > 2, while when n = 2, we mean that _1 ≠ - _2. Therefore, since (<ref>) obviously holds if the path consists of a single dual edge traversed in both directions, we can assume without loss of generality that _1 ⋯_n is a simple counter-clockwise oriented closed loop. Let K ⊂^2 be the bounded connected component of ^2 ∖_1 ⋯_n. Then, thanks to (<ref>), it holds that ∑_j = 1^nł(^†(^+_j) - ^†(^-_j))̊ = ∑_j = 1^n∇^†(_j) = ∑_∈^†(K)∑_∈^†()∇^†(), where here the orientation of each edge ∈^†() is fixed by letting ^- =. The second equality in (<ref>) follows from the following argument. Fix ∈^†(K) and consider ∈^†(). If ∉K, then = _j for some j ∈ [n], while if ∈ K then ∇^†() cancels out with ∇^†() thanks to the antisymmetry of ∇^†. The term on the right-hand side of (<ref>) is equal to zero thanks to the boundedness of K and Kirchhoff's node law (<ref>). 
Concerning the second part of the lemma, we can proceed as follows. Let _1, _2 ∈^† be such that σ_2π(_1) = σ_2π(_2), and let k ∈ be such that k = ((_2) - (_1))/(2π). Then we need to prove that ^†(_2) - ^†(_1) = η k. In particular, thanks to the first part of the lemma, it is sufficient to consider an arbitrary directed path _1 ⋯_n in ^† connecting _1 to _2 and show that ∑_j = 1^n (^†(^+_j) - ^†(^-_j)) = η k. We assume first that k = 1. We can choose the lifted dual edges [_n] in such way that _1 ⋯_n is a simple path oriented from left to right. Now, using an argument similar to the one used above, it is not difficult to see that ∑_j = 1^n ∇^†(_j) = η, and so the conclusion follows in this case. Finally, the general case can be obtained easily: we can just “glue” together, by eventually changing the orientation if k is negative, k copies of the path used in the case k=1. From the definition (<ref>) of the function ^†, it follows that for every oriented edge ∈^† and for the associated dual edge ∈^†, it holds that ∇^†() := ĉ^†_(^†(^+) - ^†(^-)) = ^†(^+) - ^†(^-), i.e., ^† satisfies the discrete Cauchy–Riemann equation. Moreover, an immediate application of Kirchhoff's cycle law (<ref>) implies that the function ^† is harmonic on ^†. Thanks to Lemma <ref>, we can define the discrete harmonic conjugate function of as follows. The discrete harmonic conjugate function of with base vertex x̂_0 ∈ is the unique function : →/ η such that (x̂_0) = 0 and (x̂) = ^†() η, ∀x̂∈ , where ^†: ^†→ is the function defined in (<ref>) with base vertex an arbitrary lift of x̂_0. In view of the role that will play in the construction of the Smith embedding, we will also call the width coordinate function. As for the case of the voltage function , we can naturally extend the definition of to a function from the dual metric graph , i.e., : →/η. To be precise, if x̂∈ is a point on the edge ê∈ and (ê^+) ≥(ê^-), then we set (x̂):=((ê^+)-(ê^-))^̣(ê^-, x̂) + (ê^-) . §.§ Construction of the Smith embedding Let (, c, v_0, v_1) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder _2π according to Definition <ref>, and let (, ĉ) be the associated weighted dual planar graph properly embedded in _2π according to Definition <ref>. The aim of this subsection is to precisely define the Smith embedding of (, c, v_0, v_1). As we have already explained in the introduction, the Smith embedding is built in terms of a tiling of a finite cylinder with rectangles in which every edge e ∈ corresponds to a rectangle in the tiling, every vertex x ∈ corresponds to the maximal horizontal segment tangent with all rectangles corresponding to the edges incident to x, and every dual vertex x̂∈ corresponds to the maxiaml vertical segment tangent with all rectangles corresponding to primal edges surrounding x̂. The existence of such tiling was first proven in <cit.> and then successively extended in <cit.>. The main objects. To precisely define the Smith embedding, it will be more convenient to work with the lifted weighted graph (^†, c^†) and its dual (^†, ĉ^†). More precisely, we need to consider the lifted voltage function ^†:^†→ [0, 1] and its lifted discrete harmonic conjugate function ^†:^†→. For every edge ∈^†, consider its harmonic orientation and let ∈^† be the corresponding oriented dual edge. We define the intervals _ := ł[^†(^-), ^†(^+)]̊, _ := ł[^†(^-), ^†(^+)]̊. Then, we define the rectangle _ associated to the edge ∈^† by letting _ := _×_⊂× [0,1], ∀∈^†. 
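Before discussing the properties of these rectangles, it may help to see how they would be assembled in code. The following sketch is ours and purely illustrative: it assumes that the lifted voltage values and the lifted harmonic-conjugate values have already been computed and stored in dictionaries keyed by primal and dual vertices, and that each harmonically oriented edge knows its oriented dual edge; all data structures are hypothetical.

```python
# Illustrative sketch (ours): the rectangle R_e = I_ehat x I_e of a lifted edge,
# assembled from the height coordinate h (on primal vertices) and the width
# coordinate w (on dual vertices).

def rectangle(e, dual_of, h, w):
    x_minus, x_plus = e                 # harmonic orientation: h[x_plus] >= h[x_minus]
    y_minus, y_plus = dual_of[e]        # the corresponding oriented dual edge
    I_e = (h[x_minus], h[x_plus])       # vertical extent of the rectangle
    I_ehat = (w[y_minus], w[y_plus])    # horizontal extent of the rectangle
    return I_ehat, I_e

def aspect_ratio(e, dual_of, h, w):
    """Width divided by height; by the discrete Cauchy-Riemann equation this
    equals the conductance c_e (degenerate edges carrying no current excluded)."""
    (w0, w1), (h0, h1) = rectangle(e, dual_of, h, w)
    return (w1 - w0) / (h1 - h0)
```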
Recalling the definition (<ref>) of the lifted discrete harmonic conjugate function ^†, it holds that ^†(^+) - ^†(^-) = c_ (^†(^+) - ^†(^-)). Therefore, the aspect ratio of the rectangle _ is equal to the conductance c_ of the edge ∈^†. In particular, this implies that if an edge ∈^† has unit conductance, then _ is a square. For a vertex ∈^†, we define the closed horizontal line segment _ by setting _ := ⋃_∈^†, ↓()_×{^†()}⊂× [0,1], ∀∈^†, where ^†, ↓() denotes the set of harmonically oriented lifted edges with heads equal to . Finally, for a dual vertex ∈^†, we define the closed vertical line segment _ by letting _ := ⋃_∈^†, ↓(){^†()}×_⊂× [0,1], ∀∈^†, where ^†, ↓() denotes the set of harmonically oriented lifted dual edges with heads equal to . Thanks to the harmonicity of the lifted height coordinate function, we observe that in the definition of _, one can replace ^†, ↓() with ^†, ↑(x), and similarly for _. Construction of the tiling. We recall that η denotes the strength of the flow induced by as defined in (<ref>). We consider the cylinder _η := /η× [0, 1], where /η denotes the circle of length η. We let (× [0, 1], σ_η) be the universal cover of _η, where the covering map σ_η : × [0, 1] →_η is defined by σ_η(t, x) := ł(e^i 2 π t/η, x)̊, ∀ (t, x) ∈× [0, 1] . We are now ready to define the tiling map. For each e ∈, x ∈, and x̂∈, we define the following objects _e := σ_η(_), _x := σ_η(_), _x̂ := σ_η(_) , where ∈^†, ∈^†, and ∈^† are lifts of e, x, and x̂, respectively. An immediate consequence of Lemma <ref> is that _e, _x, and _x̂ are well-defined, i.e., they do not depend on the particular choice of the lifts , and . The following properties are well-known (see <cit.>). * The collection of rectangles {_e}_e ∈ constitutes a tiling of /η× [0, 1], i.e., for each pair of distinct edges e_1, e_2 ∈, the interiors of the rectangles _e_1 and _e_2 are disjoint and ∪_e ∈_e =/η× [0, 1]. * For each two distinct edges e_1, e_2 ∈, the interiors of the vertical sides of the rectangles _e_1 and _e_2 have a non-trivial intersection only if e_1 and e_2 both lie in the boundary of some common face of . * Two rectangles intersect along their horizontal (resp. vertical) boundaries if and only if the corresponding primal (resp. dual) edges share an endpoint. We note that if e ∈ is such that no current flows through it, i.e., (e^-) = (e^+), then the corresponding rectangle _e is degenerate and consists only of a single point. We also remark that the existence of the aforementioned tiling was proven by Benjamini and Schramm in <cit.>. Originally, their proof was stated specifically for the case of edges with unit conductance, however, it can be readily extended to our setting. The tiling map associated to the quadruplet (, c, v_0, v_1) is the map : ∪∪→/η× [0,1] such that (e): = _e, ∀ e∈; (x) := _x, ∀ x ∈; (x̂): = _x̂, ∀x̂∈, where _e, _x, and _x̂ are as defined in (<ref>). The image of the tiling map is called the Smith diagram associated to (, c, v_0, v_1). We refer to Figure <ref> for an illustration of the Smith diagram associated to a given quadruplet (, c, v_0, v_1). Since the height coordinate function can be extended to the metric graph , we can view each rectangle _e of the tiling as being foliated into horizontal segments, one for each inner point of the corresponding edge e ∈. 
Similarly, since the width coordinate function can be extended to the dual metric graph , we can also view each rectangle _e of the tiling as being foliated into vertical segments, one for each inner point of the corresponding dual edge ê∈. It will also be convenient to introduce the lifted tiling map associated to (, c, v_0, v_1), which is the map ^† : ^†∪^†∪^†→× [0,1] such that ^†() := _ for each ∈^†, ^†(): = _ for each ∈^†, and ^†(): = _ for each ∈^†. We emphasize that, since the collection {R_e}_e ∈ forms a tiling of the cylinder /η× [0, 1], the collection of rectangles {R_}_∈^† forms a periodic tiling of × [0,1] of period η. We are now ready to precisely define the Smith embedding. The Smith embedding associated to the quadruplet (, c, v_0, v_1) is the function : →/η× [0,1] such that (x) =(_x), ∀ x ∈, where (_x) denotes the middle point of the horizontal line segment (x). Moreover, we define the lifted Smith embedding ^† :^†→× [0, 1] as the map that assigns to each ∈^† the middle point of the horizontal line segment ^†(). We emphasize once again that the choice to define the Smith embedding by picking the middle point of each horizontal line segment is rather arbitrary. Indeed, the main result of this paper also holds if one chooses an arbitrary point inside each horizontal segment. Finally, for technical reasons, we also need to introduce the following map. We define the map ^†, rand that assigns to each vertex ∈^† the random variable ^†, rand() which is uniformly distributed on the horizontal line segment ^†(). § SOME PROPERTIES OF THE SMITH EMBEDDING In this section, we collect some results that follow directly from the construction of the Smith embedding. We fix throughout this section a doubly marked finite weighted planar graph (, c, v_0, v_1) properly embedded in the infinite cylinder _2π according to Definition <ref>, and we also consider the associated weighted dual planar graph (, ĉ) properly embedded in _2π according to Definition <ref>. In what follows, we consider the metric graph associated to , and we let : → [0, 1] be the extended height coordinate function as specified in Remark <ref>. Furthermore, we also consider the dual metric graph , and we let : →/η be the extended width coordinate function as specified in Remark <ref>. §.§ Adding new vertices In this subsection, we see how declaring a finite number of interior points of some edges of the graph to be vertices affects the height coordinate function and the random walk on the graph. More precisely, we start with the following definition. Let W ⊂ be a finite subset of the metric graph. We define the weighted planar graph (', c') associated to (, c) and W as follows: * The set of vertices ' is given by ∪ W; * If the interior of an edge e ∈ contains n ∈ points of W, then e is split into n+1 new edges [e'_n+1] according to the points in the interior of e. The edge e remains unchanged otherwise. * If an edge e ∈ is split into [e'_n+1] new edges, for some n ∈, then we set c'_e'_i := c_e/^̣ł(e_i^', -, e_i^', +)̊, ∀ i ∈ [n+1]. The conductance of e remains unchanged otherwise. The weighted dual graph (', ĉ') can be naturally constructed from (', c'). At the level of the Smith diagram, adding new vertices to the interior of some edges of the graph according to the procedure described above corresponds to horizontally dissecting the rectangles associated to such edges.
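As a concrete illustration of this definition, and of the weighted random walk that appears in the two lemmas below, the following toy sketch (our own code; it assumes, consistently with the extension formulas above, that every edge of the metric graph has unit length, so that the lengths of the pieces of a subdivided edge sum to 1) subdivides an edge according to the rule above, checks that the series law returns the original conductance, and spells out one step of the walk, whose transition probabilities are proportional to the conductances of the incident edges.
\begin{verbatim}
# Illustrative sketch: edge subdivision and one step of the weighted random walk.
import random

def subdivide(c_e, cut_points):
    # split an edge of conductance c_e at interior points 0 < t_1 < ... < t_n < 1;
    # each piece gets conductance c_e / (its length), as in the definition above
    ts = [0.0] + sorted(cut_points) + [1.0]
    return [c_e / (b - a) for a, b in zip(ts, ts[1:])]

def series(conductances):
    # effective conductance of edges in series (resistances add)
    return 1.0 / sum(1.0 / c for c in conductances)

pieces = subdivide(2.0, [0.3, 0.75])
assert abs(series(pieces) - 2.0) < 1e-12  # subdividing leaves the voltage unchanged

def walk_step(x, nbr):
    # one step of the random walk: from x jump to y with probability c_xy / pi(x)
    ys, cs = zip(*nbr[x].items())
    return random.choices(ys, weights=cs, k=1)[0]

print(walk_step("x", {"x": {"a": 1.0, "b": 3.0}}))  # "b" with probability 3/4
\end{verbatim}
In the Smith diagram, the subdivision performed by the first function is exactly the horizontal dissection of the corresponding rectangle described in the preceding remark.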
More precisely, let us assume for simplicity that only one point is added to the interior of an edge e, and let e_1' and e_2' be the new edges into which e is split. Suppose that e_1^', - = e^- and e_2^', + = e^+. Let ' be the tiling map associated to the new weighted graph. Then it is immediate to check that (e) = '(e_1') ∪'(e_2'). In particular, the rectangles '(e_1') and '(e_2') have the same width as the rectangle (e), and the height of '(e_1') is proportional to ^̣(e_1^', -, e_1^', +), while that of '(e_2') is proportional to ^̣(e_2^', -, e_2^', +). Let ': ' → [0, 1] be the voltage function associated to the quadruplet (', c', v_0, v_1). The following lemma relates the function with '. For every x ∈, it holds that (x) = '(x). We denote by K := #{' ∖} the number of new vertices added to the graph '. If K = 0, then the result follows immediately. Let us now assume that K = 1 and suppose that the edge e ∈ is split into two new edges e_1' and e_2' with conductances c'_e_1' and c'_e_2' as specified in Definition <ref>. Then, the desired result follows immediately from the series law for electrical networks (cf. <cit.>). The general case follows from a simple induction argument on K. For x ∈, let X^', x be the random walk on (', c') started from x. Let τ_0 := 0 and, for every k ∈_0, we define inductively τ_k+1 := inf{j > τ_k : X^', x_j ∈}. Then {X_τ_k^', x}_k ∈ has the same distribution as the random walk on (, c) started from x. As in the proof of Lemma <ref>, let K := #{' ∖}. If K = 0, then the result is obvious. Let us now assume that K = 1 and let y ∈ W be the vertex added to '. We claim that the distribution of X^', x_τ_1 is equal to that of X^x_1. If none of the edges in (x) contains y, then the claim is obvious. Let us assume that there exists an edge in (x) which contains y. Then, thanks to an easy computation, one can verify that _x(X'_τ_1 = v) = c_xv/π(x), ∀ v ∈(x). Therefore, thanks to the strong Markov property of X^', x, we get that for all k ≥ 2, it holds that _x(X'_τ_k = v | X'_τ_k-1 = w) = c_v w/π(w), v ∈(w) 0 , otherwise, ∀ k ≥ 2, which proves the result if K=1. The general case follows from a simple induction argument on K. Given a finite set Ŵ⊂, following a procedure similar to the one described above, one can also construct the weighted dual graph (', ĉ') associated to (, c) and Ŵ. In particular, results similar to the ones stated above also hold for the respective dual counterparts. For example, if ': ' →/η is the width coordinate function associated to the new weighted graph, then ' restricted to coincides with the original width coordinate function. Moreover, adding new dual vertices to the interior of some dual edges corresponds to vertically dissecting the associated rectangles in the Smith diagram. §.§ Periodicity We collect here some properties of the lifted Smith diagram that are due to its periodicity. We recall that the map ^†, rand is defined in Definition <ref>. Let _1, _2 ∈^† be such that σ_2 π(_1) = σ_2 π(_2). Then, uniformly over all possible realizations of ^†, rand(_1) and of ^†, rand(_2), it holds that |(^†, rand(_2)) - (^†, rand(_1))/η - (_2)- (_1)/2 π| ≤ 1. The proof of this result is an easy consequence of Lemma <ref>; note, however, that it does not follow immediately from that lemma, since the points ^†, rand(_1) and ^†, rand(_2) are chosen uniformly at random from the horizontal segments ^†(_1) and ^†(_2), respectively. We give here the details for completeness. Fix a realization of ^†, rand(_1) and of ^†, rand(_2).
Let ^†, ↑(_1) = [^1_n] be the set of lifted harmonically oriented edges with tails equal to _1, ordered in such a way that ^1_1⋯^1_n forms a path in ^† oriented from left to right. Define ^†, ↑(_2) = [^2_n] in a similar way. Then, by construction of the map ^†, rand, it holds that (^†, rand(_1)) ∈ł[^†(^1, -_1) , ^†(^1, +_n)]̊, (^†, rand(_2)) ∈ł[^†(^2, -_1) , ^†(^2, +_n)]̊. Moreover, by Lemma <ref>, we have that ^†(^1, -_1) - ^†(^2, -_1)= η/2π((_1) - (_2)), ^†(^1, +_n) - ^†(^2, +_n) = η/2π((_1) - (_2)), and also ł|^†(^1, +_n) - ^†(^1, -_1)|̊ = ł|^†(^2, +_n)- ^†(^2, -_1)|̊≤η, where the first equality is again due to Lemma <ref>, and the last inequality is due to the fact that, by construction of the Smith diagram, every horizontal line segment has width at most η. Therefore, putting together all these facts, one readily obtains the desired result. In order to state and prove the next lemma of this subsection, we need to introduce some notation. We assume that there exists a ∈ (0, 1) such that all the points in the set ^-1(a) are vertices in . We define the set of harmonically oriented edges _a and the corresponding set of dual edges _a as follows _a := ł{e ∈ : (e^-) ≤ a < (e^+)}̊, _a := ł{ê∈ : ê is the dual of some e ∈_a }̊, and we define the following sets of vertices _a := ł{x ∈ : x = e^- for some e ∈_a}̊, _a := ł{x̂∈ : x̂ = ê^- for some ê∈_a}̊. Furthermore, we denote by _a^†, ^†_a, _a^†, _a^† the lifts to the universal cover of the corresponding objects. Now, we let ^a_0 be the vertex in ^†_a whose real coordinate is nearest to 0. We note that the set _a is a cutset and that the associated set of oriented dual edges _a admits an enumeration [ê_n] such that ê_1 ⋯ê_n forms a counter-clockwise oriented noncontractible simple closed loop in the dual graph . In particular, every edge ê_i admits a lift _i such that _1 ⋯_n forms a simple path oriented from left to right joining ^a_0 to the shifted vertex ^a_0 + (2π, 0) (see Lemma <ref>). Letting [_n] be as specified above, we define the set of lifted dual vertices _a^† := ł{∈^†_a : = _i^- for some i ∈ [n] }̊, and we also define the set of lifted vertices _a^† := ł{∈ (^†)^-1(a) : = _i^- for some i ∈ [n] }̊ , where [_n] is the set of lifted edges associated to [_n]. Consider the infinite strip ^a_2π := ł[(_0^a), (_0^a) + 2π)̊×. We remark that in general the set _a^† is not fully contained in ^a_2π. We also observe that, thanks to the fact that _a is a cutset, it holds that σ_2π(_a^†) = ^-1(a) and #σ_2π(_a^†) = #^-1(a). We are now ready to state and prove the next lemma of this subsection. We refer to Figure <ref> for a diagram illustrating the various objects involved in the proof of Lemma <ref>. Fix a ∈ (0, 1) and assume that all the points in ^-1(a) are vertices in . Let ^a_0 ∈_a^† be as specified above; then it holds that 0 ≤^†() - ^†(^a_0) ≤η, ∀∈_a^†, and also, uniformly over all possible realizations of ^†, rand(·), it holds that 0 ≤(^†, rand()) - ^†(^a_0) ≤η, ∀∈_a^†. We start by proving the first part of the lemma. We consider the set of dual edges [_n] such that _1 ⋯_n forms a simple path oriented from left to right joining ^a_0 to the shifted vertex ^a_0 + (2π, 0), as specified before the lemma statement. Thanks to Lemma <ref>, we know that ∑_i = 1^n ∇^†(_i) = η, and each summand in the sum is non-negative. Now, let ∈_a^†; then there exists a subpath ^_1 ⋯^_k of _1 ⋯_n which connects ^a_0 to .
Therefore, we have that ^†() - ^†(^a_0) = ∑_i = 1^k ∇^†(^_i), and the conclusion then follows thanks to the fact that 0 ≤∑_i = 1^k ∇^†(^_i) ≤∑_i = 1^n ∇^†(_i) = η. We now prove the second part of the lemma. To this end, we fix ∈_a^† and a possible realization of ^†, rand(). We let ^†, ↑() = [^_k] be the set of lifted harmonically oriented edges with tails equal to ordered in such a way that the corresponding lifted dual edges ^_1⋯^_k forms a path in ^† oriented from left to right (see Figure <ref>). By construction of the map ^†, rand, it holds that (^†, rand()) ∈ł[^†(^,-_1), ^†(^, +_k)]̊. Finally, since both ^,-_1 and ^, +_k belong to _a^†, the conclusion follows thanks to the first part of the lemma. §.§ Hitting distribution of a horizontal line Roughly speaking, the main goal of this subsection is to characterize the hitting distribution of a horizontal line on the Smith diagram for the Smith embedded random walk. Before precisely stating the result, we need to introduce some notation. We define : →[0, η) as the function that assigns to each vertex x ∈ the length of the horizontal segment (x) associated to x by the Smith embedding. More precisely, we define (x) := ∑_∈^↓(x)∇(e), ∀ x ∈, where ^↓(x) denotes the set of harmonically oriented edges in with heads equal to x. We recall that the difference of the width coordinate function between the endpoints of a dual edge is equal to the gradient of the height coordinate of the corresponding primal edge. Moreover, thanks to the fact that the height coordinate function is harmonic, it is plain to see that (x) = ∑_∈^↑(x)∇(e), ∀ x ∈, where ^↑(x) denotes the set of harmonically oriented edges in with tails equal to x. We can also naturally extend the definition of the length function to the metric graph . More precisely, given a point x ∈∖, if x lies in the interior of the edge e ∈, then we set (x):= ∇(e). Given a value a ∈ (0, 1), we define on the set ^-1(a) ⊂ the measure μ_a by letting μ_a(x) := (x)/η, ∀ x ∈^-1(a). Since, by construction of the Smith diagram, it holds that ∑_x ∈^-1(a)(x) = η, the measure μ_a is a probability measure on the set ^-1(a). We emphasize that, thanks to Remark <ref>, one can also view μ_a as a probability measure on the set _a^†, where we recall that _a^† is the set defined in (<ref>). From now on, we assume throughout the whole subsection that all the points in the set ∪_x ∈^-1((x)) are vertices in . At the level of the Smith diagram, this means that for all x ∈, it holds that the set ∪_y ∈^-1((x))(y) is equal to the noncontractible closed loop /η×{(x)}⊂_η. We observe that, for any given finite weighted planar graph, we can always canonically construct from it another finite weighted planar graph that satisfies this assumption. Indeed, suppose that it exists a vertex x ∈ such that ^-1((x)) ⊄, then, using the procedure described in Subsection <ref>, we can declare all the points in the finite set ^-1((x)) ∖ to be vertices of the graph. We fix a value a ∈ (0, 1) such that all the points in the set ^-1(a) are vertices in , and we let X^μ_a be the random walk on (, c) started from a point in ^-1(a) sampled according to the probability measure μ_a. Let X^μ_a be as specified above. For N ∈, we say that a finite sequence of height coordinates [a_N]_0 ⊂ (0, 1) is admissible for the random walk X^μ_a if a_0 = a and _μ_ał((X_n+1) = a_n+1|(X_n) = a_n)̊ > 0, ∀ n ∈ [N-1]_0. We can now state the main result of this subsection. Let X^μ_a be as specified above. 
For N ∈, let [a_N]_0 ⊂ (0, 1) be an admissible sequence of height coordinates for the random walk X^μ_a. Then, for all i ∈ [N]_0, it holds that _μ_ał(X_i = x_i |{(X_n) = a_n}_n=1^N)̊ = μ_a_i(x), ∀ x_i ∈^-1(a_i). Intuitively, the proof of the above lemma goes as follows. If i = 1 and we only had the conditioning on the event (X_1) = a_1, then the result would follow from a simple computation. However, since, in general, i ≠ 1, and since we are also conditioning on the events {(X_n) = a_n}_n=1^i-1 and {(X_n) = a_n}_n=i+1^N, which represent the height coordinates of the random walk for past and future times respectively, the proof of the result is not immediate. However, the proof follows by a simple induction argument in which we show that we can “forget” about the conditioning on past and future times. Roughly speaking, the reason why we can forget about past times is due to the fact that the height component of the random walk is itself a Markov process, while the reason why we can forget about future times is due to the harmonicity of the height coordinate function. We now proceed with the proof of the lemma. The proof involves three main steps. Step 1: We start by proving that _μ_ał((X_1) = a_1 | X_0 = x_0) depends only on (X_0) = a_0, for all x_0 ∈^-1(a). Since we are assuming that = ∪_x ∈^-1((x)), at its first step, the random walk can only reach two heights: height a_1 or height ã_1 say. If a_1> a, since is harmonic at x_0, it holds that (x_0) = ∑_e ∈^↓(x_0) c_e (e^-) +∑_e ∈^↑(x_0) c_e (e^+)/π(x_0) = ã_1∑_e ∈^↓(x_0) c_e/π(x_0) + a_1∑_e ∈^↑(x_0) c_e/π(x_0), where, as usual, ^↓(x_0) (resp. ^↑(x_0)) denotes the set of harmonically oriented edges with heads (resp. tails) equal to x_0. Now, we observe that _μ_ał((X_1) = a_1| X_0 = x_0 )̊ = ∑_e ∈^↑(x_0) c_e/π(x_0), _μ_ał((X_1) = ã_1| X_0 = x_0 )̊ = ∑_e ∈^↓(x_0) c_e/π(x_0). In particular, plugging these expressions into (<ref>) and rearranging, we obtain that _μ_ał((X_1) = a_1| X_0 = x_0)̊ = |ã_1 - a_0|/|ã_1 - a_1|, ∀ x_0 ∈^-1(a), from which the desired result follows since the right-hand side of the above expression only depends on (X_0) = a_0. A similar conclusion also holds if a_1 < a. Now, proceeding by induction, one can prove that for any i ∈[N]_0 and for all x_0 ∈^-1(a), …, x_i ∈^-1(a_i), it holds that _μ_a ({(X_i+n) = a_i+n}_n = 1^N-i|{X_j = x_j}_j = 0^i) depends only on (X_i) = a_i. Step 2: Thanks to previous step and Bayes' rule, we have that _μ_ał(X_0 = x_0 |(X_1) = a_1)̊ = μ_a(x_0), for all x_0 ∈^-1(a). In particular, using this fact, we will now prove that _μ_ał(X_1 = x_1|(X_1) = a_1)̊ = μ_a_1(x_1), ∀ x_1∈^-1(a_1). To this end, fix x_1∈^-1(a_1) and suppose that a_1 > a. Then we can proceed as follows _μ_ał(X_1 = x_1|(X_1) = a_1)̊ = ∑_x_0 ∈^-1(a) ∩(x_1)_x_0ł(X_1 = x_1|(X_1) = a_1)̊_μ_ał(X_0 = x_0 |(X_1) = a_1)̊ = ∑_x_0 ∈^-1(a) ∩(x_1)c_x_0 x_1/∑_v ∈^-1(a_1) ∩(x_0) c_x_0 vμ_a(x_0) = ∑_x_0 ∈^-1(a) ∩(x_1)c_x_0 x_1/∑_v ∈^-1(a_1) ∩(x_0) c_x_0 v|a_1 - a| ∑_v ∈^-1(a_1) ∩(x_0) c_x_0 v/η = 1/η∑_e ∈^↓(x)∇(e) = μ_a_1(x), where ^↓(x) denotes the set of harmonically oriented edges with heads equal to x. In order to pass from the second line to the third line of the above display, we used the definition of the probability measure μ_a, and the fact that, for all e ∈^↓(x), it holds that (e^-) = a and (e^+) = a_1. One can proceed similarly if a_1 < a. Now, proceeding by induction, one can easily prove that, for all i ∈[N], it holds that _μ_ał(X_i = x_i |{(X_n) = a_n}_n = 1^i)̊ = μ_a_i(x_i), ∀ x_i ∈^-1(a_i). 
Step 3: For i ∈ [N]_0, we observe that a consequence of Step 1 is that the sequence of random variables {(X_i+n)}_n = 1^N-i is conditionally independent from {X_j}_j = 0^i given (X_i) = a_i. Therefore, the conditional law of {X_j}_j = 0^i given {(X_n) = a_n}_n = 1^i is the same as the conditional law given {(X_n) = a_n}_n = 1^N. Hence, this implies that _μ_ał(X_i = x_i |ł{(X_n) = a_n}̊_n=1^N)̊ = _μ_ał(X_i = x_i |ł{(X_n) = a_n}̊_n=1^i)̊, ∀ x_i ∈^-1(a_i), and so the conclusion follows from Step 2. §.§ Expected horizontal winding Roughly speaking, the main goal of this subsection is to establish that the average winding that the Smith-embedded random walk does before hitting a given horizontal line on the Smith diagram is zero. Before precisely stating the result, we need to introduce some notation. Given the random walk ^ on the lifted weighted graph (^†, c^†) started from ∈^†, we let _0 ∋ n ↦^_n be the discrete time process taking values in × [0, 1] such that, for each n ∈_0, the conditional law of ^_n, given ^_n, is equal to the law of ^†, rand(_n^), where we recall that ^†, rand is defined in Definition <ref>. We call the process ^ the lifted Smith-embedded random walk associated to ^. It follows from the definition of ^ that, at each time step n ∈_0, the location of the point ^_n is sampled uniformly at random and independently of everything else from the horizontal line segment ^†(^_n). With a slight abuse of notation, we also denote by ^ the continuous time version {^_t}_t ≥ 0, where the continuous path is generated by piecewise linear interpolation at constant speed. We assume also in this subsection that all the points in the set ∪_x ∈^-1((x)) are vertices in . Furthermore, we fix a value a ∈ (0, 1) such that all the points in the set ^-1(a) are vertices in , and we let X^μ_a be the random walk on (, c) started from a point in ^-1(a) sampled according to the probability measure μ_a defined in Definition <ref>. We also adopt the convention to denote by _0 ∋ n ↦^μ_a_n the lift of the random walk X^μ_a to the lifted weighted graph (^†, c^†) started from a point in _a^† sampled according to the the probability measure μ_a (see Remark <ref>). Moreover, we denote by ^μ_a the lifted Smith-embedded random walk associated to ^μ_a as specified above. In complete analogy with the definition of winding of a path in the infinite cylinder _2π, we have the following definition of winding on the cylinder _η. Let 0 ≤ t_1 < t_2, consider a path P :[t_1, t_2] →_η, and let :[t_1, t_2] →× [0, 1] be its lift to the covering space × [0, 1]. Then, we define the winding of P as follows _η(P) =((t_2)) - ((t_1))/η. We are now ready to state the main result of this subsection. Let X^μ_a and ^μ_a be as specified above. For N ∈, let [a_N]_0 ⊂ (0, 1) be an admissible sequence of height coordinates for the random walk X^μ_a as specified in Definition <ref>. Then it holds that _μ_ał[_ηł(|_[0, N])̊|ł{(X_n) = a_n }̊_n = 1^N]̊ = 0. Heuristically speaking, the proof of the above lemma goes as follows. First, we observe that, by definition of winding, it holds that _η(|_[0, N]) = ∑_j = 0^N-1_η(|_[j, j+1]). In particular, this implies that we can reduce the problem to studying the expected winding at each time step of the random walk. Suppose for a moment that N=1 in the lemma statement. Then we only need to prove that the random walk started uniformly at random from height a and conditioned to hit height a_1 at its first time step has zero expected winding. 
The reason why this is true is due to the fact that the re-randomization procedure that takes place inside each small segment ensures that the conditional hitting distribution of the segment at height a_1 is also uniformly distributed. To extend the argument to the general case N ∈, we just need to use Lemma <ref> in a suitable way. We divide the proof in two steps. Step 1: The main goal of this first step is to find an equivalent condition for the relation stated in the lemma to be true. To this end, we start by observing that, thanks to the Definition <ref> of winding, it is holds that _μ_ał[_ηł(|_[0, N])̊|ł{(X_n) = a_n}̊_n = 1^N]̊ = 1/η∑_j = 0^N-1_μ_ał[(_j+1) - (_j) |ł{(X_n) = a_n}̊_n = 1^N]̊. Now, we claim that _μ_ał[(_j+1) |ł{(X_n) = a_n}̊_n = 1^N]̊ = _μ_ał[(_j+1) |ł{(X_n) = a_n}̊_n = j^j+1]̊, ∀ j ∈ [N-1]_0. To show that this relation holds true, we first notice that the left-hand side of (<ref>) can be written as follows _μ_ał[(_j+1) |ł{(X_n) = a_n}̊_n = 1^N]̊ = ∑_x ∈^-1(a_j+1)_μ_ał[(_j+1) | X_j+1 = x ]̊_μ_ał(X_j+1 = x |ł{(X_n) = a_n}̊_n = 1^N)̊ = ∑_x ∈^-1(a_j+1)_μ_ał[(_j+1) | X_j+1 = x ]̊μ_a_j+1(x), where in the last equality we used Lemma <ref>. Similarly, one can also obtain the same expression for the right-hand side of (<ref>). To do so, one can simply condition on all possible height trajectories that join height a to height a_j in j steps, and then proceed in the same exact way as above. In a similar way, one can also prove that _μ_ał[(_j) |ł{(X_n) = a_n}̊_n = 1^N]̊ = _μ_ał[(_j) |(X_j) = a_j]̊, ∀ j ∈ [N-1]_0. Therefore, combining (<ref>) and (<ref>) and going back to (<ref>), we showed that the proof of the lemma follows if we prove that the following equality holds _μ_ał[(_j+1) |ł{(X_n) = a_n}̊_n = j^j+1]̊ = _μ_ał[(_j) |(X_j) = a_j]̊, ∀ j ∈ [N-1]_0. Now, for every j ∈ [N-1]_0, we let X^j be the random walk on the weighted graph (, c) started from a point in ^-1(a_j) sampled according to the probability measure μ_a_j. As one can easily verify, a consequence of Lemma <ref> is that the conditional law of X^μ_a, given (X_j^μ_a) = a_j, is equal to the law of X^j. In particular, this implies that (<ref>) is equivalent to the fact that _μ_a_jł[(^j_1) |(X^j_1) = a_j+1]̊ - _μ_a_jł[(^j_0)]̊ = 0, ∀ j ∈ [N-1]_0, where ^j denotes the lift of X^j started from a point in ^†_a_j, and ^j is the lifted Smith-embedded random walk associated to ^j. Let us also emphasize that here we are relying on the fact that the definition of winding does not depend on the particular choice of the lift. Step 2: The main goal of this second step is to prove that the equality in (<ref>) holds. To this end, we observe that, since every horizontal segment in the Smith diagram has length at most η, |(^j_1) - (^j_0)| ≤η. In particular, this implies that (<ref>) can be rewritten as follows _μ_a_jł[(σ_η(^j_1)) |(X^j_1) = a_j+1]̊ - _μ_a_jł[(σ_η(^j_0))]̊ = 0, ∀ j ∈ [N-1]_0, where we recall that σ_η is defined in (<ref>) and denotes the covering map of the cylinder _η. By construction, the random variable (σ_η(^j_0)) is uniformly distributed on the interval [0, η), and so the result follows if we prove that also the conditional law of (σ_η(^j_1)), given (X^j_1) = a_j+1, is uniformly distributed on the interval [0, η). 
To this end, let ⊂ [0, η) be a fixed measurable set, then we have that _μ_a_jł((σ_η(^j_1)) ∈|(X^j_1) = a_j+1)̊ = ∑_x ∈^-1(a_j+1)_μ_a_jł((σ_η(^j_1)) ∈| X^j_1 = x )̊_μ_a_jł(X^j_1 = x |(X^j_1) = a_j+1)̊ = ∑_x ∈^-1(a_j+1)_μ_a_jł((σ_η(^j_1)) ∈| X^j_1 = x )̊μ_a_j+1(x) = ∑_x ∈^-1(a_j+1)ł(∩((x)))̊|/(x)(x)/η = ł()̊/η, where in the second equality we used Lemma <ref>, while in the third equality we used the fact that the conditional law of σ_η(^j_1), given X_1^j = x, is uniformly distributed on the interval (x) and the definition of the probability measure μ_a_j+1. Since this prove (<ref>), which is equivalent to (<ref>), the proof of the lemma is concluded. § PROOF OF THE MAIN RESULT The main goal of this section is to prove Theorem <ref>. To this end, we fix a sequence {(^n, c^n, v_0^n, v_1^n)}_n ∈ of doubly marked finite weighted planar maps and we let {(^n, ĉ^n)}_n ∈ be the sequence of associated weighted dual planar graphs. We assume throughout this section that such sequences satisfy hypotheses <ref>, <ref>, <ref>. Before moving to the proof of the main theorem, we prove a couple of simple results which are immediate consequences of our assumptions and which will be useful later on. In particular, the next lemma is basically an analogue of assumptions <ref> and <ref> in the setting of the universal cover. For each n ∈, view the embedded random walk on the lifted weighted graph (^†, n, c^†, n), stopped when it traverses an unbounded edge, as a continuous curve in ^2 obtained by piecewise linear interpolation at constant speed. For each R > 0 and for any z ∈× [-R, R], the law of the random walk on (^†, n, c^†, n) started from the vertex _z^n ∈^†, n nearest to z weakly converges as n →∞ to the law of the Brownian motion in ^2 started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈× [-R, R]. Furthermore, the same result holds for the random walk on the sequence of lifted weighted dual graphs {(^†, n, ĉ^†, n)}_n ∈ N. Obviously, we can just prove the result for the random walk on the sequence of primal graphs as the result for the random walk on the sequence of dual graphs can be proved in the same exact way. Fix R > 0 and z ∈× [-R, R], and let ^n, z be the continuous time random walk on (^†, n, c^†, n) started from _z^n ∈^†, n as specified in the lemma statement. We let τ_0 := 0 and for k ∈_0 we define inductively τ_k+1 := infł{t ≥τ_k : (X_t) ∉[(X_τ_k) - π, (X_τ_k) + π)}̊. For each k ∈_0, we observe that the universal covering map σ_2π: ^2 →_2π is a biholomorphism when restricted to [(X_τ_k) - π, (X_τ_k) + π). Moreover, by assumption <ref>, we know that the law of σ_2π(^n, z) weakly converges as n →∞ to the law of the Brownian motion in _2π with respect to the metric specified in Subsection <ref>. Therefore, since Brownian motion is conformally invariant and putting together the previous facts, we obtain that the law of the random walk ^n, z weakly converges as n →∞ to the law of the Brownian motion in ^2 with respect to the metric on the time interval [τ_k, τ_k+1], for each k ∈_0. Hence, the desired result follows. For each n ∈, we recall that ^n denotes the set of faces of the graph ^n. The next lemma states that, thanks to the invariance principle assumption on the sequence of weighted graphs, the maximal diameter of the faces in ^n which intersect a compact subset of _2π is of order o_n(1). Let R > 0, and, for any n ∈, consider the set ^n(R) of faces in ^n that intersect /2π× [-R, R]. 
Then, for any > 0 and for any n>n(R, ) large enough, it holds that (f) ≤, ∀ f ∈^n(R). The same result also holds for the sequence of dual graphs, i.e., with ^n in place of ^n. We only prove the result for the sequence of primal graphs, as the result for the sequence of dual graphs can be proved in exactly the same way. We proceed by contradiction, assuming that there exists > 0 such that, for any n ∈, there exists a face f_n ∈^n(R) for which (f_n) >. We notice that each set f_n ∩/2π× [-2R, 2R] is compact and contains a path P_n with (P_n) >. By compactness with respect to the Hausdorff distance , we can assume, by possibly passing to a subsequence, that lim_n →∞(P_n, P) = 0 for some compact and connected set P ⊂/2π× [-2R, 2R]. Now, choose a rectangle such that P disconnects the left and right sides of , or the top and bottom sides of . For any n>n(R, , ) large enough, the path P_n also disconnects the rectangle in the same way as P. Therefore, for any x ∈/2π× [-R, R], the random walk X^n, x on the weighted graph (^n, c^n) started from x cannot cross the rectangle from left to right, or from top to bottom. However, the Brownian motion B^x on _2π started from x has positive probability of doing so. This contradicts assumption <ref>, and so the desired result follows. We are now ready to start the proof of Theorem <ref>. We observe that such a result admits a natural version on the sequence of lifted weighted graphs. Indeed, in order to prove it, we will work in the setting of the universal cover. To be more precise, we will first study in Subsection <ref> the sequence of lifted height coordinate functions, and then, in Subsection <ref>, we will study the sequence of lifted width coordinate functions. Once this is done, we will put everything together and conclude the proof of Theorem <ref> in Subsection <ref>. §.§ Height coordinate For each n ∈, consider the lifted height coordinate function ^†_n:^†, n→ [0,1], as defined in (<ref>). The main goal of this subsection is to show that there exists an affine transformation of the function ^†_n that is asymptotically close, uniformly on lifts of compact subsets of the infinite cylinder _2π, to the height coordinate function ↦() in the a priori lifted embedded graph ^†, n, as n →∞. More precisely, we have the following result. There exist two sequences {b^_n}_n ∈, {c^_n}_n ∈⊂ such that for all R > 1, δ∈ (0, 1), and for any n > n(R, δ) large enough, it holds that ł|c_n^^†_n() + b_n^ - ()|̊≤δ, ∀∈^†, n(× [-R, R]). The proof of this proposition is postponed to Subsection <ref>. As we will see, the proof follows easily from the harmonicity of the height coordinate function together with Lemma <ref> below. §.§.§ Setup of the proof For each n ∈, consider the metric graph ^n associated to ^n, and let ^n : ^n → [0, 1] be the extended height coordinate function as specified in Remark <ref>. Given a value S ∈, we define the set V^n_S := ł{x ∈^n : (x) = S}̊, and we let a̅_S^n :=sup{a ∈ (0, 1) : ^-1_n(a) ∩ V^n_S ≠∅}, a_S^n := inf{a ∈ (0, 1) : ^-1_n(a) ∩ V^n_S ≠∅}. We fix throughout R > 1 and δ∈ (0, 1) as in the proposition statement, and we let R' := R/δ. We consider the set W_R, δ^n := {V^n_R'∪ V^n_-R'}∪{^-1_n(a̅_R'^n) ∪^-1_n(a_-R'^n)}⊂^n. For each n ∈, by possibly locally modifying the a priori embedding of the graph ^n in the infinite cylinder _2π, we can assume without loss of generality that each edge in ^n crosses the circles at height R' and -R' at most finitely many times.
In particular, this implies that we can assume that the set W_R, δ^n contains at most finitely many points, and therefore, by Lemma <ref>, we can assume, without any loss of generality, that ^n contains all the points in W^n_R, δ. In what follows, in order to lighten up notation, we adopt the following notational convention V̂_S≡V̂^n_S, a̅≡a̅^n_R', a≡a^n_-R'. Furthermore, we denote by V^†_S the lift to the universal cover of V_S, and we write ^†, n(S) (resp. ^n(S)) as a shorthand for ^†, n(× [-S, S]) (resp. ^n(/2π× [-S, S])). We refer to Figure <ref> for an illustration of the sets involved in the proof of Proposition <ref>. Random walk notation. For ∈^†, n(R'), we consider the continuous time random walk {^n, _t}_t ≥ 0 on the lifted weighted graph (^†, n, c^†, n) started from ∈^†, n(R'). We recall that the continuous path of such a random walk is generated by piecewise linear interpolation at constant speed. We consider the following stopping times σ_ := infł{t ≥ 0 : ^n, _t ∈ V_R'^†∪ V_-R'^†}̊, τ_ := infł{t ≥ 0 : ^n, _t ∈ (^†_n)^-1(a̅) ∪ (^†_n)^-1(a)}̊, and we observe that, thanks to the definitions of a̅ and a, it holds that τ_≥σ_, for all ∈^†, n(R'). Looking at Figure <ref>, the stopping time σ_ accounts for the first time at which the random walk hits one of the blue vertices, while the stopping time τ_ accounts for the first time at which the random walk hits one of the green vertices. §.§.§ Some auxiliary results We can now state the key lemma for the proof of Proposition <ref>. The proof of the below result is postponed until the end of this subsection. For any n > n(R, δ) large enough, there exists a real number b'_n = b_n'(R, δ) such that ł|2 R' _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ - b'_n - ()|̊≤δ, ∀∈^†, n(R). Similarly, for any n > n(R, δ) large enough, there exists a real number b”_n = b_n”(R, δ) such that ł|2 R' _ł(^n_τ_∈σ_2π^-1(_n^-1(a)))̊+ b”_n + ()|̊≤δ, ∀∈^†, n(R). Roughly speaking, the first estimate in the previous lemma states that, for n > n(R, δ) large enough, the probability that the lifted random walk started inside ^†, n(R) hits the set (_n^†)^-1(a̅) before hitting (_n^†)^-1(a) depends linearly on the height coordinate of the starting point of the walk on the a priori embedding. In order to prove this result, we need to rule out the possibility that the preimage of a horizontal line on the Smith embedding has large vertical fluctuations (see Figure <ref>). To do so, we use the invariance principle assumption on the sequence of primal graphs, and more precisely we will follow the following steps. * We start by proving that the probability that the lifted random walk started inside ^†, n(R) hits the set V^†_R' before hitting V^†_-R' depends linearly on the height coordinate of the starting point of the walk on the a priori embedding. This follows easily thanks to the invariance principle assumption and it is the content of Lemma <ref> below. * We then prove that the probability that the random walk started from V^†_-R' has probability of order 1/R' to hit (_n^†)^-1(a̅) before hitting (_n^†)^-1(a). Once again, this is an easy consequence of the invariance principle assumption, and it is the content of Lemma <ref> below. * Finally, roughly speaking, in order to conclude, we need to improve the bound on the probability appearing in the previous step from order 1/R' to order o_n(1/R'). This is done by using Lemma <ref> together with the invariance principle assumption. 
Indeed, as it will be more clear in the proof of Lemma <ref>, it is sufficient to estimate the probability that a random walk started inside ^n(R) does not disconnect V_R ∪ V_-R from V_R'∪ V_-R' before hitting the latter set. In Lemma <ref> below, we will see that such a probability is of order R'/R, and this will be enough to conclude. Before proceeding, we observe that, for all ∈^†, n(R), it holds that _ł(^n_τ_∈σ_2π^-1(_n^-1(a)))̊ = 1 - _ł(^n_τ_∈ (_n^†)^-1(a̅))̊, hence, from now on, we can just focus on the first estimate in the statement of Lemma <ref>. We can now state and prove the technical lemmas mentioned above. We start with the following lemma on which we study the probability that the lifted random walk started inside ^†, n(R) hits the set V^†_R' before hitting V^†_-R'. For any n > n(R, δ) large enough, it holds that ł| 2 R' _ł(^n_σ_∈ V_R'^†)̊ - R' - ()|̊≤δ, ∀∈^†, n(R). For n ∈, fix a vertex ∈^†, n(R), consider a planar Brownian motion B^ started from , and define the stopping time σ_B, := infł{t ≥ 0 : ł|(B^_t)|̊ = R'}̊. Then, by assumption <ref>, for any n > n(R, δ) large enough, we have that ł|_(^n_σ_∈σ_2π^-1(V_R')) - _ł((B_σ_B, ) = R')̊|̊≤δ/2 R', ∀∈^†, n(R). Since (B^) is just a linear Brownian motion started from (), thanks to the gambler's ruin formula, it holds that _ł((B_σ_B, ) = R')̊ = R' + ()/2 R' , from which the conclusion follows. We can now move to the second lemma which gives an estimate for the probability that the random walk started from V^†_-R' hits (_n^†)^-1(a̅) before hitting (_n^†)^-1(a). For any n > n(R, δ) large enough, it holds that _ł(^n_τ_∈ (_n^†)^-1(a̅))̊≲1/R', ∀∈ V_-R'^†, where the implicit constant is independent of everything else. We start by noticing that, for all ∈ V_-R'^†, it holds that _ł(^n_τ_∈ (_n^†)^-1(a̅))̊≤_ł(σ_2π(^n|_[0, τ_]) does not wind around the cylinder below height -R')̊, where we recall that σ_2π is defined in (<ref>) and denotes the covering map of the infinite cylinder _2π. The above inequality is due to the fact that, if σ_2π(^n|_[0, τ_]) winds around the cylinder below height -R', then, by definition of a, it has to hit the set ^-1_n(a). We can now exploit assumption <ref> and find the corresponding upper bound for the Brownian motion. More precisely, let B^ be a planar Brownian motion started from ∈σ_2π^-1(V_-R') and define the stopping time τ_B, := infł{t ≥ 0 : (B^_t) = -2R' or (B^_t) = R'}̊. Then, for any n > n(R, δ) large enough, we have that _ł(σ_2π(^n|_[0, τ_]) does not wind around the cylinder below height -R')̊ ≤ 2 _ł(σ_2π(B|_[0, τ_B, ]) does not wind around the cylinder below height -R')̊. Therefore, to conclude, it is sufficient to find a uniform upper bound for the quantity on the right-hand side of the above expression. This is done in Lemma <ref> from which the conclusion follows. In order to prove Lemma <ref>, we also need the following lemma which provides an estimate for the probability that a random walk started inside ^n(R) disconnects V_R ∪ V_-R from V_R'∪ V_-R' before hitting the latter set. For any n > n(R, δ) large enough, it holds that _ł(σ_2π(^n|_[0, σ_]) does not disconnect V_R∪ V_-R from V_R'∪ V_-R')̊≲R/R', ∀∈^†, n(R), where the implicit constant is independent of everything else. For ∈^†, n(R), let B^ be a planar Brownian motion started from , and define the stopping time σ_B, : = infł{t ≥ 0 : |(B_t^)| = R'}̊. 
By assumption <ref>, we know that for any n > n(R, δ) large enough, it holds that _ł(σ_2π(^n|_[0, σ_]) does not disconnect V_R∪ V_-R from V_R'∪ V_-R')̊ ≤ 2 _ł(σ_2π(B|_[0, σ_B, ]) does not disconnect /2π×{-R, R} from /×{-R', R'})̊. Therefore, it is sufficient to find a uniform upper bound for the quantity on the right-hand side of the above expression. This is the content of Lemma <ref> from which the desired result follows. We are now ready to give a proof of Lemma <ref>. As we have already remarked, this will be a consequence of the previous three lemmas and of Lemma <ref>. We start by defining the function ^†_n : ^†, n(R') → as follows ^†_n() : = _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ - _ł(^n_σ_∈ V_R'^†)̊, ∀∈^†, n(R'), so that, we can write _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ = _ł(^n_σ_∈ V_R'^†)̊ + ^†_n(), ∀∈^†, n(R'). We now observe that, thanks to Lemma <ref>, for any n > n(R, δ) large enough, it holds that ł|2 R' _ł(^n_σ_∈ V_R'^†)̊ - R' - ()|̊≤δ, ∀∈^†, n(R). Therefore, it is sufficient to study the function ^†_n appearing in (<ref>). To this end, we consider the functions ^†_n : ^†, n(R') → [0,1] and ^†_n : ^†, n(R') → [0,1] defined as follows ^†_n() := _ł(^n_σ_∈ V_-R'^†, ^n_τ_∈ (_n^†)^-1(a̅))̊, ^†_n() := _ł(^n_σ_∈ V_R'^†, ^n_τ_∈ (_n^†)^-1(a))̊. In particular, as one can easily check, it holds that ł|_n^†()|̊≤_n^†() + _n^†(), ∀∈^†, n(R'), and so, we can reduce the problem to the study of the functions ^†_n and ^†_n. We will only study the function ^†_n as the function ^†_n can be treated similarly. Thanks to the strong Markov property of the random walk ^n,, we have that ^†_n() = _ł[^†_n(^n_σ_)]̊, ∀∈^†, n(R'). Therefore, for , ∈^†, n(R), it holds that ł|^†_n() - ^†_n()|̊≤supł{ł|^†_n()̌|̊ : ∈̌V_-R'^†}̊ł(^n, _σ_, ^n, _σ_)̊, where denotes the total variation distance. Hence, it is sufficient to find an upper bound for the two terms on the right-hand side of (<ref>). We treat the two factors separately. * In order to bound the first factor, we just need to bound uniformly on ∈̌V_-R'^† the probability that a random walk on (^†, n, c^†, n) started from $̌ hits(_n^†)^-1(a̅)before hitting(_n^†)^-1(a). This is exactly the content of Lemma <ref> from which we can deduce that, for alln > n(R, δ)large enough, it holds that supł{ł|^†_n()̌|̊ : ∈̌V_-R'^†}̊≲1/R', where the implicit constant is universal. * In order to bound the second factor, we can use Lemma <ref>. More precisely, it is sufficient to estimate the probability thatσ_2π(^n,|_[0, σ_])disconnectsV_R ∪V_-RfromV_R' ∪V_-R'. This is exactly the content of Lemma <ref> which guarantees that, for alln > n(R, δ)large enough and for all∈^†, n(R), it holds that _ł(σ_2π(^n|_[0, σ_]) does not disconnect V_R∪ V_-R from V_R'∪ V_-R')̊≲R/R', where the implicit constant is independent of everything else. Therefore, this fact together with Lemma <ref> imply that ł(^n, _σ_, ^n, _σ_)̊≲R/R', ∀, ∈^†, n(R). Therefore, putting together the previous two bullet points and going back to (<ref>), we get that for everyn > n(R, δ)large enough, it holds that ł|^†_n() - ^†_n()|̊≲R/R'^2, ∀, ∈^†, n(R), Furthermore, the same uniform bound can be also obtained for the function^†_n, but we omit the details since the argument is similar. Summing up, we obtained that, for alln > n(R, δ)large enough, it holds that ł|^†_n() - ^†_n()|̊≲R/R'^2, ∀, ∈^†, n(R). Hence, to conclude, we can simply proceed as follows. For everyn > n(R, δ)large enough, fix an arbitrary vertex∈^†, n(R). Then, recalling that by definitionR' = R/δ, it holds that 2 R' |^†_n() -^†_n()| ≲δ, ∀∈^†, n(R). 
Therefore, thanks to (<ref>) and estimate (<ref>), we find that, for anyn > n(R, δ)large enough, it holds that ł|2 R' _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ - b'_n - ()|̊≤δ, ∀∈^†, n(R), where b'_n := 2 R' ^†_n() + R' . §.§.§ Proof of Proposition <ref> We are now ready to prove Proposition <ref>. In what follows, we will make use of some notation introduced in the preceding subsection. In particular, we will consider the stopping timesσ_x,τ_xdefined in (<ref>), and the quantities defined in the introduction of the preceding subsection. We will also adopt the same notational conventions of the previous subsection in order to lighten up some notation. We divide the proof in two steps. Step 1. In this first step we show that, for fixed R > 1 and δ∈ (0, 1), for any n > n(R, δ) large enough, we can find real numbers b^R, δ_n and c^R, δ_n such that the conclusion of the proposition holds. To this end, let R' := R/δ, fix ∈^†, n(R) and let ^n, be the random walk on (^†, n, c^†, n) started from . Thanks to the harmonicity of the height coordinate function ^†_n and to the optional stopping theorem, we have that ^†_n() = a̅_ł(^n_τ_∈ (_n^†)^-1(a̅))̊ + a_ł(^n_τ_∈ (_n^†)^-1(a))̊, ∀∈^†, n(R). Therefore, the problem has been reduced to proving that the probabilities appearing in the previous expressions are approximately affine transformations of the height coordinate function in the a priori embedding. By Lemma <ref>, for all n > n(R, δ) large enough, there exist real numbers b_n' = b_n'(R, δ) and b_n” = b_n”(R, δ) for which, for all ∈^†, n(R), it holds that ł|2 R' ^†_n() - a̅ (b_n' + ()) + a (b_n” + ()|̊ ≤a̅ł|2 R' _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ - b_n' - ()|̊ + ał| 2 R' _ł(^n_τ_∈ (_n^†)^-1(a))̊ + b_n” + ()|̊ ≤ł|2 R' _ł(^n_τ_∈ (_n^†)^-1(a̅))̊ - b_n' - ()|̊ + ł| 2 R' _ł(^n_τ_∈ (_n^†)^-1(a))̊ + b_n” + ()|̊ ≤δ. Therefore, rearranging the terms in the above expression and letting c^R, δ_n :=2R'/|a̅ -a|, b^R, δ_n := ab_n”-a̅b_n'/|a̅ -a|, we find that, for any n>n(R, δ) large enough, it holds that ł|c^R, δ_n ^†_n() + b^R, δ_n - ()|̊≤δ, ∀∈^†, n(R). Step 2. In this second step, we show how we can define real sequences {b_n}_n ∈ and {c_n}_n ∈ independent of R, δ such that the conclusion of the proposition holds. To this end, consider an increasing sequence {R_k}_k ∈⊂ [1, ∞) and a decreasing sequence {δ_k}_k ∈⊂ (0, 1) such that R_k →∞ and δ_k → 0, as k →∞. Then, thanks to the previous step, we know that, for all k ∈, it exists n_k = n_k(R_k, δ_k) ∈ such that, for all n > n_k, there exist real numbers c_n^R_k, δ_k, b_n^R_k, δ_k∈ such that ł|c^R_k, δ_k_n ^†_n() + b^R_k, δ_k_n - ()|̊≤δ_k, ∀∈^†, n(R_k). Without any loss of generality, we can assume that the sequence {n_k}_k ∈ is increasing. Then, for all n ∈ [0, n_1) ∩, we let c^_n := 1, b^_n := 1, and for all k ∈ and n ∈ [n_k, n_k+1) ∩, we set c^_n := c^R_k, δ_k_n, b^_n := b^R_k, δ_k_n. Therefore, if we fix R > 1 and δ∈ (0, 1), for all n > n(R, δ) large enough, it holds that ł|c^_n ^†_n() + b^_n - ()|̊≤δ, ∀∈^†, n(R), which concludes the proof. §.§ Width coordinate For eachn ∈, consider the lifted width coordinate function ^†_n:^†, n→ , as defined in (<ref>). The main goal of this subsection is to show that there exists an affine transformation of the function^†_nthat is asymptotically close, uniformly on lifts of compact subsets of the infinite cylinder_2π, to the width coordinate function↦()in the a priori lifted embedded graph^†, n, asn →∞. More precisely, we have the following result. 
There exists a sequence {b^_n}_n ∈⊂ such that for all R > 1, δ∈ (0, 1), and for any n > n(R, δ) large enough, it holds that |2π/η_n^†_n() + b^_n - ()| ≤δ, ∀∈^†, n(× [-R, R]), where we recall that η_n denotes the strength of the flow induced by _n as defined in (<ref>). The proof of this proposition is postponed to Subsection <ref>. As we will see, the proof is based on Lemma <ref> below and the following two facts: (a) the harmonicity of the lifted width coordinate function ^†_n; (b) the invariance principle assumption on the sequence of dual graphs. §.§.§ Setup of the proof For each n ∈, consider the metric graphs ^n and ^n associated to ^n and ^n, respectively. Let _n : ^n → [0, 1] be the extended height coordinate function as specified in Remark <ref>, and _n : ^n → be the extended width coordinate function as specified in Remark <ref>. Given a value S ∈, we define the set V̂^n_S := ł{x̂∈^n : (x̂) = S}̊. Given a value a ∈ (0, 1), we recall that the sets _a^†, ^†_a, _a^†, _a^† are all defined in Subsection <ref>. We also recall that ^n,a_0 denotes the vertex in ^†, n whose real coordinate is nearest to 0, and we refer to (<ref>) and (<ref>) for the definitions of the sets ^†, n_a and ^†, n_a, respectively. We fix throughout R > 1 and δ∈ (0, 1) as in the proposition statement, and we let R' := R/δ. For each n ∈, we consider a̅_R'^n and a_-R'^n as defined in (<ref>). Moreover, thanks to Proposition <ref>, for any n > n(R, δ) large enough, we can choose a value a^n_R' such that the set _n^-1(a^n_R') is fully contained in the infinite strip × [-1,1]. In what follows, in order to lighten the notation, we adopt the following notational convention V̂_S≡V̂^n_S, a̅≡a̅^n_R', a≡a^n_-R', a ≡ a^n_R' . Furthermore, we denote by V̂^†_S the lift to the universal cover of V̂_S, and we write ^†, n(S) (resp. ^n(S)) as a shorthand for ^†, n(× [-S, S]) (resp. ^n(/2π× [-S, S])). We also fix the additive constant of the lifted width coordinate function ^†_n by setting it to zero at the point ^n, a_0. We observe that changing the base vertex of the lifted width coordinate function only has the effect of changing the value of b_n^ appearing in the statement of Proposition <ref>. Similarly to what we discussed in the preceding subsection, by Lemma <ref> and Remark <ref>, we can assume, without loss of generality, that ^n contains all the points in the set _n^-1(a̅) ∪_n^-1(a) ∪_n^-1(a) ⊂^n, and that ^n contains all the points in the set V̂_R'-1∪V̂_-R'+ 1⊂^n. This is allowed since, as in the case of the height coordinate function, by possibly locally modifying the a priori embedding of the dual graph ^n in _2π, we can assume that each edge in ^n crosses the circles at height R'-1 and -R'+1 at most finitely many times. We refer to Figure <ref> for an illustration of the sets involved in the proof of Proposition <ref>. Random walk notation. For ∈^†, n(R'-1), we consider the continuous time random walk {_t^n, }_t ≥ 0 on the lifted weighted dual graph (^†, n, ĉ^†, n) started from . We recall that the continuous path of such a random walk is simply generated by piecewise linear interpolation at constant speed. We consider the following stopping times σ_ := infł{t ≥ 0 : ^n, _t ∈V̂^†_R'-1∪V̂^†_-R'+1}̊, τ_ := infł{t ≥ 0 : ^n, _t ∈^†, n_a̅∪^†, n_a}̊. As we will observe more precisely below, for n > n(R, δ) large enough, thanks to Proposition <ref>, we have that ^†, n_a̅⊂× [R'-1, R'+1], ^†, n_a⊂× [-R'-1, -R'+1], and so, it holds that τ_≥σ_, for all ∈^†, n(R'-1).
Looking at Figure <ref>, the stopping time σ_ accounts for the first time at which the random walk hits one of the red dual vertices, while the stopping time τ_ accounts for the first time at which the random walk hits one of the blue dual vertices at the top or bottom of the figure. In what follows, we also need to consider random walks on the sequence of primal lifted weighted graphs. To be precise, let a be as specified in the introduction of this section (we recall that here, a is a shorthand for a_R'^n) and consider the probability measure μ_a on the set ^-1(a) as specified in Definition <ref>. We let X^n, μ_a be the random walk on the weighted graph (^n, c^n) started from a point in ^-1_n(a) sampled according to μ_a. We also consider the associated continuous time lifted random walk {^n, μ_a_t}_t ≥0 on (^†, n, c^†, n) started from a point in ^†, n_a sampled according to μ_a. We define the following stopping times θ_+ := infł{t ≥ 0 : ^n, μ_a_t ∈ (_n^†)^-1(a̅)}̊ , θ_- := infł{t ≥ 0 : ^n, μ_a_t ∈ (_n^†)^-1(a)}̊. Furthermore, we will also need to consider the lifted Smith-embedded random walk {_t^n ,μ_a}_t ≥0 associated to ^n ,μ_a as specified in Subsection <ref>. §.§.§ Some auxiliary results We can now state the key lemma for the proof of Proposition <ref>. The proof of the result below is postponed until the end of this subsection. For any n > n(R, δ) large enough, there exists a finite constant b'_n ∈ such that |_[(^n_τ_x̂)/2 π - _n^†(^n_τ_x̂) + b'_n/η_n] | ≲ 1, ∀∈^†, n(R'-1), where the implicit constant is universal, and η_n denotes the strength of the flow induced by _n as defined in (<ref>). We now explain how the proof of Lemma <ref> is structured. For each n ∈, we consider the map ^†, rand_n, defined in Definition <ref>, that assigns to each vertex ∈^†, n the random variable _n^†, rand() which is uniformly distributed on the horizontal segment _n^†(). The proof of Lemma <ref> can be divided into four main steps. * We start by considering the continuous time lifted random walk ^n ,μ_a. In Lemma <ref>, we prove that, for M ∈, the conditional probability, given θ_+ < θ_-, of the event |(^n, μ_a_θ_+) - (^n, μ_a_0)| > M R' decays exponentially in M. Using this result, we then prove in Lemma <ref> that the conditional expectation of (^n, μ_a_θ_+)- (^n, μ_a_0), given θ_+ < θ_-, is of order one. Both the proofs of Lemmas <ref> and <ref> are based on the invariance principle assumption on the sequence of primal graphs. * We then consider the Smith-embedded random walk ^n ,μ_a associated with ^n ,μ_a. In Lemma <ref>, we prove that the conditional expectation of (^n, μ_a_θ_+), given θ_+ < θ_-, is of order η_n. This result is essentially a consequence of Lemma <ref>, which guarantees that the expected horizontal winding of the Smith-embedded random walk is equal to zero. * In Lemma <ref>, using the relation between the maps ^†, rand_n and ^†_n, together with the results proved in the previous steps, we prove that there exists a finite constant b_n' ∈ such that the values of the width coordinate function on ^†, n_a̅, shifted by b'_n, are of order η_n. The key input for such a result comes from Lemma <ref>, in which we prove that the probability that the random walk ^n, μ_a travels a large horizontal distance in a “narrow horizontal tube” decays exponentially fast. The proof of this fact follows from the invariance principle assumption on the sequence of primal graphs. * Finally, in Lemma <ref>, we state the dual analogue of Lemma <ref>. This fact and the periodicity of the Smith diagram will allow us to deduce Lemma <ref>.
Let us emphasize that all the results explained above hold also with the role ofθ_+andθ_-interchanged. In particular, the same result stated in point (c) holds with the set^†, n_a̅replaced by^†, n_a. Before proceeding with the precise statements and the proofs of the above mentioned lemmas, we observe that, by Proposition <ref>, for any ∈ (0, 1) and for any n > n(R, δ, ) large enough, it holds that (_n^†)^-1(a̅) ⊂× [R' - , R' + ], (_n^†)^-1(a) ⊂× [-R'-, -R'+]. Furthermore, by definitions of ^†, n_a̅, ^†, n_a and Lemma <ref>, we have that, for any ∈ (0, 1) and for any n > n(R, δ, ) large enough, it holds that ^†, n_a̅⊂× [R' - , R' + ], ^†, n_a⊂× [-R'-, -R'+]. These facts will be of key importance in the remaining part of this subsection and they will be used several times. We can now proceed to state and prove the technical lemmas mentioned above. We start with the following lemma which states that, forM ∈, the conditional probability of the event|(^n, μ_a_θ_+) - (^n, μ_a_0)| > M R', givenθ_+ < θ_-, decays exponentially inM. Heuristically speaking, this is due to the fact that, after each time that the random walk^n, μ_atravels horizontal distanceR', there is a positive chance of hitting the set(_n^†)^-1(a̅). There exists a universal constant C > 0 such that, for any n > n(R, δ) large enough, it holds that _μ_ał(|(^n_θ_+) - (^n_0)|> M R' |θ_+ < θ_-)̊≲exp(-C M), ∀ M ∈. where the implicit constant is independent of everything else. The same conclusion holds with the role of θ_+ and θ_- interchanged. As observed in Remark <ref>, for any n > n(R, δ) large enough, we can assume that (_n^†)^-1(a̅) ⊂× [R', R' + 1], (_n^†)^-1(a) ⊂× [- R'-1, - R'], (_n^†)^-1(a) ⊂× [-1, 1]. Now, letting θ := θ_+∧θ_- and M ∈, we can write _μ_ał(|(^n_θ_+) - (^n_0)|> M R' |θ_+ < θ_-)̊≤_μ_ał(|(^n_θ) - (^n_0)|> M R')̊/_μ_ał(θ_+ < θ_-)̊. We note that, thanks to assumption <ref> and (<ref>), for any n >n(R, δ) large enough, the probability on the denominator can be lower bounded by a constant independent of everything else. Therefore, we can just focus on the probability appearing on the numerator. To this end, let ρ_0 : = 0, and for k ∈_0 define inductively ρ_k+1 := infł{t ≥ρ_k : ł|(^n , μ_a_t) - (^n , μ_a_ρ_k)|̊≥ R'}̊. Moreover, for k ∈_0, consider the event A^n_k := ł{^n, μ_a|_[ρ_k, ρ_k+1]⊄׳[(^n, μ_a_ρ_k) - 3R', (^n, μ_a_ρ_k) + 3R']̊}̊. We observe that, thanks to the strong Markov property of the random walk, the events {A^n_k}_k ∈_0 are independent and identically distributed. Moreover, thanks to assumption <ref>, to estimates (<ref>), and to well-known properties of Brownian motion, for any n ∈ large enough, the event A^n_0 happens with uniformly positive probability p independent of everything else. Therefore, we have that _μ_ał(|(^n_θ_+) - (^n_0)|> M R')̊≤_μ_a(⋂_i = 0^M A̅^̅n̅_̅i̅) = (1-p)^M, from which the desired result follows. Finally, the same argument also applies with the role of θ_+ and θ_- interchanged. In the next lemma, we use Lemma <ref> to prove that the conditional expectation of(^n, μ_a_θ_+)- (^n, μ_a_0), givenθ_+ < θ_-, is of order one. For any n > n(R, δ) large enough, it holds that ł|_μ_ał[(^n_θ_+) - (^n_0) |θ_+ < θ_-]̊|̊≲ 1, where the implicit constant is universal. The same result holds with the role of θ_+ and θ_- interchanged. The proof of this result is based on assumption <ref>. 
More precisely, let B^μ_a be a planar Brownian motion started from a point in ^†, n_a sampled according to the probability measure μ_a, and consider the following stopping times θ_B,+ := infł{t ≥ 0 : (B_t^μ_a) = R'}̊, θ_B,- := infł{t ≥ 0 : (B_t^μ_a) = -R'}̊. Since (B^μ_a) and (B^μ_a) are independent, and since the stopping times θ_B, + and θ_B,- only depend on (B^μ_a), thanks to well-known properties of Brownian motion, we have that _μ_ał[(B_θ_B, +) - (B_0) |θ_B, + < θ_B, -]̊ = 0. Furthermore, as observed in Remark <ref>, for any ∈ (0, 1) and for any n > n(R, δ, ) large enough, it holds that (_n^†)^-1(a̅) ⊂× [R', R' + ], (_n^†)^-1(a) ⊂× [- R'-, - R']. In particular, this fact and Lemma <ref> imply that the set (_n^†)^-1(a̅) (resp. (_n^†)^-1(a)) converges in the Hausdorff metric to the horizontal line ×{R'} (resp. ×{-R'}). Therefore, thanks to assumption <ref>, the following weak convergence of laws holds lim_n →∞Lawł((^n, μ_a_θ_+) - (X^n, μ_a_0) |θ_+ < θ_-)̊ = Lawł((B^μ_a_θ_B, +) - (B^μ_a_0) |θ_B, + < θ_B, -)̊. Hence, thanks to Lemma <ref> and Vitali's convergence theorem, for any n > n(R, δ) large enough, it holds that ł|_μ_ał[(^n_θ_+) - (^n_0) |θ_+ < θ_-]̊ - _μ_ał[(B_θ_B, +) - (B_0) |θ_B, + < θ_B, -]̊|̊≤ 1. Hence, putting together (<ref>) and (<ref>), we obtain the desired result. Finally, the same argument also applies with the role of θ_+ and θ_- interchanged. In the next lemma, we see how we can use Lemma <ref> to prove that that the conditional expectation of(^n, μ_a_θ_+), givenθ_+ < θ_-, is of orderη_n. For any n ∈, it holds that ł|_μ_ał[(^n_θ_+) |θ_+ < θ_-]̊|̊≤η_n, and the same with the role of θ_+ and θ_- interchanged. Since we are assuming that the base vertex ^n, a_0 of the lifted width coordinate function ^†_n belongs to the set ^†, n_a, then, thanks to Lemma <ref>, it holds that ł| (^†, rand_n()) |̊≤η_n, ∀∈^†, n_a, uniformly over all possible realizations of ^†, rand_n(). In particular, since the embedded random walk ^n, μ_a is started from a point in the set ^†, n_a, then it follows that ł| (^n, μ_a_0) |̊≤η_n, uniformly over all possible realizations of ^n, μ_a_0. Now, we would like to apply Lemma <ref> in order to conclude. However, we notice that we cannot directly apply such a result since, a priori, it does not hold that ∪_x ∈^n_n^-1(_n(x)) ⊆^n. In order to overcome this issue, we could consider the weighted graph associated to (^n, c^n) and ∪_x ∈^n_n^-1(_n(x)) as specified in Definition <ref>. We could then apply Lemma <ref> to the random walk on this new weighted graph and then transfer the result to the original weighted graph by means of Lemma <ref>. In order to lighten up the proof, we will assume directly that ∪_x ∈^n_n^-1(_n(x)) = ^n. Since the stopping time θ_+ is almost surely finite, we can proceed as follows 1 _μ_ał[_η_n(^n|_[0, θ_+]) |θ_+ < θ_-]̊ = 1/_μ_a(θ_+ < θ_-)_μ_a[∑_N ∈_η_n(^n|_[0, θ_+]) 1_{θ_+ < θ_-, θ_+ = N}] = 1/_μ_a(θ_+ < θ_-)∑_N ∈_μ_ał[_η_n(^n|_[0, θ_+]) |θ_- > N, θ_+ = N]̊_μ_a(θ_- > N, θ_+ = N). In order to pass from the first line to the second line of the above expression, we used the fact that ł[ł|_η_n(^n, μ_a|_[0, θ_+])|̊]̊ < ∞, and Fubini's theorem. The reason why (<ref>) holds is an immediate consequence of Lemmas <ref> and <ref>. In order to conclude, it is sufficient to prove that the expectation in each summand of the sum appearing in (<ref>) is equal to zero. 
To this end, fix N ∈ and consider a sequence of admissible height coordinates [a_N]_0 ⊂ (0, 1) for the random walk X^μ_a, as specified in Definition <ref>, such that a_m > a for all m ∈ [N]_0 and a_N = a̅. Thanks, to Lemma <ref>, we have that _μ_ał[_η_n(^n|_[0, N]) |{_n(X_m) = a_m}_m = 1^N]̊ = 0, uniformly over all such sequences of height coordinates. Hence, this is sufficient to conclude that _μ_ał[_η_n(^n|_[0, θ_+]) |θ_- > N, θ_+ = N]̊ = 0. Therefore, the conclusion follows thanks to (<ref>), and to the fact that (<ref>) and (<ref>) imply that _μ_a[_η_n(^n|_[0, θ_+]) |θ_+ < θ_-] = 0. Finally, we observe that the same argument also applies with the role of θ_+ and θ_- interchanged. In order to prove Lemma <ref>, we basically need to prove that the difference between the value of the width coordinate in the set^†, n_a̅and in the set^†, n_ais of orderη_n. This fact is the content of Lemma <ref> below. However, in order to prove this fact, we first need to prove that it is extremely unlikely for the random walk^n, μ_ato travel a large horizontal distance in a “narrow horizontal tube”. We refer to Figure <ref> for a diagram illustrating the various objects involved in the proof of Lemma <ref>. Fix ∈ (0, 1), M ∈, and define the following event A_M, ^n, + := ł{∃ s, t ∈ [0, θ_+] : ł|(^n, μ_a_t) - (^n, μ_a_s)|̊ > M ; ł|(^n, μ_a_u)|̊∈ [R' - , R' + ], ∀ u ∈ [s, t]}̊. There exists a universal constant C > 0 such that, for any n > n(R, δ, ) large enough, it holds that _μ_ał(A_M, ^n, +|θ_+ < θ_-)̊≲exp(-C M/), ∀ M ∈, where the implicit constant is independent of everything else. Furthermore, the same conclusion holds with the role of θ_+ and θ_- interchanged. Fix ∈ (0, 1). We start by recalling that, as observed in Remark <ref>, for any n > n(R, δ, ) large enough, it holds that (_n^†)^-1(a̅) ⊂× [R' - , R' + ]. It is easy to see that, without any loss of generality, we can assume that ^n contains all the points in the set ł{x ∈^n : (x) = R' - }̊. Therefore, we can define the stopping time ρ_0 : = infł{t ≥ 0 : (^n, μ_a_t) = R'- }, and for k ∈_0 we define inductively ρ̃_k+1 := infł{t ≥ρ_k : ł|(^n, μ_a_t) - (^n, μ_a_ρ_k)|̊ > M }, ρ_k+1 := infł{t ≥ρ̃_k+1 : (^n, μ_a_t) = R' - }̊. Moreover, for k ∈ we consider the events A_M, ^n, +(ρ_k-1, ρ̃_k) := ł{(^n, μ_a|_[ρ_k-1, ρ̃_k]) ⊂ [R'-, R'+]}̊ , F_k := ł{ρ_k > τ̂_}̊. Let K be the smallest k ∈ such that F_k occurs. Then, we have that 1_μ_ał(A_M, ^n, +|θ_+ < θ_-)̊ ≤_μ_a(⋃_i = 1^KA_M, ^n, +(ρ_i-1, ρ̃_i) |θ_+ < θ_-) ≤∑_k ∈∑_i = 1^k_μ_a(A_M, ^n, +(ρ_i-1, ρ̃_i)|θ_+ < θ_-) _μ_ał(K = k |θ_+ < θ_-)̊. Thanks to the strong Markov property of the random walk, the events {A_M, ^n, +(ρ_k-1, ρ̃_k)}_k ∈ are conditionally independent and identically distributed given θ_+ < θ_-. Now, thanks to assumption <ref> and well-known properties of Brownian motion, it is possible to prove that it exists a universal constant C > 0 such that for any n ∈ large enough it holds that _μ_ał(A_M, ^n, +(ρ_0, ρ̃_1) |θ_+ < θ_-)̊≲expł(-C M / )̊, where the implicit constant is independent of everything else. More precisely, in order to obtain the above upper bound, it is sufficient to study the probability that the random walk travels horizontal distance M before exiting an infinite horizontal band of height of order . This can be done by proceeding similarly to the proof of Lemma <ref>, and so we do not detail the argument here. Now, for k ∈, we consider the event B_M^n, +(ρ_k-1, ρ̃_k) := ł{(^n, μ_a|_[ρ_k-1, ρ̃_k]) ⊂ [-R'-1, R'+1]}̊. 
For the same reason explained above, we have that the events {B_M^n, +(ρ_k-1, ρ̃_k)}_k ∈ are conditionally independent and identically distributed given θ_+ < θ_-. Also, using again assumption <ref> and a standard gambler's ruin estimate for Brownian motion, one can prove that _μ_ał(B_M^n, +(ρ_0, ρ̃_1) |θ_+ < θ_-)̊≲ (M+1)^-1, where the implicit constant is independent of everything else. In particular, thanks to (<ref>), for any k ∈, it holds that _μ_ał(K = k |θ_+ < θ_-)̊ ≤_μ_a(⋂_i = 1^k-1B_M^n, +(ρ_i-1, ρ̃_i) |θ_+ < θ_-) = ∏_i=1^k-1_μ_ał(B_M^n, +(ρ_0, ρ̃_1) |θ_+ < θ_-)̊ ≲ (M+1)^-(k-1). Therefore, putting together (<ref>), (<ref>) and (<ref>), we obtain that it exists a universal constant C > 0 such that _μ_ał(A_M, ^n, +|θ_+ < θ_-)̊≲expł(-C M / )̊∑_k ∈ k (M+1)^-(k-1)≲expł(-C M / )̊. Finally, we observe that the result with the role of θ^+ and θ^- interchanged can be obtained in the same way. Therefore, this concludes the proof. We are now ready to prove that the difference between the value of the width coordinate in the set^†, n_a̅and in the set^†, n_ais of orderη_n. We refer to Figure <ref> for a diagram illustrating the various objects involved in the proof of Lemma <ref>. For any n > n(R, δ) large enough it exists a finite constant b'_n ∈ such that ł|^†_n() + b_n'|̊≲η_n, ∀∈^†, n_a̅∪^†, n_a, where the implicit constant is independent of everything else. We start by letting θ := θ_+∧θ_- and defining b_n' := _μ_ał[(^n_0) - (^n_θ)]̊. We observe that b_n' is not of constant order in general. Indeed, this is due to the fact that (^n, μ_a_0) can be far from being of order one since the starting point of the random walk ^n, μ_a is sampled from the set ^†,n_a over which we do not have any a priori control. We will only prove the result in the case ∈^†, n_a̅ since the result for ∈^†,n_a can be obtained similarly. In particular, we split the proof in two steps. Step 1: In this first step, we claim that for any n > n(R, δ) large enough it holds that ł|(^†, rand_n()) + b_n'|̊≲η_n , ∀∈^†, n_a̅, uniformly over all possible realizations of ^†, rand_n(), and where the implicit constant does not depend on anything else. To this end, we consider the vertex ^n, μ_a_θ_+∈^†, n_a̅ such that σ_2π(^n, μ_a_θ_+) = σ_2π(^n, μ_a_θ_+). Now, applying Lemma <ref>, we see that |(^†, rand_n(^n, μ_a_θ_+)) - (^n, μ_a_θ_+)/η_n - (^n, μ_a_θ_+) - (^n, μ_a_θ_+)/2π| ≤ 1. Therefore, rearranging the various terms in the previous inequality and then taking the conditional expectation given θ_+ < θ_-, we get that 1ł| _μ_ał[(^†, rand_n(^n_θ_+)) |θ_+ < θ_-]̊ + b_n'|̊ ≲ł|_μ_ał[(^n_θ_+) |θ_+ < θ_-]̊|̊ + η_n ł|_μ_ał[(^n_θ_+) - (^n_0) |θ_+ < θ_-]̊|̊ + η_n ł|b_n' + _μ_ał[(^n_θ_+) - (_0^n) |θ_+ < θ_-]̊|̊. Thanks to Lemmas <ref> and <ref>, we have that the following inequalities hold for any n > n(R, δ) large enough ł|_μ_ał[(^n_θ_+) |θ_+ < θ_-]̊|̊≤η_n, ł|_μ_ał[(^n_θ_+) - (^n_0) |θ_+ < θ_-]̊|̊≲ 1. Therefore, the claim follows if we prove that the term in the third line of (<ref>) is of order one. More precisely, thanks to the definition of b_n', it is sufficient to prove that, for any n>n(R, δ) large enough, it holds that ł|_μ_ał[(^n_θ)]̊ - _μ_ał[(^n_θ_+) |θ_+ < θ_-]̊|̊≲ 1. Since, for any n>n(R, δ) large enough, _μ_a(θ_+ < θ_-) is of constant order, this is equivalent to prove that ł|_μ_ał[(^n_θ_+) |θ_+ < θ_-]̊|̊ + ł|_μ_ał[(^n_θ_-) |θ_- < θ_+]̊|̊≲ 1, where the implicit constant must be independent of everything else. We note that this inequality follows easily from Lemma <ref> (see Figure <ref>). 
Hence, putting everything together and going back to (<ref>), we obtain that ł| _μ_ał[(^†, rand_n(^n_θ_+)) |θ_+ < θ_-]̊ + b_n'|̊≲η_n , where the implicit constant is independent of everything else. Therefore, the desired claim follows from the previous inequality and from the fact that, thanks to Lemma <ref>, it holds that ł|(^†, rand_n(_1)) - (^†, rand_n(_2))|̊≤ 2 η_n, ∀_1, _2 ∈^†, n_a̅, uniformly over all possible realizations. Step 2: In this step, we will actually prove the result in the lemma statement. To this end, we fix ∈_a̅^†, n and we consider the dual edges _1, _2 ∈^†, n_a̅ such that _1^+ = = _2^-. Furthermore, we let _1, _2 ∈^†, n_a̅ be the corresponding primal edges, and we let _1, _2 ∈^†, n be the endpoints of _1 and _2 in the set (_n^†)^-1(a̅) (see Figure <ref>). At this point, we need to split the proof in two different cases: * If _1 ≠_2, then, by construction of the Smith diagram, it follows that _n^†() ∈ł[(_n^†, rand(_1)), (_n^†, rand(_2))]̊, uniformly over all possible realizations. * If _1 = _2, then it holds that _n^†() ∈ł[minł{()̌ : ∈̌_n^†(_1)}, max{()̌ : ∈̌_n^†(_1)}̊]̊, where we recall that _n^†(_1) denotes the horizontal segment associated to _1 by the lifted tiling map. In both cases, if _1, _2 ∈^†, n_a̅, then the conclusion follows from the previous step. However, in general, it could be that _1 ∈^†, n_a̅ - (2π, 0) or _2 ∈^†, n_a̅ + (2π, 0). In both these cases, we cannot directly appeal to the previous step to conclude. Nevertheless, a simple application of Lemma <ref> implies that the same result of the first step holds also for the vertices in ^†, n_a̅ - (2π, 0) and in ^†, n_a̅ + (2π, 0). Therefore, this concludes the proof. Before proceeding with the proof of Lemma <ref>, we need to state a lemma which is the dual counterpart of Lemma <ref>. Fix ∈ (0, 1), ∈^†, n(R'-1), M ∈, and define the following event Â_M, ^n, := ł{∃ s, t ∈ [0, τ̂_] : ł|(^n, _t) - (^n, _s)|̊ > M ; ł|(^n, _u)|̊∈ [R' - , R' + ], ∀ u ∈ [s, t]}̊. There exists a universal constant C > 0 such that, for any n > n(R, δ, ) large enough, it holds that _ł(Â_M, ^n, )̊≲exp(-C M/), ∀ M ∈, where the implicit constant is independent of everything else. The proof of this lemma can be done by employing a similar argument to that used in Lemma <ref>. We are now ready to prove Lemma <ref>, which is now an immediate consequence of the results proved above. The proof basically consists of putting together some of the previous results. More precisely, fix ∈^†, n(R'-1) and let ^n, _τ̂_x̂∈_a̅^†, n∪_a^†, n be such that σ_2π(^n, _τ̂_x̂) = σ_2π(^n, _τ̂_x̂). Then, by Lemma <ref>, we have that _[(^n_τ̂_x̂)/2 π - _n^†(^n_τ̂_x̂) + b'_n/η_n] = _[(^n_τ̂_x̂)/2 π - _n^†(^n_τ̂_x̂) + b'_n/η_n]. Therefore, the result follows thanks to Lemma <ref> if we show that also ł|_[(^n_τ̂_x̂)]|̊≲ 1. We note that this fact is easily implied by Lemma <ref>, and so the proof is completed. §.§.§ Proof of Proposition <ref> In what follows, we will make use of some notation introduced in the preceding subsection. In particular, we will consider the stopping timesσ_,τ_defined in (<ref>), and the quantities defined in the introduction of the preceding subsection. We also adopt the same notational conventions of the previous subsection in order to lighten up some notation. For ∈^†, n(R), let ^n, be the random walk on the lifted weighted dual graph (^†, n, ĉ^†, n) started from . Moreover, let B^ be a planar Brownian motion started from , and define the stopping time τ_B, := infł{t ≥ 0 : |(B_t^)| = R'}̊. We divide the proof into several steps. 
Step 1. In this first step, we show that, for any n > n(R, δ) large enough, it holds that ł|_ł[(^n_τ_)]̊ - ()|̊≤δ, where we recall that the stopping time τ_ is defined in (<ref>). As we will see below, this is an easy consequence of assumption <ref>. We start by observing that, thanks to well-known properties of Brownian motion, it holds that ł|_[(B_τ_B, )] - ()|̊ = 0, ∀∈^†, n(R). Indeed, this follows from the fact that |(B_τ_B, )] - ()| has exponentially decaying tails and from the optional stopping theorem. As we observed in Remark <ref>, we have that, for any ∈ (0, 1) and for any n > n(R, δ, ) large enough, it holds that ^†, n_a̅⊂× [R' - , R' + ], ^†, n_a⊂× [-R'-, -R'+]. Therefore, from this fact and assumption <ref>, we can deduce that, for any n > n(R, δ) large enough, it holds that ł|_ł[(B_τ_B, )]̊- _ł[(^n_τ_x̂)]̊|̊≤δ. More precisely, this fact can be obtained from assumption <ref> by arguing in the same exact way as in the proof of Lemma <ref>. Hence, the desired result follows. Step 2. The main goal of this step is to prove that, for any n > n(R, δ) large enough, it holds that _ł[_n^†ł(^n_τ̂_)̊]̊ = _n^†(), ∀∈^†, n(R). We start by recalling that the function _n^† : ^†, n→ is harmonic, and so the process _n^†(^n, ) is a discrete martingale with respect to the filtration generated by ^n,. Therefore, if we prove that such a martingale is uniformly integrable, then the claim follows from the optional stopping theorem. To this end, it is sufficient to prove that, for M ∈, the probability of the event |_n^†(^n, _τ̂_) - _n^†()| > M R' decays exponentially fast in M, uniformly in ∈^†, n(R) and for all n > n(R, δ) large enough. This fact can be obtained from assumption <ref> by arguing in the same exact way as in the proof of Lemma <ref>. Hence, the desired result follows. Step 3. Consider the function ^†_n : ^†, n(R'-1) → defined as follows ^†_n() : = _[(^n_τ_)/2 π - _n^†(^n_τ_) + b'_n/η_n] , ∀∈^†, n(R'-1), where b'_n is the same constant appearing in the statement of Lemma <ref>. Now, recalling the definition (<ref>) of the stopping time σ_ and that σ_≤τ_, thanks to the strong Markov property of the random walk, for all , ∈^†, n(R), it holds that ł|^†_n() - ^†_n()|̊≤supł{ł|^†_n()|̊ : ∈^†, n(R'- 1)}̊ł(^n, _σ_, ^n, _σ_)̊, where denotes the total variation distance. Hence, it is sufficent to find an upper bound for the two terms on the right-hand side of (<ref>). We treat the two factors separately. * In order to bound the first factor, we just need to bound uniformly on ^†, n(R'-1) the quantity |^†_n()̌|. This is exactly the content of Lemma <ref> from which we can deduce that, for all n > n(R, δ) large enough, it holds that supł{ł|^†_n()|̊ : ∈^†, n(R'- 1)}̊≲ 1, where the implicit constant is independent of everything else. * In order to bound the second factor, we can use Lemma <ref>. Indeed, as we have already remarked, thanks to Proposition <ref> and to Lemma <ref>, for any n ∈ large enough, it holds that ^†, n_a̅⊂׳[R'-1, R'+1]̊, ^†, n_a⊂׳[-R'-1, -R'+1]̊. Therefore, it is sufficient to estimate the probability that σ_2π(^n, |_[0, σ_]) disconnects V̂_R∪V̂_-R from V̂_R'-1∪V̂_-R'+1. To this end, one can argue in the same exact way as in Lemma <ref> in order to prove that, for any n > n(R, δ) large enough and for all ∈^†, n(R), it holds that _(σ_2π(^n|_[0, σ̂_]) does not disconnect V̂_R∪V̂_-R from V̂_R' - 1∪V̂_-R' + 1) ≲R/R', where the implicit constant is independent of everything else. Therefore, this fact together with Lemma <ref> imply that ł(^n, x̂_σ̂_, ^n, _σ̂_ŷ)̊≲R/R', ∀, ∈^†, n(R). 
Therefore, putting together the two bullet points above, recalling that R'=R/δ, and going back to (<ref>), we find that, for any n>n(R, δ) large enough, it holds that ł|^†_n() - ^†_n()|̊≲δ, ∀, ∈^†, n(R), where the implicit constant is independent of everything else. Step 4. To conclude, for every n > n(R, δ) large enough, fix an arbitrary vertex ∈^†, n(R). Then, thanks to (<ref>), (<ref>), and (<ref>), we have that for any n > n(R, δ) large enough, it holds that |2π/η_n_n^†() + b^R, δ_n - ()| ≤δ, ∀∈^†, n(R), where b^R, δ_n := 2π^†_n(). Finally, in order to conclude, we need to remove the dependence of b^R, δ_n from R and δ. This can be easily done by arguing in the same exact way as in the second step of the proof of Proposition <ref>. Therefore, the proof is concluded. §.§ Proof of Theorem <ref> We are now ready to give a proof of the main theorem of this article. As we have already remarked, the proof of such a theorem is a consequence of Propositions <ref> and <ref>. Fix R > 1, δ∈ (0, 1), and consider a point ∈^†, n(R). We divide the proof into three main steps. Step 1. By definition of the Smith embedding ^†_n, we have that (^†_n()) = _n^†(). Hence, from Proposition <ref>, we know that there exist two real sequences {b^_n}_n ∈ and {c^_n}_n ∈, independent of R and δ, such that, for n> n(R, δ) large enough, it holds that ł|c^_n (^†_n()) + b^_n -()|̊≤δ/√(2), ∀∈^†, n(× [-R, R]). Step 2. Let ^†, n, ↓() = [_k] be the set of harmonically oriented edges in ^†, n with heads equal to ordered in such a way that _1⋯_k forms a counter-clockwise oriented path in the lifted dual graph ^†, n. Then, by construction of the Smith embedding ^†_n, we have that (^†_n()) ∈ł[^†(^-_1) , ^†(^+_k)]̊. Therefore, letting {b^_n}_n ∈ be the sequence in the statement of Proposition <ref>, we have that |2π/η_n(^†_n()) + b^_n -()| ≤|2π/η_n_n(_1^-) + b^_n - (_1^-)| + ł|(^-_1) -()|̊ + 2π/η_nł|(_n^†()) - ^†_n(_1)|̊. The first term on the right-hand side of the above expression is bounded by δ/(5√(2)) thanks to Proposition <ref>. The second term is also bounded by δ/(5√(2)) since Lemma <ref> rules out the existence of macroscopic faces. Concerning the third term, recalling (<ref>), we have that 2π/η_nł|(_n^†()) - ^†_n(_1^-)|̊ ≤|2π/η_n^†_n(_1^-) + b^_n -(_1^-)| + |2π/η_n^†_n(_k^+) + b^_n -(_k^+)| + ł|(_k^+) - (_1^-)|̊. The first and second term on the right-hand side of the above expression are bounded by δ/(5√(2)), thanks to Proposition <ref>. The third term is also bounded by δ/(5√(2)), for n> n(R, δ) thanks, once again, to Lemma <ref>. We remark that all the previous bounds obviously hold only for n>n(R, δ) large enough. Therefore, putting it all together, we obtain that, for all n> n(R, δ) large enough, it holds that |2π/η_n(^†_n()) + b^_n -()| ≤δ/√(2), ∀∈^†, n(× [-R, R]). Step 3. For each n ∈, we define the affine transformation T^†_n : × [0, 1] →^2 by letting (T^†_n ) := 2π/η_n() + b_n^ and (T^†_n ) := c_n^() + b^_n, ∀∈× [0, 1], Therefore, the previous two steps yield that, for any n > n(R, δ) large enough, it holds that _̣^2ł(T^†_n ^†_n(), )̊≤δ, ∀∈^†, n(× [-R, R]), where _̣^2 denotes the Euclidean distance in ^2. This is obviously equivalent to the desired result, and so the proof is completed. § APPLICATION TO MATED-CRT MAPS The main goal of this section is to prove Theorem <ref>. Roughly speaking, the plan is as follows. We will first introduce an a priori embedding of mated-CRT maps which is “close” to LQG. We then prove that this a priori embedding satisfies the assumptions of Theorem <ref>. 
Finally, we show how this allows to conclude. §.§ SLE/LQG description of mated-CRT maps We now discuss an equivalent description of mated-CRT maps in terms of SLE/LQG, which comes from the results of <cit.>. These results imply that mated-CRT maps can be realized as cell configurations constructed from space-filling_κcurves parameterized by quantum mass with respect to a certain independent LQG surface. We will not need many properties of the SLE/LQG objects involved, so we will not give detailed definitions, but we will give precise references instead. Liouville quantum gravity surfaces. Forγ∈(0, 2)andD ⊆, a doubly markedγ-LQG surface is an equivalence class of quadruplets(D, h,z_1, z_2)wherehis a random generalized function onD(which we will always take to be an instance of some variant of the Gaussian free field), andz_1,z_2 ∈D. Two such quadruplets(D, h, z_1, z_2)and(D̃, h̃, z̃_̃1̃, z̃_̃2̃)are declared to be equivalent if there is a conformal mapf : D̃ →Dsuch that h̃ = h ∘ f + Q log |f'| and f(z̃_1) = z_1, f(z̃_2) = z_2, where Q = 2/γ + γ/2. Forγ∈(0, 2), it is well-known that one can construct a random measure, called theγ-LQG area measure, which is formally given byμ_h := e^γh^̣2z, whered^2zdenotes the Lebesgue measure onD. Sincehis a random generalized function, this definition does not make rigorous sense and one should proceed using a standard regularization and limiting procedure <cit.>. Theγ-LQG area measure satisfies a certain change of coordinates formula. More precisely, given two equivalent doubly markedγ-LQG surface(D, h, z_1, z_2)and(D̃, h̃, z̃_̃1̃, z̃_̃2̃), then it holds almost surely thatμ_h(f(A)) = μ_h̃(A)for all Borel setsA ⊆D̃, wheref : D̃ →Dis a conformal map such that (<ref>) holds. In this article, we are interested in two different kind of doubly markedγ-LQG surfaces. * The doubly marked quantum sphere(, h, 0, ∞), where h is a variant of the Gaussian free field precisely defined in <cit.>. For γ∈ (0, 2), it is well-known that one can associate with the random generalized function h a random measure μ_h on , the γ-LQG measure, with μ_h() < ∞ (again, we will not need the precise definition here). Typically, one considers a unit-area quantum sphere, which means that we fix μ_h() = 1. * The 0-quantum cone(, h^c, 0, ∞), where h^c is a variant of the Gaussian free field precisely defined in <cit.>. Also in this case, for γ∈ (0, 2), we can associate to h^c the γ-LQG measure μ_h^ c which has infinite total mass, but it is locally finite. Schramm–Loewner evolution. We do not need to precisely define SLE_κ, but rather it is sufficient to know that whole-plane space-filling SLE_κ, forκ> 4, is a random space-filling curveθwhich travels from∞to∞in. It is a variant of SLE_κ<cit.> which was introduced in <cit.>. Space-filling SLE_κforκ≥8is a two-sided version of ordinary SLE_κ(which is already space-filling), whereas space-filling SLE_κforκ∈(4, 8)can be obtained from ordinary SLE_κby iteratively filling in the “bubbles” which the path disconnects from∞. Construction of the a priori embedding. An important feature of the mated-CRT map is that it comes with an a priori embedding intodescribed by SLE-decorated LQG. To explain this embedding, consider a doubly marked quantum sphere(, h, 0, ∞)and, forγ∈(0,2), consider the associatedγ-LQG measureμ_h. Sample a space-filling SLE_κcurveθwithκ= 16/γ^2, independently from the random generalized functionh, and reparametrizeθso that θ(0) = 0 and μ_hł(θ([a, b])̊ = b-a, ∀ a, b ∈ with a < b. 
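Note that, since the quantum sphere has unit area and θ is parameterized by μ_h-mass, the cells used in the definition below all carry the same quantum mass. Indeed, using only the normalization μ_h(θ([a, b])) = b - a stated above,
μ_h(θ([x - 1/n, x])) = x - (x - 1/n) = 1/n,   for every x = 1/n, 2/n, …, 1,
so the n cells θ([x - 1/n, x]) are equal-mass pieces of the quantum sphere. This is also the property of the parameterization that yields the measure convergence part of Theorem <ref> at the end of this section.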
Forγ∈(0, 2)andn ∈, we define then-structure graph^nassociated with the pair(h, θ)as follows. The vertex set of^nis given by ^n := 1/n∩ (0, 1]. Two distinct verticesx_1,x_2 ∈^nare connected by one edge (resp. two edges) if and only if the intersection of the corresponding cellsθ([x_1 - 1/n, x_1])andθ([x_2 - 1/n, x_2])has one connected component which is not a singleton (resp. two connected components which are not singletons). We refer to Figure <ref> for a diagrammatic construction of the SLE/LQG embedding of the mated-CRT map. Whenκ≥8, the intersection of cellsθ([x_1 - 1/n, x_1]) ∩θ([x_2 - 1/n, x_2])is always either empty or the union of one or two non-singleton connected components. However, whenκ∈(4,8)it is possible thatθ([x_1 - 1/n, x_1]) ∩θ([x_2 - 1/n, x_2])is a totally disconnected Cantor-like set, in which casex_1andx_2are not joint by an edge in^n(see Figure <ref>). The following result explains the connection between the pair(h, θ)and the mated-CRT map^n, and it is a consequence of <cit.>. The family of structure graphs {^n}_n ∈ agrees in law with the family of n-mated-CRT maps with the sphere topology defined in Subsection <ref>. The previous proposition gives us an a priori embedding of the mated-CRT map inby sending eachx ∈^nto the pointθ(x) ∈. Furthermore, the graph^ncomes naturally with two marked vertices. Indeed, we can letv^n_0 ∈^n(resp.v^n_1 ∈^n) to be the vertex corresponding to the cell containing0(resp.∞). We also emphasize that here all the edges in^nare assumed to have unit conductance. Construction of the a priori embedding of the dual graph. The a priori SLE/LQG embedding of^nalso induces an a priori embedding intoof the associated planar dual graph^n. Indeed, each vertex of^nis naturally identified with the set of points inwhere three of the cellsθ([x - 1/n, x])meet[Note that there cannot be more than three cells meeting at a single point or we would have a face of degree greater than three.], forx ∈^n. The set of edges of^ncan be identified with the boundary segments of the cells which connect these vertices. To be precise, this is not an embedding whenκ∈(4,8)since the edges can intersect (but not cross) each other (see Figure <ref>). To deal with this case, we can very slightly perturb the edges so that they do not intersect except at their endpoints. We refer to the proof of Proposition <ref> for more details on how one can handle this situation. §.§ Mated-CRT maps satisfy the assumptions In this subsection, we show that the a priori embedding of mated-CRT maps with the sphere topology satisfies the assumptions of Theorem <ref>. In particular, we will show here that assumptions <ref> and <ref> are satisfied in this specific case. We recall that, for eachn ∈andγ∈(0, 2), the mated-CRT map^ncomes with two marked verticesv_0^nandv_1^nwhich correspond to the cell containing0and∞, respectively For each n ∈ and γ∈ (0, 2), let ^n be the n-mated CRT map with the sphere topology embedded in under the a priori SLE/LQG embedding specified in Subsection <ref>. View the embedded random walk on ^n, stopped when it hits v_1^n, as a continuous curve in obtained by piecewise linear interpolation at constant speed. 
For each compact subset K ⊂ and for any z ∈ K, the conditional law given the pair (h, θ) of the random walk on ^n started from the vertex x_z^n ∈^n nearest to z weakly converges in probability as n →∞ to the law of the Brownian motion on started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈ K. To be precise, in <cit.>, the above theorem is stated for a mated-CRT map built using a quantum sphere with only one marked point. However, the quantum sphere with two marked points can be obtained from the quantum sphere with one marked point (, h, ∞) by first sampling a point z ∈ uniformly from the γ-LQG area measure μ_h, then applying a conformal map which sends z to 0 (see <cit.>). Therefore, since Brownian motion is conformally invariant, the statement in <cit.> immediately implies Proposition <ref> for the quantum sphere with two marked points. The main purpose of this subsection is to prove that an analogous result holds also on the sequence of a priori SLE/LQG embedding of the dualn-mated-CRT map with the sphere topology. More precisely, we want to prove the following proposition. For each n ∈ and γ∈ (0, 2), let ^n be the dual planar graph associated to the n-mated-CRT map ^n with the sphere topology embedded in under the a priori embedding specified in Subsection <ref>. View the embedded random walk on ^n as a continuous curve in obtained by piecewise linear interpolation at constant speed. For each compact subset K ⊂ and for any z ∈ K, the conditional law given the pair (h, θ) of the random walk on ^n started from the vertex x̂_z^n ∈^n nearest to z weakly converges as n →∞ to the law of the Brownian motion on started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈ K. §.§.§ Invariance principle on embedded lattices In order to prove Proposition <ref>, we need to take a step back and state a general theorem from <cit.> which guarantees that the random walk on certain embedded lattices has Brownian motion as a scaling limit. For the reader's convenience, we recall here the main definitions needed and in what follows we adopt notation similar to that of <cit.>. An embedded lattice is a graph embedded in in such a way that each edge is a simple curve with zero Lebesgue measure, the edges intersect only at their endpoints, and each compact subset of intersects at most finitely many edges of . As usual, we write for the set of vertices of , for the set of edges of , and for the set of faces of . For an embedded lattice and x ∈, we define the outradius of x by setting (x) := (⋃_{H ∈ : x ∈∂ H} H), i.e., the diameter of the union of the faces with x on their boundaries. Here ∂ H denotes the boundary of the face H ∈. ForC > 0andz ∈, we writeC(- z)for the embedded lattice obtained by first translating everything by the amount-zand then by scaling everything by the factorC. We are interested in random embedded lattices that satisfy the following assumptions. * (Translation invariance modulo scaling) There is a (possibly random and -dependent) increasing sequence of open sets U_j ⊂, each of which is either a square or a disk, whose union is all of such that the following is true. Conditional on and U_j, let z_j for j ∈ be sampled uniformly from the Lebesgue measure on U_j. 
Then the shifted lattice - z_j converge in law to modulo scaling as j →∞, i.e., there are random numbers C_j > 0 (possibly depending on and z_j) such that C_j( - z_j) → in law with respect to the metric specified in <cit.>.[Several equivalent formulations of this condition are given in <cit.>.] * (Ergodicity modulo scaling) Every real-valued measurable function F = F() which is invariant under translation and scaling, i.e., F(C( - z)) = F() for each z ∈ and C > 0, is almost surely equal to a deterministic constant. * (Finite expectation) Let H_0 ∈ be the face of containing 0, then [∑_x ∈∩∂ H_0(x)^2 (x)/(H_0)] < ∞, where, in this case, (x) denotes the number of edges incident to x. Let be an embedded lattice satisfying assumptions <ref>, <ref>, and <ref>. For > 0, view the embedded random walk on on as a continuous curve in obtained by piecewise linear interpolation at constant speed. For each compact subset K ⊂ and for any z ∈ K, the conditional law given of the random walk on started from the vertex x̂_z^∈() nearest to z weakly converges in probability as → 0 to the law of a Brownian motion on , with some deterministic, non-degenerate covariance matrix Σ, started from z with respect to the local topology on curves viewed modulo time parameterization specified in Subsection <ref>, uniformly over all z ∈ K. To be precise, the above theorem follows from the proof <cit.> (which gives the same statement but without the uniform rate of convergence on compact subsets) by using <cit.> in place of <cit.>. §.§.§ Proof of Proposition <ref> The purpose of this subsection is to transfer the general result of Proposition <ref> to the particular setting of Proposition <ref>. In order to do this, we need to proceed in two steps. First, we verify that the hypothesis of Proposition <ref> are satisfied for a sequence of embedded lattices built through the0-quantum cone. Then, we transfer the result to the sequence ofn-mated CRT maps with the sphere topology by means of an absolute continuity argument. Let(, h^c, 0, ∞)be a 0-quantum cone as specified in Subsection <ref>. We now define a graph associated with the 0-quantum cone in a way which is exactly analogous to the SLE/LQG description of mated-CRT maps with the sphere topology described in Section <ref>. Forγ∈(0, 2), letθbe a whole-plane space-filling SLE_κ, withκ= 16/γ^2, sampled independently fromh^cand then parameterized by theγ-LQG measureμ_h^cin such a way thatθ(0) = 0. Letξbe sampled uniformly from the unit interval[0, 1], independently from everything else, and let := ł{θł([x - 1, x])̊ : x ∈ + ξ}̊. The reason for considering times in+ξinstead just inis to avoid making the point0 = θ(0)special. This is needed in order to check the translation invariance modulo scaling hypothesis <ref>. We viewas a planar map whose vertex set isitself. Two verticesH,H' ∈are joined by one edge (resp. two edges) if and only ifHandH'intersect along one (resp. two) non-trivial connected boundary arc. (resp. arcs). ForH,H' ∈, withH ≠H', we writeH ∼H'if they are joined by at least one edge. Given a setA ⊂, we write (A):= ł{H ∈ : H ∩ A ≠∅}̊ and moreover, forH ∈, we write(H)for the number of cellsH' ∈(counted with edge multiplicity) such thatH ∼H'. For γ∈ (0, 2), define the cell configuration associated with a space-filling SLE_κ on a 0-quantum cone as specified above. Let be the embedded defined as follows. 
* The vertex set of consists of points x ∈ such that there are three cells in that meet at x and with the property that the boundary of each of such cells has a connected subset that touches x; * The edge set of consists of the boundary segments of the cells which connect the vertices. In other words, is nothing but the planar dual of . Moreover, by construction, is an embedded lattice in the sense of Definition <ref>, except that embedded edges are allowed to intersect but not cross, in the case when γ∈ (√(2), 2). To deal with this case, we can consider a different embedding of in which all the vertices occupy the same position, but the edges are slightly perturbed so that they do not intersect except at their endpoints. More precisely, for each edge with touching points in its interior, we can slightly perturb it in such a way that the modification only depends on the position of the edge itself and on the positions of the adjacent faces. In particular, this perturbation can be carried out in a translation and scaling invariant way. Therefore, since this procedure only depends on the local configuration of the lattice, if the starting cell configuration is translation invariant modulo scaling, then also the the perturbed lattice is translation invariant modulo scaling. We will now check that satisfies assumptions <ref>, <ref>, and <ref>. The translation invariance modulo scaling assumption <ref> and the ergodicity modulo scaling assumption <ref> follows from the corresponding properties for the associated cell configuration as checked in <cit.>. Therefore, we can just focus on proving the finite expectation assumption <ref>. To this end, recalling that the cell H_0 is the face of containing 0, we proceed as follows 1∑_x ∈∩∂ H_0(x)^2 (x)/(H_0) ≤ 12 ∑_x ∈∩∂ H_0∑_{H ∈ : x ∈∂ H}(H)^2/(H_0) ≤ 48 ∑_{H ∈ : H ∼ H_0}(H)^2/(H_0) + 12 (H_0)^2/(H_0)(H_0). In the first line of (<ref>), we used the fact that each vertex of has degree at most 3 and the inequality (a+b+c)^2 ≤ 4(a^2 +b^2 +c^2). In the second line, we use that each cell H ∈ with H ∼ H_0 intersects H_0 along at most two disjoint connected boundary arcs (one on its left boundary and one on its right boundary), so there are at most 4 vertices of in H ∩ H_0. Now, we notice that the second quantity in the last line of (<ref>) has finite expectation thanks to <cit.>. Therefore, we can just focus our attention to the sum appearing in the last line of (<ref>). Basically, the fact that this sum has finite expectation follows from the combination of several results in <cit.> and <cit.>. Let F = F() denote the sum appearing in the last line of (<ref>). We will show that [F] < ∞ using an ergodic theory result from <cit.>. Let {S_k}_k ∈ be the bi-infinite sequence of origin-containing squares of a uniform dyadic system independent from , as defined in <cit.>. We will not need the precise definition of these sequence here, but rather we only need to know S_k-1⊂ S_k for each k ∈ and ∪_k = 1^∞ S_k =. As shown in <cit.>, the cell configuration satisfies a suitable translation invariance modulo scaling assumption, and so, we can apply <cit.> to to find that the following is true. If we let F_z for z ∈ be defined in the same manner as F but with the translated cell configuration -z in place of , then it holds almost surely that [F] = lim_k →∞1/(S_k)∫_S_k F_z d z. 
To bound the right-hand side of the above expression, we can proceed as follows:
lim_k →∞1/(S_k)∫_S_k F_z d z = lim_k →∞1/(S_k)∫_S_k∑_{H ∈ : H ∼ H_z}(H)^2/(H_z) d z ≤ lim sup_k →∞1/(S_k)∑_H ∈(S_k)∑_{H' ∈ : H' ∼ H}(H')^2.
Since the maximal size of the cells in (S_k) is almost surely of strictly smaller order than the side length of S_k as k →∞ (see <cit.>), we find that almost surely for large enough k ∈, each H' ∈ with H' ∼ H for some H ∈(S_k) is contained in (S_k(1)), where S_k(1) is the square with the same center as S_k and three times the side length of S_k. Therefore, the last line in (<ref>) is almost surely bounded above by 2 lim sup_k →∞1/(S_k)∑_H ∈(S_k(1))(H)^2 (H), which is finite by <cit.>. Summing up, we have proved that the embedded lattice satisfies assumptions <ref>, <ref>, and <ref>, and so we can apply Proposition <ref>. Furthermore, we notice that, thanks to the rotational invariance of the law of (h^c, θ), the limiting covariance matrix Σ is given by a positive scalar multiple of the identity matrix. Hence, in order to conclude the proof of Proposition <ref>, we need to transfer the result to the setting of mated-CRT maps with the sphere topology. This can be done by means of absolute continuity arguments as in <cit.>. For the sake of brevity, we will not repeat such arguments here and we refer to <cit.>.
§.§ Convergence to LQG
In this subsection, we see how the proof of Theorem <ref> is almost an immediate consequence of the results proved above. More specifically, for n ∈ and γ∈ (0, 2), let (^n, v_0^n, v_1^n) be the doubly marked n-mated-CRT map with the sphere topology under the a priori SLE/LQG embedding as specified in Subsection <ref>. We observe that there exists a conformal map from to _2 π sending ∞↦ +∞ and 0 ↦ -∞. This mapping is unique up to horizontal translation and rotation. The horizontal translation can be fixed by specifying that the volume of /2π× [0, ∞) under the γ-LQG measure μ_h induced by the embedding is precisely 1/2. Furthermore, the rotation on /2π can be chosen uniformly at random. Therefore, using this conformal map, we can define an embedding of (^n, v^n_0, v^n_1) into _2π in such a way that v_0^n and v_1^n are mapped to -∞ and +∞, respectively. Using the same conformal map, we can also embed the associated dual graph ^n into _2π. This puts us exactly in the setting of Theorem <ref>, from which we can deduce the following result.
Fix γ∈ (0, 2) and let {(^n, v_0^n, v_1^n)}_n ∈ be the sequence of doubly marked n-mated-CRT maps with the sphere topology embedded in the infinite cylinder _2π as specified above. For each n ∈, let _n : ^n→_2π be the Smith embedding associated to (^n, v_0^n, v_1^n) as specified in Definition <ref>. There exists a sequence of random affine transformations {T_n}_n ∈ from _η_n to _2π of the form specified in the statement of Theorem <ref> such that, for all compact sets K ⊂_2π, the following convergence holds in probability: lim_n →∞ sup_x ∈^n(K) _̣2π(T_n _n(x), x) = 0, where _̣2π denotes the Euclidean distance on the cylinder _2π.
By construction, the sequence {(^n, v_0^n, v_1^n)}_n ∈ and the associated sequence of dual graphs {^n}_n ∈ almost surely satisfy assumption <ref>. Moreover, Proposition <ref> guarantees the convergence in probability of the random walk on the sequence of primal graphs to Brownian motion, and so assumption <ref> is satisfied. Furthermore, Proposition <ref> guarantees the convergence in probability of the random walk on the sequence of dual graphs to Brownian motion, and so assumption <ref> is satisfied as well.
Therefore, the desired result follows from Theorem <ref>. Using the same procedure specified at the beginning of this subsection, we can also consider the parameterization of the unit-area quantum sphere by the infinite cylinder_2π. Hence, with a slight abuse of notation, we let(_2π, h, -∞, +∞)be the unit-area quantum sphere parametrized by_2πand we denote byμ_hthe associatedγ-LQG measure. We are now ready to prove Theorem <ref>. The result on the measure convergence <ref> follows from Proposition <ref> and the fact that θ is parameterized by μ_h-mass. The uniform convergence statement for curves <ref> is also an immediate consequence of Proposition <ref>. The claimed random walk convergence <ref> follows from Propositions <ref> and <ref>. § SOME STANDARD ESTIMATES FOR PLANAR BROWNIAN MOTION Throughout the whole appendix,Bdenotes a standard planar Brownian motion. For∈^2, we use the notationB^to denote a planar Brownian motion started from. We recall thatσ_2π : ^2 →_2 πis defined in (<ref>) and denotes the covering map of the infinite cylinder_2 π. In particular, ifB^is a as above, thenσ_2π(B^)is a Brownian motion on_2 πstarted fromσ_2 π(). Fix R' > 1. For ∈^2, consider the following stopping times τ := infł{t ≥ 0 : (B_t^) = - 2R' or (B_t^) = R'}̊. Then it holds that _ł(σ_2 π(B|_[0, τ]) does not wind around the cylinder below height - R')̊≲1/R', ∀∈×{-R'}, where the implicit constant is independent of everything else. Fix ∈×{-R'}. Let σ_0 : = 0, and for k ∈_0 we define inductively τ_k := infł{t ≥σ_k : (B^_t) = -R'-1}, σ_k+1 := infł{t ≥τ_k : (B^_t) = -R' }̊. Moreover, for k ∈_0 consider the events A_k := ł{|(B^_σ_k) - (B^_τ_k)| ≥ 1}̊ , F_k := ł{τ_k > τ^+}̊. Let K be the smallest k ∈_0 such that F_k occurs. Then the probability that the event in the lemma statement happens is less or equal to the probability that none of the events {A_k}_k ∈ [K]_0 happen. Thanks to the strong Markov property of the Brownian motion, the events {A_k}_k ∈_0 are independent and identically distributed. Moreover, thanks to well-known properties of Brownian motion, the event A_0 happens with uniformly positive probability p independent of R' and . Therefore, we obtain that _(⋂_i = 0^KA̅_̅k̅) = ∑_k ∈_0_(⋂_i = 0^k A̅_̅k̅) _(K=k) = ∑_k ∈_0 (1-p)^k_(K = k) ≲1/R', for all ∈×{R'}, where the implicit constant is independent of everything else. Hence, this concludes the proof. Fix R > 1, R' > R. For ∈× [-R, R], let τ :=inf{t ≥ 0 : |(B^_t)| = R'}, and define the events 1 W^+ := ł{σ_2 π (B^|_[0, τ]) does a loop around the cylinder between heights R and R'}̊, W^- := ł{σ_2 π (B^|_[0, τ]) does a loop around the cylinder between heights -R' and -R }̊. Then it holds that _ł(W̅^̅+̅∪W̅^̅-̅)̊≲R/R', ∀∈× [-R, R], where the implicit constant is independent of everything else. Fix ∈× [-R, R]. It is sufficient to prove that _(W̅^̅+̅) ≲ R/R', and the same with W^- in place of W^+. We will proceed similarly to the proof of the previous lemma. Let σ_0 : = inf{t ≥ 0 : (B^_t) = R} and, for k ∈_0 we define inductively τ_k := infł{t ≥σ_k : (B^_t) = R+1}̊, σ_k+1 := infł{t ≥τ_k : (B^_t) = R}̊. Moreover, for k ∈_0 consider the events A_k := ł{|(B^_σ_k) - (B^_τ_k)| = 1 , (B^_σ_k) = (B^_τ_k)}̊, F_k := ł{τ_k > τ}̊ . Let K be the smallest k ∈_0 such that F_k occurs. Then the probability that the event W^+ does not happen is less or equal to the probability that none of the events {A_k}_k ∈ [K]_0 happen. Thanks to the strong Markov property of the Brownian motion, the events {A_k}_k ∈_0 are independent and identically distributed. 
Moreover, since the event A_0 happens with uniformly positive probability p independent of R and , we have that _ł(W̅^̅+̅)̊≤_(⋂_i = 0^KA̅_̅i̅) = ∑_k ∈_0_(⋂_i = 0^kA̅_̅i̅) _(K = k) = ∑_k ∈_0 (1-p)^k_(K = k) ≲R/R', for all ∈× [-R, R], where the implicit constant is independent of everything else. Proceeding in a similar way, one can also prove that _(W̅^̅-̅) ≲ R/R'. Therefore, the desired result follows from the fact that _(W̅^̅+̅∪W̅^̅-̅) ≤_(W̅^̅+̅) + _(W̅^̅-̅). 'DKRV16#1#2#2#1[AB99]AB99M. Aizenman and A. Burchard. Hölder regularity and dimension bounds for random curves. Duke Math. J.99, no. 3, (1999), 419–453. doi:10.1215/S0012-7094-99-09914-3https://dx.doi.org/10.1215/S0012-7094-99-09914-3. [ABL16]Berry_squareL. Addario-Berry and N. Leavitt. Random infinite squarings of rectangles. Ann. Inst. Henri Poincaré Probab. Stat.52, no. 2, (2016), 596–611. doi:10.1214/14-AIHP661https://dx.doi.org/10.1214/14-AIHP661. [Ber15]Ber_LBMN. Berestycki. Diffusion in planar Liouville quantum gravity. Ann. Inst. Henri Poincaré Probab. Stat.51, no. 3, (2015), 947–964. doi:10.1214/14-AIHP605https://dx.doi.org/10.1214/14-AIHP605. [Ber17]Berestycki_ElementaryN. Berestycki. An elementary approach to Gaussian multiplicative chaos. Electron. Commun. Probab.22, (2017), Paper No. 27, 12. doi:10.1214/17-ECP58https://dx.doi.org/10.1214/17-ECP58. [Ber23]BerMultiF. Bertacco. Multifractal analysis of Gaussian multiplicative chaos and applications. Electron. J. Probab.28, (2023), Paper No. 2, 36. doi:10.1214/22-ejp893https://dx.doi.org/10.1214/22-ejp893. [BG22]GB_LBMN. Berestycki and E. Gwynne. Random walks on mated-CRT planar maps and Liouville Brownian motion. Comm. Math. Phys.395, no. 2, (2022), 773–857. doi:10.1007/s00220-022-04482-yhttps://dx.doi.org/10.1007/s00220-022-04482-y. [BP21]BerPowN. Berestycki and E. Powell. Gaussian free field, Liouville quantum gravity and Gaussian multiplicative chaos, 2021. https://homepage.univie.ac.at/nathanael.berestycki/wp-content/uploads/2022/05/master.pdfhttps://homepage.univie.ac.at/nathanael.berestycki/wp-content/uploads/2022/05/master.pdf. [BS96]BS96I. Benjamini and O. Schramm. Random walks and harmonic functions on infinite planar graphs using square tilings. Ann. Probab.24, no. 3, (1996), 1219–1238. doi:10.1214/aop/1065725179https://dx.doi.org/10.1214/aop/1065725179. [BSST40]BSST40R. L. Brooks, C. A. B. Smith, A. H. Stone, and W. T. Tutte. The dissection of rectangles into squares. Duke Math. J.7, (1940), 312–340. doi:10.1215/S0012-7094-40-00718-9https://dx.doi.org/10.1215/S0012-7094-40-00718-9. [CG20]Carm_SmithJ. Carmesin and A. Georgakopoulos. Every planar graph with the Liouville property is amenable. Random Structures Algorithms57, no. 3, (2020), 706–729. doi:10.1002/rsa.20936https://dx.doi.org/10.1002/rsa.20936. [DDDF20]DDDF_firstJ. Ding, J. Dubédat, A. Dunlap, and H. Falconet. Tightness of Liouville first passage percolation for γ∈ (0,2). Publ. Math. Inst. Hautes Études Sci.132, (2020), 353–403. doi:10.1007/s10240-020-00121-1https://dx.doi.org/10.1007/s10240-020-00121-1. [DFG+20]Weak_MetricJ. Dubédat, H. Falconet, E. Gwynne, J. Pfeffer, and X. Sun. Weak LQG metrics and Liouville first passage percolation. Probab. Theory Related Fields178, no. 1-2, (2020), 369–436. doi:10.1007/s00440-020-00979-6https://dx.doi.org/10.1007/s00440-020-00979-6. [DKRV16]LQG_SphereF. David, A. Kupiainen, R. Rhodes, and V. Vargas. Liouville quantum gravity on the Riemann sphere. Comm. Math. Phys.342, no. 3, (2016), 869–907. 
doi:10.1007/s00220-016-2572-4https://dx.doi.org/10.1007/s00220-016-2572-4. [DMS21]DMS21B. Duplantier, J. Miller, and S. Sheffield. Liouville quantum gravity as a mating of trees. Astérisque , no. 427, (2021), viii+257. doi:10.24033/asthttps://dx.doi.org/10.24033/ast. [DS11]DS11B. Duplantier and S. Sheffield. Liouville quantum gravity and KPZ. Invent. Math.185, no. 2, (2011), 333–393. doi:10.1007/s00222-010-0308-1https://dx.doi.org/10.1007/s00222-010-0308-1. [Geo16]Geo16A. Georgakopoulos. The boundary of a square tiling of a graph coincides with the Poisson boundary. Invent. Math.203, no. 3, (2016), 773–821. doi:10.1007/s00222-015-0601-0https://dx.doi.org/10.1007/s00222-015-0601-0. [GGJN19]Nach_CircleO. Gurel-Gurevich, D. C. Jerison, and A. Nachmias. A combinatorial criterion for macroscopic circles in planar triangulations. arXiv (2019). arXiv:1906.01612https://arxiv.org/abs/1906.01612. [GHS19]GHS_matedE. Gwynne, N. Holden, and X. Sun. A distance exponent for Liouville quantum gravity. Probab. Theory Related Fields173, no. 3-4, (2019), 931–997. doi:10.1007/s00440-018-0846-9https://dx.doi.org/10.1007/s00440-018-0846-9. [GM20]GM_ConfE. Gwynne and J. Miller. Confluence of geodesics in Liouville quantum gravity for γ∈ (0,2). Ann. Probab.48, no. 4, (2020), 1861–1901. doi:10.1214/19-AOP1409https://dx.doi.org/10.1214/19-AOP1409. [GM21]GM_UniqE. Gwynne and J. Miller. Existence and uniqueness of the Liouville quantum gravity metric for γ∈(0,2). Invent. Math.223, no. 1, (2021), 213–333. doi:10.1007/s00222-020-00991-6https://dx.doi.org/10.1007/s00222-020-00991-6. [GMS19]GMS_HarmonicE. Gwynne, J. Miller, and S. Sheffield. Harmonic functions on mated-CRT maps. Electron. J. Probab.24, (2019), Paper No. 58, 55. doi:10.1214/19-EJP325https://dx.doi.org/10.1214/19-EJP325. [GMS20]GMS_VoronoiE. Gwynne, J. Miller, and S. Sheffield. The Tutte embedding of the Poisson-Voronoi tessellation of the Brownian disk converges to √(8/3)-Liouville quantum gravity. Comm. Math. Phys.374, no. 2, (2020), 735–784. doi:10.1007/s00220-019-03610-5https://dx.doi.org/10.1007/s00220-019-03610-5. [GMS21]GMS_TutteE. Gwynne, J. Miller, and S. Sheffield. The Tutte embedding of the mated-CRT map converges to Liouville quantum gravity. Ann. Probab.49, no. 4, (2021), 1677–1717. doi:10.1214/20-aop1487https://dx.doi.org/10.1214/20-aop1487. [GMS22]GMS22E. Gwynne, J. Miller, and S. Sheffield. An invariance principle for ergodic scale-free random environments. Acta Math.228, no. 2, (2022), 303–384. doi:10.4310/ACTA.2022.v228.n2.a2https://dx.doi.org/10.4310/ACTA.2022.v228.n2.a2. [GP20]GP_squareA. Georgakopoulos and C. Panagiotis. Convergence of square tilings to the Riemann map. arXiv (2020). arXiv:1910.06886https://arxiv.org/abs/1910.06886. [GRV16]GRV_LBMC. Garban, R. Rhodes, and V. Vargas. Liouville Brownian motion. Ann. Probab.44, no. 4, (2016), 3076–3110. doi:10.1214/15-AOP1042https://dx.doi.org/10.1214/15-AOP1042. [HP17]Hutch_SmithT. Hutchcroft and Y. Peres. Boundaries of planar graphs: a unified approach. Electron. J. Probab.22, (2017), Paper No. 100, 20. doi:10.1214/17-EJP116https://dx.doi.org/10.1214/17-EJP116. [HS21]HS_CardyN. Holden and X. Sun. Convergence of uniform triangulations under the Cardy embedding. arXiv (2021). arXiv:1905.13207https://arxiv.org/abs/1905.13207. [Kah85]KahaneJ.-P. Kahane. Sur le chaos multiplicatif. Ann. Sci. Math. Québec9, no. 2, (1985), 105–150. [LP16]LP16R. Lyons and Y. Peres. Probability on trees and networks, vol. 42 of Cambridge Series in Statistical and Probabilistic Mathematics. 
Cambridge University Press, New York, 2016, xv+699. doi:10.1017/9781316672815https://dx.doi.org/10.1017/9781316672815. [MS17]IM_fourJ. Miller and S. Sheffield. Imaginary geometry IV: interior rays, whole-plane reversibility, and space-filling trees. Probab. Theory Related Fields169, no. 3-4, (2017), 729–869. doi:10.1007/s00440-017-0780-2https://dx.doi.org/10.1007/s00440-017-0780-2. [Nac20]Nac20A. Nachmias. Planar maps, random walks and circle packing, vol. 2243 of Lecture Notes in Mathematics. Springer, Cham, 2020, xii+118. École d'été de probabilités de Saint-Flour XLVIII—2018, École d'Été de Probabilités de Saint-Flour. [Saint-Flour Probability Summer School]. doi:10.1007/978-3-030-27968-4https://dx.doi.org/10.1007/978-3-030-27968-4. [Pol81]Pol81A. M. Polyakov. Quantum geometry of bosonic strings. Phys. Lett. B103, no. 3, (1981), 207–210. doi:10.1016/0370-2693(81)90743-7https://dx.doi.org/10.1016/0370-2693(81)90743-7. [RS87]RS87B. Rodin and D. Sullivan. The convergence of circle packings to the Riemann mapping. J. Differential Geom.26, no. 2, (1987), 349–360. doi:10.4310/jdg/1214441375https://dx.doi.org/10.4310/jdg/1214441375. [RV11]RV_KPZR. Rhodes and V. Vargas. KPZ formula for log-infinitely divisible multifractal random measures. ESAIM Probab. Stat.15, (2011), 358–371. doi:10.1051/ps/2010007https://dx.doi.org/10.1051/ps/2010007. [RV14]RVReviewR. Rhodes and V. Vargas. Gaussian multiplicative chaos and applications: a review. Probab. Surv.11, (2014), 315–392. doi:10.1214/13-PS218https://dx.doi.org/10.1214/13-PS218. [Sch00]Sh00O. Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. Israel J. Math.118, (2000), 221–288. doi:10.1007/BF02803524https://dx.doi.org/10.1007/BF02803524. [She16]She_ZipperS. Sheffield. Conformal weldings of random surfaces: SLE and the quantum gravity zipper. Ann. Probab.44, no. 5, (2016), 3474–3545. doi:10.1214/15-AOP1055https://dx.doi.org/10.1214/15-AOP1055.
http://arxiv.org/abs/2306.11352v1
20230620074343
Chemical Mapping of Excitons in Halide Double Perovskites
[ "Raisa-Ioana Biega", "Yinan Chen", "Marina R. Filip", "Linn Leppert" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall", "physics.chem-ph" ]
Chemical Mapping of Excitons in Halide Double Perovskites
MESA+ Institute for Nanotechnology, University of Twente, 7500 AE Enschede, The Netherlands
Department of Physics, University of Oxford, Clarendon Laboratory, Oxford, OX1 3PU, United Kingdom [email protected]
Department of Physics, University of Oxford, Clarendon Laboratory, Oxford, OX1 3PU, United Kingdom [email protected]
MESA+ Institute for Nanotechnology, University of Twente, 7500 AE Enschede, The Netherlands

Halide double perovskites are an emerging class of semiconductors with tremendous chemical and electronic diversity. While their bandstructure features can be understood from frontier-orbital models, chemical intuition for optical excitations remains incomplete. Here, we use ab initio many-body perturbation theory within the GW and the Bethe-Salpeter Equation approach to calculate excited-state properties of a representative range of Cs_2BB'Cl_6 double perovskites. Our calculations reveal that double perovskites with different combinations of B and B' cations display a broad variety of electronic bandstructures and dielectric properties, and form excitons with binding energies ranging over several orders of magnitude. We correlate these properties with the orbital-induced anisotropy of charge-carrier effective masses and the long-range behavior of the dielectric function, by comparing with the canonical conditions of the Wannier-Mott model. Furthermore, we derive chemically intuitive rules for predicting the nature of excitons in halide double perovskites using electronic structure information obtained from computationally inexpensive DFT calculations.

Halide double perovskites, also known as elpasolites, are a class of materials with the general formula A_2BB'X_6, where A is a monovalent cation such as Cs^+, balancing the charge of corner-connected BX_6 and B'X_6 metal-halide octahedra. First discovered at the end of the 19th century in the Rocky Mountains <cit.>, these stable, non-toxic, and earth-abundant semiconductors were initially studied as materials for radiation detection <cit.>. More recently, halide double perovskites have showcased their potential in a range of applications, including photovoltaics <cit.>, X-ray detection <cit.>, scintillation <cit.>, white light emission <cit.>, and photocatalysis <cit.>. This is in large part due to the tremendous chemical and structural diversity of this material class <cit.>, which can be achieved by chemical substitution at the B, B', and X sites <cit.>.

Understanding optical excitations in halide double perovskites is crucial for designing efficient and stable optoelectronic applications <cit.>. In particular, the binding energy of photoexcited electron-hole pairs (excitons) is a useful parameter to determine in studies of charge-carrier transport and recombination, and is thus key for device performance and design. Experimentally, exciton binding energies of halide perovskites have been extracted from optical absorption measurements either by fitting spectra using Elliott's theory <cit.> or by measuring optical absorption spectra under high magnetic fields <cit.>. These methods generally assume that excitons obey the Wannier-Mott (or hydrogenic) model, which in 3D yields the following expression for the energies of the bound exciton states: E_n = -μ/ε^2·1/n^2 (in atomic units), with the binding energy defined as E_B=-E_1.
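As a point of reference for the energy scales discussed below, the hydrogenic series above is straightforward to evaluate numerically once a reduced effective mass and a dielectric constant are specified. The short Python sketch below does this by converting the atomic-unit expression to eV via the hydrogen Rydberg; the values of mu and eps in the example are illustrative placeholders only, not parameters computed in this work.

RYDBERG_EV = 13.605693  # hydrogen Rydberg in eV

def exciton_level(n, mu, eps):
    # Wannier-Mott level E_n = -mu / (eps^2 n^2), converted to eV.
    # mu: reduced electron-hole mass in units of the free-electron mass;
    # eps: relative permittivity screening the electron-hole interaction.
    return -RYDBERG_EV * mu / (eps**2 * n**2)

def binding_energy(mu, eps):
    # Exciton binding energy E_B = -E_1.
    return -exciton_level(1, mu, eps)

# Illustrative (hypothetical) parameters: mu = 0.2 m_0, eps = 5
for n in (1, 2, 3):
    print(f"E_{n}s = {1000 * exciton_level(n, 0.2, 5.0):.1f} meV")
print(f"E_B  = {1000 * binding_energy(0.2, 5.0):.1f} meV")  # about 109 meV for these inputs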
The hydrogenic model has been used to understand the photophysics of a wide range of materials, from Pb-based halide perovskites <cit.> to MoS_2 and other layered materials <cit.>. Fundamentally, the hydrogenic model relies on two main assumptions, that electronic bands must be isotropic and parabolic, and that the dielectric screening of the electron-hole interaction must be uniform (described by the constant relative permittivity) <cit.>. The degree to which complex heterogeneous semiconductors abide by these tenets determines how accurate the hydrogenic picture is in describing excitons in a material, or (as we denote herein) how `hydrogenic' excitons are in a material. First-principles many-body perturbation theory within the GW approximation <cit.> and the Bethe-Salpeter Equation <cit.> (BSE) approach has played a particularly important role in quantitatively predicting the electronic and excited-state structure of halide perovskites. In particular, comparison of GW+BSE calculations with the Wannier-Mott model has demonstrated the hydrogenic nature of excitons in Pb-based halide perovskites <cit.>. In contrast, we and others recently showed that excitons in the double perovskite family Cs_2AgBX_6 (B=Bi, Sb and X=Br, Cl) <cit.> exhibit resonant excitons with binding energies between 170 and 450 meV which are strongly localized, with fine-structure features that differ from those computed using the hydrogenic model <cit.>. We assigned the non-hydrogenic character of excitons in these halide double perovskites to their chemical heterogeneity giving rise to an anisotropic electronic structure and dielectric screening <cit.>. In Ref. , the GW+BSE approach was used to compute the lowest bound excitonic states in Cs_2AgInCl_6 which correspond to dipole-forbidden ('dark') optical transitions, readily assigned to the symmetry of the single-particle electronic states at the conduction and valence band edges <cit.>. The lowest of these dark excitons was reported to have a binding energy of 250 meV, but its hydrogenic nature was not investigated. However, Ref.  used the Wannier-Mott model to calculate the exciton binding energy of Cs_2AgInCl_6 and found a value of 215 meV, similar to the first-principles result reported in Ref. . For other halide double perovskites with different combinations of metal sites, optoelectronic properties and exciton binding energies were shown to vary significantly too <cit.>. The picture that emerges from these reports suggests a rich landscape of excitons in halide double perovskites and calls for a systematic mapping of this landscape using first-principles calculations. In this letter, we use the GW+BSE approach to develop a holistic understanding of how the electronic structure of the alternating B- and B'-site cations influences the nature of excitons in halide double perovskites. By studying a representative set of halide double perovskites Cs_2BB'Cl_6, we show that exciton binding energies can be tuned by several orders of magnitude through chemical substitution at the B and B' sites. Furthermore, we demonstrate that direct band gap halide double perovskites with isotropic, parabolic band edges and small local field effects in their dielectric screening, feature delocalized hydrogenic excitons similar to their Pb-based single perovskite congeners. However, the absorption spectra of these materials depend considerably on the symmetry of the band edges, and can deviate significantly from expectations prescribed by canonical models. 
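For a strictly hydrogenic series, the binding energy can be reconstructed from the spacing of the lowest two s-like states alone, since E_ns = -E_B/n^2 gives
E_2s - E_1s = -E_B/4 - (-E_B) = (3/4)·E_B,  and hence  E_B = 4/3·(E_2s - E_1s).
This elementary relation underlies the fine-structure diagnostic used below: an appreciable mismatch between the estimate 4/3·(E_2s - E_1s) and the directly computed binding energy signals a departure from the Wannier-Mott picture.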
Among the heterogeneous double perovskites we study systematically, we find that some (but not all) exhibit an exciton fine structure that is well described by the hydrogenic model. However, the extent to which excitons present as non-hydrogenic depends strongly on the electronic structure of the alternating B and B' metals. Recently, Ref.  showed that the fully inorganic Pb-based halide perovskites feature highly delocalized hydrogenic excitons. Here we use the cubic phase of CsPbCl_3 (referred to as Pb in the following) as a prototypical case of a direct band gap single perovskite, which we compare to seven representative cubic double perovskites A_2BB'X_6 with A = Cs^+ and X = Cl^- (denoted by B/B' in the following). Our goal is to identify how the electronic structure of the B- and B'-site cations determines the hydrogenic nature of excitons in halide double perovskites. To this end, we explore double perovskites featuring metals from across the periodic table (Figure <ref>(a)): In/Bi, which is isoelectronic to Pb; Ag/In and Na/In, which feature a direct band gap and large band dispersion, but a distinctly different band edge orbital character than In/Bi; and Ag/Bi, Ag/Sb, Na/Bi and K/Bi, with an indirect band gap and low-dispersion band edges. With the exception of In/Bi, these double perovskites have all been synthesized and experimentally characterized <cit.>. The Na- and K-based compounds have been studied experimentally as favorable host structures for luminescent centers such as Mn^2+ and Sb^3+ <cit.>. However, to the best of our knowledge, we are the first to perform state-of-the-art GW+BSE calculations for these materials and report their exciton binding energies. We start by calculating the quasiparticle (QP) band structure, absorption spectra and exciton binding energies of all eight materials using the GW+BSE approach as implemented in the BerkeleyGW code <cit.> (see the Supporting Information (SI) and Figures S1 – S3 for further computational details and convergence studies). Figure <ref>(b) shows the exciton binding energies obtained from first-principles calculations (BSE) and estimated from the Wannier-Mott fine structure (WM). BSE exciton binding energies (E_BSE) are calculated as the difference between the direct QP band gap and the energy of the first excited state. Here, and unless otherwise noted, we quantify agreement with the Wannier-Mott model by computing the energy difference E^1s-2s = 4/3·(E_2s-E_1s), i.e., from the difference between the G_0W_0+BSE excitation energies of the 1s (E_1s) and 2s (E_2s) states, respectively (see SI and Figure S4 for exciton fine structures and the assignment of the 1s and 2s states). Table <ref> reports the QP band gaps and exciton binding energies of all eight materials. Figure <ref>(b) and Table <ref> allow for several observations: First, our selected double perovskites span a wide range of QP band gaps between ∼1 and 5 eV, which are inversely proportional to their dielectric constants ε_∞ (see Figure S5). Consequently, the exciton binding energies of these compounds differ by several orders of magnitude. Depending on the B and B' site occupation, E_BSE ranges from 16 meV (In/Bi) to ∼2 eV (K/Bi). However, as shown in Figure S6, E_BSE does not scale linearly with 1/ε_∞^2, suggesting that the Wannier-Mott model performs poorly for a subset of double perovskites. Indeed, the first-principles exciton binding energies of Ag/Bi, Ag/Sb, Na/Bi, and K/Bi deviate by several hundred meV from the Wannier-Mott fine structure.
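For concreteness, the two binding-energy estimates compared above can be written as small helper functions. This is a minimal sketch of the definitions just given (E_BSE measured from the direct QP gap, and the Wannier-Mott estimate from the 1s–2s splitting of a hydrogenic series), not code used for the paper.

```python
def bse_binding_energy(e_gap_qp_direct, e_1s):
    """E_BSE: first BSE excitation energy measured from the direct QP gap (eV)."""
    return e_gap_qp_direct - e_1s

def wm_binding_from_fine_structure(e_1s, e_2s):
    """E^{1s-2s} = 4/3 (E_2s - E_1s): the binding energy implied by a hydrogenic
    series whose levels scale as 1/n^2 (eV)."""
    return 4.0 / 3.0 * (e_2s - e_1s)

def deviation_from_wm(e_gap_qp_direct, e_1s, e_2s):
    """Delta_WM = E_BSE - E^{1s-2s}; close to zero for a hydrogenic fine structure."""
    return bse_binding_energy(e_gap_qp_direct, e_1s) - wm_binding_from_fine_structure(e_1s, e_2s)
```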
In contrast, and despite their seemingly similar degree of chemical heterogeneity, Ag/In, In/Bi, and Na/In feature hydrogenic excitons, similar to the single perovskite Pb <cit.>. We therefore separate the studied double perovskites into two groups: materials with hydrogenic (Pb, In/Bi, Ag/In, Na/In) and materials with non-hydrogenic (Ag/Bi, Ag/Sb, Na/Bi, K/Bi) exciton fine structures. Notably, we find that Δ_WM = E_BSE - E^1s-2s does not necessarily increase with increasing exciton binding energy. In other words, the magnitude of the exciton binding energy does not by itself explain the observed non-hydrogenic fine structure. For example, Na/In features a hydrogenic 1s exciton with a very high binding energy of ∼600 meV. Not only the exciton binding energies, but also the absorption spectra of these eight materials differ significantly. In Figure <ref>(c), (e) and (d), (f) we show the calculated linear optical absorption spectra of representative double perovskites with hydrogenic and non-hydrogenic excitons, respectively. The absorption spectra of all other materials can be found in Figure S7 of the SI. Inclusion of electron-hole interactions red-shifts the absorption spectra as compared to the independent-particle picture. The absorption spectrum of In/Bi exhibits a distinct excitonic feature at the absorption onset, which appears suppressed in comparison to the excitonic feature of Pb. This is due to the significantly lower band gap and exciton binding energy of this material, and to the arbitrary broadening of 50 meV chosen throughout. In agreement with previous results <cit.>, we observe that Ag/In and Na/In have a weak absorption onset and do not exhibit a signature excitonic peak. Their absorption coefficient is also one order of magnitude lower than that of the other materials. This is in line with the dipole-forbidden transitions between the valence and conduction band edges <cit.> of these materials. In contrast, all four materials with non-hydrogenic excitons feature one or several distinct excitonic peaks at the onset of absorption. In all materials studied, the lowest excited state is dipole forbidden. For Ag/In and Na/In, in which spin-orbit coupling (SOC) does not affect the valence and conduction band edges, this is a result of the angular-momentum symmetry of the atomic orbitals contributing to the valence and conduction band edges <cit.>. For the other six materials, the conduction band minimum (CBM) originates from Bi, Sb or Pb p orbitals, and the inclusion of SOC splits the four degenerate excited states arising from transitions between the valence band maximum (VBM) and the CBM into a lowest-energy dark state and three (nearly) degenerate bright states with oscillator strengths consistent with selection rules <cit.>. Having established these subsets of materials with hydrogenic and non-hydrogenic excitons, we continue by probing the degree to which the main assumptions of the Wannier-Mott model – isotropic, parabolic band edges and a uniform, isotropic dielectric constant – are fulfilled for these materials. We start by assessing the dispersion of their conduction and valence band edges by calculating the effective electron and hole masses (a complete overview of DFT and G_0W_0 band gaps and band structures can be found in Table S3 and Figures S8 and S9). The effective electron and hole masses along the principal axes of the effective mass tensor of all materials can be identified as longitudinal and transverse effective masses (Table <ref>), similar to other fcc semiconductors such as Si and GaAs <cit.>.
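Purely as an illustration of how such masses can be extracted, the sketch below fits a parabola to band energies sampled along one direction away from a band extremum and converts the curvature into an effective mass via m* = ħ²/(∂²E/∂k²). The sampling scheme and function names are assumptions for illustration, not the procedure detailed in the SI.

```python
import numpy as np

HARTREE_EV = 27.211386   # eV per hartree
BOHR_ANG = 0.529177      # angstrom per bohr

def effective_mass(k_frac, energies_ev, a_ang):
    """Parabolic-fit effective mass along one reciprocal-space direction.

    k_frac      : k-point displacements from the band extremum, in units of 2*pi/a
    energies_ev : DFT or QP band energies at those k-points (eV)
    a_ang       : cubic lattice constant (angstrom)
    Returns m* in units of the free-electron mass (hbar = m0 = 1 in atomic units);
    the value is negative for a valence-band (hole) extremum.
    """
    k_au = np.asarray(k_frac, float) * 2.0 * np.pi / (a_ang / BOHR_ANG)  # 1/bohr
    e_au = np.asarray(energies_ev, float) / HARTREE_EV                   # hartree
    curvature = 2.0 * np.polyfit(k_au, e_au, 2)[0]                       # d^2E/dk^2
    return 1.0 / curvature
```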
We describe our procedure for calculating and defining longitudinal and transverse effective masses in the SI and show the valence and conduction band edges along those directions in Figure <ref>(a) and (b), for In/Bi and Ag/Bi, representative of double perovskites with hydrogenic and non-hydrogenic excitons, respectively. Inspection of the energy band dispersion around the high-symmetry point of the lowest-energy direct transition – Γ = [0, 0, 0] for In/Bi and X = (2π/a)[0, 1, 0] for Ag/Bi – demonstrates that the band edges of In/Bi are isotropic around Γ, while those of Ag/Bi are highly anisotropic around X, featuring different curvatures in the longitudinal and transverse directions. Further analysis reveals that the effective mass anisotropy factor λ_μ = (μ_⊥/μ_∥)^1/3 is close to 1 for the double perovskites with hydrogenic excitons and exactly 1 for Pb. In Figure <ref>(c) we show Δ_WM as a function of the quantity Λ = 1-λ_μ, where Λ=0 corresponds to a fully isotropic material (e.g., Pb). This analysis shows that the relative deviation from the Wannier-Mott model scales almost linearly with the degree of anisotropy. The materials with hydrogenic excitons (red squares) are mostly isotropic and feature a deviation of no more than 14% from the Wannier-Mott model. In contrast, the double perovskites with non-hydrogenic excitons (blue dots) show a significantly higher degree of anisotropy and a large deviation from the Wannier-Mott model. We note that accounting for the effective mass anisotropy in the Wannier-Mott model following Ref.  reduces Δ_WM by 14% to 30% for the materials with non-hydrogenic excitons, but does not fully account for the observed deviations (see Table S4). Next, we probe the uniformity and isotropy of the dielectric screening, i.e., local field effects of the dielectric function, in two ways. First, we compute linear absorption spectra (i.e., the imaginary part of the dielectric function) after modifying the real part of the dielectric function such that Re[ϵ(𝐫, 𝐫'; ω)] = ε_∞. We find that optical absorption spectra and exciton binding energies change significantly only for those perovskites in which excitons do not display hydrogenic behavior (see Figure S7). Second, we analyze the spatial dependence of the head, i.e., the 𝐆=𝐆'=0 component, of the dielectric function. Figure <ref>(a) and Figure S10 show that we can fit this spatial dependence with the model dielectric function Re[ϵ(q,0)] = 1 + [ (ε_∞-1)^-1 + α(q/q_TF)^2 ]^-1, with α and the Thomas-Fermi wave vector q_TF as fitting parameters (tabulated in Table S5) and ε_∞ the RPA dielectric constant <cit.>. We then compare the length scale of dielectric screening – quantified by q_TF – with the reciprocal-space extent k_x of the excitonic wave function of the first excited state, by calculating the screening length parameter l = q_TF/k_x (Table <ref>). For this, we define k_x such that it includes 99% of the exciton probability density of the first excited state. Figure <ref>(b) shows that Δ_WM decreases as 1/l^2. Perovskites with hydrogenic excitons feature large screening-length parameters (l ≥ 11), corresponding to excitons highly localized in reciprocal space, for which the dielectric screening can be assumed to be uniform and constant. In contrast, for the subset of materials with non-hydrogenic excitons, the variation of the dielectric constant is significant on the length scale of their excitons, which are strongly delocalized in reciprocal space.
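As a concrete illustration of the two descriptors used above, the sketch below evaluates Λ = 1 - λ_μ and fits α and q_TF of the model dielectric function to sampled values of Re[ε(q,0)]. It is a minimal example that assumes the q-points and dielectric-function values are available as arrays; it is not the analysis script behind the figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def anisotropy_descriptor(mu_perp, mu_par):
    """Lambda = 1 - (mu_perp/mu_par)^(1/3); 0 for a fully isotropic band extremum."""
    return 1.0 - (mu_perp / mu_par) ** (1.0 / 3.0)

def model_eps(q, alpha, q_tf, eps_inf):
    """Re[eps(q,0)] = 1 + [ (eps_inf - 1)^-1 + alpha (q/q_TF)^2 ]^-1."""
    return 1.0 + 1.0 / (1.0 / (eps_inf - 1.0) + alpha * (q / q_tf) ** 2)

def fit_screening(q, eps_head, eps_inf):
    """Fit alpha and q_TF to the head (G = G' = 0) of the dielectric function."""
    popt, _ = curve_fit(lambda qq, a, qtf: model_eps(qq, a, qtf, eps_inf),
                        np.asarray(q, float), np.asarray(eps_head, float),
                        p0=(1.0, 0.5))
    return popt  # (alpha, q_tf)

def screening_length_parameter(q_tf, k_x):
    """l = q_TF / k_x, with k_x enclosing ~99% of the exciton probability density."""
    return q_tf / k_x
```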
Deviations from the Wannier-Mott model in these materials are also apparent when comparing the excitonic wave functions computed with G_0W_0+BSE with the radial probability densities of hydrogenic 1s and 2s excitonic wave functions, shown in Figure <ref>(c) and (d) for In/Bi and Ag/Bi, respectively (see Figure S11 for the other materials). Finally, we return to our goal of mapping the character of excitons in halide double perovskites to the band edge electronic structure obtained from computationally inexpensive DFT calculations or derived from chemical intuition (instead of a full solution of the BSE). We note in passing that the effective mass anisotropy can be approximated using DFT effective masses (see Table S3). Chemical intuition for the reciprocal-space location, parity, and dispersion of the VBM and CBM can be obtained from knowledge of the metal-orbital character of the band edges of halide perovskites, as shown in Ref.  using linear combinations of atomic orbitals and symmetry arguments. Table <ref> shows the calculated B-site orbital character and high-symmetry k-point of the VBM and CBM of all eight perovskites, in agreement with previous predictions <cit.>, and lists the B-site orbital character at the lowest direct transition from which excitons are derived. We observe that in all materials with non-hydrogenic excitons, the B-site orbital contributions to the band edges lead to an indirect band gap. The lowest direct transition in these materials occurs at the Brillouin zone boundaries (X in Ag/Bi and Ag/Sb, and L in K/Bi and Na/Bi). Furthermore, in all those materials the conduction band edge at the lowest direct transition is relatively flat along the transverse directions, which is a consequence of small B and B' site orbital overlap. In Ag/Bi and Ag/Sb, there is no Ag s character at the X points due to orbital symmetry. In Na/Bi and K/Bi, Na s and K s contribute to the conduction band edge at L, but their overlap with the neighboring Bi p orbitals is small, leading to high effective masses in the transverse direction. Materials with hydrogenic excitons have in common that a symmetry match of the B- and B'-site orbitals leads to a direct band gap at Γ with isotropic effective masses in the valence and conduction bands. Note that a direct gap can also arise in a double perovskite in which only one or none of the metal sites contributes to the band edges <cit.>. In Na/In, the valence band edge is rather flat because it does not feature metal-site orbital contributions. However, the conduction band minimum – with contributions from Na s and In s – is at Γ, dispersive and isotropic. Our predictions can also be extended to double perovskites with one B-site only, such as the vacancy-ordered perovskites with chemical formula Cs_2BX_6. GW+BSE calculations by Cucco et al. for Cs_2TeBr_6 and by Kavanagh et al. for Cs_2TiX_6 (X=I, Br, Cl) indicate highly localized excitons that are ill-described by the Wannier-Mott model for these materials <cit.>. Cs_2SnX_6, on the other hand, features delocalized hydrogenic excitons, as reported in both Ref.  and . Using Cs_2TeBr_6, Cs_2TiBr_6, and Cs_2SnBr_6 as examples, we calculated the orbital character of the band edges and the effective mass anisotropy using DFT-PBE+SOC (see Table S6). The band structure of Cs_2TeBr_6 is reminiscent of that of Ag/Bi and Ag/Sb, with Te 6s and 6p contributions leading to an indirect band gap. The lowest direct transition is at L, where the conduction band is derived from Te p orbitals alone and is therefore anisotropic.
Cs_2TiBr_6 is another indirect-gap semiconductor. Its relatively flat and highly anisotropic conduction band is derived from localized Ti d states and features weak Ti d–X p mixing. Cs_2SnBr_6, on the other hand, is comparable to Na/In, with a direct band gap at Γ and an isotropic conduction band derived from Sn s orbitals (Table <ref>). In conclusion, we performed a detailed first-principles study of the optoelectronic properties of a set of representative Cs_2BB'Cl_6 double perovskites and compared them with those of the single perovskite CsPbCl_3 with its known hydrogenic exciton series. Chemical substitution at the B and B' metal sites allows for the realization of a wide variety of electronic structure properties, with significant orbital-dependent effects on the anisotropy of charge-carrier effective masses and the dielectric screening. Furthermore, the band gaps of these perovskites also vary by several eV, leading to profound differences in the screening of excited electron-hole pairs. Our calculations show that the chemical heterogeneity inherently present in double perovskites due to their alternating B- and B'-site-centered metal-halide octahedra only leads to non-hydrogenic excitons for B and B' site combinations that result in indirect band gaps and large effective mass anisotropies at the band edges. In these double perovskites, excitons are strongly delocalized in reciprocal space and thus experience the full spatial variation of the dielectric screening. On the other hand, excitons in perovskites with isotropic and dispersive band edges and delocalized valence and conduction band states are more localized in reciprocal space. We show that our understanding of excitons in halide double perovskites is more general and can be extended to vacancy-ordered perovskites with a single B-site. The nature of excitons in double perovskites can thus be predicted based on electronic structure properties that can be extracted from computationally efficient DFT calculations. With these insights, our state-of-the-art GW+BSE calculations can provide a starting point for the development of tight-binding models for excitons, aid in the interpretation of experiments, and inspire further study of the excited-state properties of this complex quaternary family of materials.

§ SUPPORTING INFORMATION
Methodological and computational details (including convergence plots), DFT and QP electronic band structures, absorption spectra, exciton diagrams, radial probability densities, and the variation of the dielectric constant in reciprocal space.

We acknowledge computing resources provided by the Dutch national supercomputing center Snellius, supported by the SURF Cooperative, and PRACE for awarding access to the Marconi100 supercomputer at CINECA, Italy. This work was partially supported by the Dutch Research Council (NWO) under grant number OCENW.M20.337. MRF acknowledges support from the UK Engineering and Physical Sciences Research Council (EPSRC), Grant EP/V010840/1.

C. W. Cross and W. F. Hillebrand, Am. J. Sci. s3-26, 271 (1883).
H. G. Brittain and G. Meyer, J. Solid State Chem. 59, 183 (1985).
T. Pawlik and J.-M. Spaeth, Phys. Status Solidi B 203, 43 (1997).
E. T. McClure, M. R. Ball, W. Windl, and P. M. Woodward, Chem. Mater. 28, 1348 (2016).
A. H. Slavney, T. Hu, A. M. Lindenberg, and H. I. Karunadasa, J. Am. Chem. Soc. 138, 2138 (2016).
E. Greul, M. L. Petrus, A. Binek, P. Docampo, and T. Bein, J. Mater. Chem. A 5, 19972 (2017).
X.-G. Zhao et al., J. Am. Chem. Soc. 139, 2630 (2017).
G. Volonakis et al., J. Phys. Chem. Lett. 8, 772 (2017).
L. Debbichi et al., Adv. Mater. 30, 1707001 (2018).
A. Fakharuddin et al., Adv. Mater. 31, 1807095 (2019).
X. Yang et al., Adv. Funct. Mater. 30, 2001557 (2020).
G. Longo et al., ACS Energy Lett. 5, 2200 (2020).
M. T. Sirtl et al., Adv. Energy Mater. 12, 2103215 (2022).
W. Pan et al., Nat. Photon. 11, 726 (2017).
J. A. Steele et al., Adv. Mater. 30, 1804450 (2018).
K. Biswas and M. H. Du, Phys. Rev. B 86, 014102 (2012).
B. Yang et al., J. Am. Chem. Soc. 140, 17001 (2018).
J. Luo et al., Nature 563, 541 (2018).
L. A. Muscarella and E. M. Hutter, ACS Energy Lett. 7, 2128 (2022).
R. E. Brandt, V. Stevanović, D. S. Ginley, and T. Buonassisi, MRS Commun. 5, 265 (2015).
F. A. Faber, A. Lindmaa, O. A. von Lilienfeld, and R. Armiento, Phys. Rev. Lett. 117, 135502 (2016).
G. Volonakis et al., J. Phys. Chem. Lett. 7, 1254 (2016).
M. R. Filip and F. Giustino, Proc. Natl. Acad. Sci. 115, 5397 (2018).
Z. Deng et al., J. Mater. Chem. A 4, 12025 (2016).
A. Jain, O. Voznyy, and E. H. Sargent, J. Phys. Chem. C 121, 7183 (2017).
N. R. Wolf, B. A. Connor, A. H. Slavney, and H. I. Karunadasa, Angew. Chem. Int. Ed. 60, 16264 (2021).
L. Schade et al., ACS Energy Lett. 4, 299 (2019).
M. Roknuzzaman et al., Sci. Rep. 9, 718 (2019).
A. Dey, A. F. Richter, T. Debnath, L. Polavarapu, and J. Feldmann, ACS Nano 14, 5855 (2020).
R. J. Elliott, Phys. Rev. 108, 1384 (1957).
C. L. Davies et al., Nat. Commun. 9, 293 (2018).
A. D. Wright et al., J. Phys. Chem. Lett. 12, 3352 (2021).
A. Miyata et al., Nat. Phys. 11, 582 (2015).
M. Hirasawa, T. Ishihara, T. Goto, K. Uchida, and N. Miura, Physica B: Condens. Matter 201, 427 (1994).
K. Tanaka et al., Solid State Commun. 127, 619 (2003).
K. Galkowski et al., Energy Environ. Sci. 9, 962 (2016).
S. Wang et al., Nano Lett. 22, 7011 (2022).
A. Chernikov et al., Phys. Rev. Lett. 113, 076802 (2014).
M. Selig et al., Nat. Commun. 7, 13279 (2016).
A. Raja et al., Nat. Commun. 8, 15251 (2017).
G. H. Wannier, Phys. Rev. 52, 191 (1937).
A. Fetter and J. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971).
M. Rohlfing and S. G. Louie, Phys. Rev. Lett. 81, 2312 (1998).
M. Rohlfing and S. G. Louie, Phys. Rev. B 62, 4927 (2000).
M. Bokdam et al., Sci. Rep. 6, 28618 (2016).
M. R. Filip, J. B. Haber, and J. B. Neaton, Phys. Rev. Lett. 127, 067401 (2021).
M. Palummo, E. Berrios, D. Varsano, and G. Giorgi, ACS Energy Lett. 5, 457 (2020).
R. I. Biega, M. R. Filip, L. Leppert, and J. B. Neaton, J. Phys. Chem. Lett. 12, 2057 (2021).
W. Meng et al., J. Phys. Chem. Lett. 8, 2999 (2017).
M. Jain, P. Bhumla, M. Kumar, and S. Bhattacharya, J. Phys. Chem. C 126, 6753 (2022).
M. Luo et al., Solid State Commun. 352, 114812 (2022).
C.-J. Yu et al., RSC Adv. 13, 16012 (2023).
S. Adhikari and P. Johari, Phys. Rev. Mater., accepted (2023).
T. T. Tran, J. R. Panella, J. R. Chamorro, J. R. Morey, and T. M. McQueen, Mater. Horiz. 4, 688 (2017).
D. Bartesaghi et al., J. Phys. Chem. C 122, 4809 (2018).
J. C. Dahl et al., Chem. Mater. 31, 3134 (2019).
J. D. Majher, M. B. Gray, T. A. Strom, and P. M. Woodward, Chem. Mater. 31, 1738 (2019).
A. Noculak et al., Chem. Mater. 32, 5118 (2020).
M. B. Gray et al., J. Mater. Chem. C 8, 6797 (2020).
J. Deslippe et al., Comput. Phys. Commun. 183, 1269 (2012).
B. A. Barker et al., Phys. Rev. B 106, 115127 (2022).
J. Even et al., J. Phys. Chem. C, 10161 (2015).
M. A. Becker et al., Nature 553, 189 (2018).
G. Biffi, Y. Cho, R. Krahne, and T. C. Berkelbach, J. Phys. Chem. C 127, 1891 (2023).
P. Y. Yu and M. Cardona, Fundamentals of Semiconductors (Springer).
A. Schindlmayr, Eur. J. Phys. 18, 374 (1997).
G. Cappellini, R. Del Sole, and F. Bechstedt, Phys. Rev. B 47, 9892 (1993).
A. H. Slavney, B. A. Connor, L. Leppert, and H. I. Karunadasa, Chem. Sci. 10, 11041 (2019).
S. R. Kavanagh et al., J. Phys. Chem. Lett. 13, 10965 (2022).
B. Cucco, C. Katan, J. Even, M. Kepenekian, and G. Volonakis, ACS Mater. Lett. 5, 52 (2023).
http://arxiv.org/abs/2306.01435v1
20230602104935
Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics
[ "Zonghan Yang", "Peng Li", "Tianyu Pang", "Yang Liu" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Improving Adversarial Robustness of Deep Equilibrium Models with Explicit Regulations Along the Neural Dynamics

Zonghan Yang^1, Peng Li^2,3, Tianyu Pang^4, Yang Liu^1,2,3
^1 Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China
^2 Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
^3 Shanghai Artificial Intelligence Laboratory, Shanghai, China
^4 Sea AI Lab, Singapore
Correspondence to: Peng Li, Yang Liu.

Deep equilibrium (DEQ) models replace the multiple-layer stacking of conventional deep networks with a fixed-point iteration of a single-layer transformation. Having been demonstrated to be competitive in a variety of real-world scenarios, the adversarial robustness of general DEQs becomes increasingly crucial for their reliable deployment. Existing works improve the robustness of general DEQ models with the widely used adversarial training (AT) framework, but they fail to exploit the structural uniquenesses of DEQ models. To this end, we interpret DEQs through the lens of neural dynamics and find that AT under-regulates intermediate states. Besides, the intermediate states typically provide predictions with a high prediction entropy. Informed by the correlation between the entropy of dynamical systems and their stability properties, we propose reducing prediction entropy by progressively updating inputs along the neural dynamics. During AT, we also utilize random intermediate states to compute the loss function. Our methods regulate the neural dynamics of DEQ models in this manner. Extensive experiments demonstrate that our methods substantially increase the robustness of DEQ models and even outperform the strong deep network baselines.

§ INTRODUCTION

Deep equilibrium (DEQ) models <cit.> are a novel type of neural architecture. Different from traditional deep networks with multiple stacked layers, DEQ models explicitly cast the forward propagation as a fixed-point iteration process with a single-layer transformation:

z^⋆ = f_θ(z^⋆; x),

where f_θ is the transformation parameterized by θ, x is the input, and z^⋆ is the equilibrium solved by fixed-point solvers. While taking O(1) memory cost because of the single layer, DEQ models are validated to attain competitive performance compared with state-of-the-art traditional deep networks in different applications <cit.>. Promising in real-world practice, DEQ models necessitate adversarial robustness for their reliable deployment, which however remains underexplored. Most existing works that study robust DEQ models are dedicated to certifying the robustness <cit.> of monotone DEQs. The monotone DEQ <cit.> is a type of DEQ model that enjoys equilibrium convergence guarantees but requires sophisticated layer and weight parameterization. In addition, the scalability of robustness certification methods also limits the scope of these prior arts for practical use. In contrast, <cit.> focus on general DEQs and study their empirical adversarial robustness by accelerating gradient-based attacks. <cit.> propose white-box robustness evaluation protocols for general DEQs and conduct fair comparisons between DEQs and traditional deep networks under the adversarial training (AT) framework <cit.>. As reported in <cit.>, however, the white-box robustness performance of general DEQs still falls behind that of their deep network counterparts.
As adversarial training is a general technique that can be applied to all kinds of differentiable neural architectures, we ask the following question: Is it possible to exploit the structural uniquenesses of DEQ models to further improve their adversarial robustness? Fortunately, the neural dynamics perspective on DEQ models brings insights into this problem. The neural dynamics perspective interprets the evolution of intermediate states in a neural model as a dynamical system <cit.>. This perspective is naturally suitable for DEQs, as their structure in Eq. (<ref>) explicitly formulates the neural dynamics. From this perspective, robust neural models correspond to neural dynamics without a drastic shift in the terminal state given a perturbed initial state <cit.>, and AT enforces the terminal state of the neural dynamics to give similar predictions whether the input is clean or perturbed <cit.>. As shown in Figure <ref>, for DEQ models, AT does not explicitly regulate the intermediate states along the neural dynamics. However, <cit.> shows that even if a DEQ model is trained with AT, its intermediate states can still be attacked, leading to poor robustness performance. This finding highlights the structural specialty that differentiates DEQs from deep networks, and paves the way for explicit regulations along the neural dynamics to improve their adversarial robustness. In this work, we exploit the structural properties of DEQs to explicitly regulate their neural dynamics for improved robustness. Drawing inspiration from the entropy of dynamical systems and its implications for system stability and robustness, we propose to reduce prediction entropy by progressively updating the inputs along the DEQ neural dynamics. We also randomly select intermediate states along the neural dynamics for loss computation in adversarial training. In this way, our methods integrate explicit regulations along the neural dynamics of DEQ models and boost the robustness of general DEQs: On the standard white-box robustness evaluation benchmark CIFAR-10 with perturbation range ℓ_∞ = 8/255, our DEQs achieve significantly better white-box adversarial robustness than the results in <cit.>, and even outperform the strong deep network baselines with benchmarked adversarial robustness results in <cit.>. We have also conducted several ablation studies to validate the effectiveness of our proposed methods. Our code is available at <https://github.com/minicheshire/DEQ-Regulating-Neural-Dynamics>.

§ PRELIMINARIES

Deep equilibrium models are a class of emerging neural architectures <cit.>. Of all deep networks, the closest resemblance to a DEQ model is an M-layer deep network with weight sharing and input injection. The forward propagation process of such a deep network would be

z^[m+1] = f_θ(z^[m]; x),   z^[0] = 0,

where x ∈ℝ^l is the input, and z^[m]∈ℝ^d is the intermediate state after the m-th layer, with m = 0, ⋯, M-1. f_θ: ℝ^d ×ℝ^l →ℝ^d forms the transformation at each layer, and θ is the weight shared across the different layers of the deep network. When implementing this network in an automatic differentiation engine (e.g., PyTorch <cit.>), the f_θ(·; x) transformation is sequentially composed M times, and all the intermediate states z^[1], ⋯, z^[M] need to be stored. DEQ models seek the limit of Eq. (<ref>) when the number of layers goes to infinity: Assuming the convergence of the process, as m →∞, the state z^[m] converges to the equilibrium z^⋆ with z^⋆ = f_θ(z^⋆; x), as stated in Eq. (<ref>).
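As a minimal illustration of the weight-tied, input-injected network described above, the sketch below applies one single-layer transformation M times starting from z^[0] = 0. The concrete cell (a linear map of the state plus an injected linear map of the input followed by a tanh) is a hypothetical choice for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class WeightTiedNet(nn.Module):
    """M applications of one layer f_theta(z; x) with input injection."""

    def __init__(self, d: int, l: int, num_layers: int = 8):
        super().__init__()
        self.W = nn.Linear(d, d)   # acts on the state z
        self.U = nn.Linear(l, d)   # injects the input x at every layer
        self.num_layers = num_layers

    def f(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.W(z) + self.U(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x.new_zeros(x.size(0), self.W.out_features)   # z^[0] = 0
        for _ in range(self.num_layers):                   # z^[m+1] = f_theta(z^[m]; x)
            z = self.f(z, x)
        return z                                           # autograd stores every z^[m]
```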
DEQ models cast the "infinite" forward process of Eq. (<ref>) as a fixed-point iteration process to solve for the equilibrium z^⋆ in Eq. (<ref>). While the most straightforward way to do this is exactly Eq. (<ref>), in DEQs advanced fixed-point solvers (e.g., Broyden's method <cit.>) are used to accelerate the convergence of the iteration. For simplicity, we abusively refer to the z's as the intermediate states of DEQs in the following and discard Eq. (<ref>). After N iterations of the forward solver, z^[N] is returned and is numerically treated as z^⋆ = z^[N]. Now we provide the formal definition of the neural dynamics of DEQ models, which are at the heart of our study. Neural dynamics reflect the evolution of the intermediate states in a neural model. For DEQ models, the neural dynamics consist of the sequence {z^[1], ⋯, z^[N]}, which satisfies

z^[t+1] = Solve(z = f_θ(z; x); z^[≤ t])

for 0 ≤ t < N and z^[0] = 0. Solve is the fixed-point solver for the forward process in DEQs, which is usually instantiated with Broyden's method <cit.>. At iteration t, the solver Solve uses z^[≤ t] to compute the next intermediate state z^[t+1] for the fixed-point equation. While the {z^[t]} are not stored in memory, <cit.> demonstrate that the intermediate states z^[t] can exhibit higher robustness than the equilibrium state z^⋆, and that attacks can be constructed for the intermediate z^[t]'s. In our work, we explicitly regulate the behavior of all {z^[t]} along the neural dynamics in DEQ models to improve their robustness.

§ METHODOLOGY

In this section, we demonstrate our approaches that facilitate explicit regulations along the neural dynamics of DEQs to improve robustness. We start with a short overview with two observations about the structural properties of DEQs in Sec. <ref>. We then exploit the two uniquenesses of DEQs and propose two regulation methods in Secs. <ref> and <ref>.

§.§ Overview

Consider a trained DEQ model on an image classification task. Its weight θ is fixed, and the forward iteration number N is constant. From Eq. (<ref>), it can be seen that the neural dynamics are fully decided by the input. How does a perturbation to x affect the neural dynamics in the DEQ model? Assume that a clean input x induces {z^[t]}, and a perturbed input x + Δ induces {z̃^[t]}. To get an intuitive understanding of the difference between them, we replace the Solve in Eq. (<ref>) with the most straightforward unrolling for all t = 1, ⋯, N. For intermediate step t, we have

z^[t+1] = f_θ(z^[t]; x),   z̃^[t+1] = f_θ(z̃^[t]; x + Δ).

The difference between z̃^[t+1] and z^[t+1] reads

‖z̃^[t+1] - z^[t+1]‖ = ‖f_θ(z̃^[t]; x+Δ) - f_θ(z^[t]; x)‖
= ‖[f_θ(z̃^[t]; x+Δ) - f_θ(z̃^[t]; x)] + [f_θ(z̃^[t]; x) - f_θ(z^[t]; x)]‖
≤ ‖f_θ(z̃^[t]; x+Δ) - f_θ(z̃^[t]; x)‖ (perturbation from Δ) + ‖f_θ(z̃^[t]; x) - f_θ(z^[t]; x)‖ (accumulation in z̃^[t]).

According to Eq. (<ref>), the difference between z̃^[t+1] and z^[t+1] is inherited from the difference between z̃^[t] and z^[t], and is further amplified by the perturbed input x + Δ. Fortunately, DEQs differentiate from traditional deep networks <cit.> in two structural uniquenesses: (i) The input x is involved in each iteration along the neural dynamics of DEQs. In contrast, conventional deep residual networks do not follow a layer-wise input-injection design. (ii) All of the intermediate states along the neural dynamics can be seamlessly sent into the classification head of the DEQ model for predictions. By comparison, for traditional deep networks like ResNets, the intermediate representations are often of different shapes from the input of the top classification layer.
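Before moving to the proposed regulations, the neural dynamics defined above can be made concrete with a minimal forward routine. The paper uses Broyden's method as the solver; the sketch below instead runs plain fixed-point iteration with a relative-residual stopping criterion, purely to make the trajectory {z^[t]} explicit. The generic callable `f`, the tolerance, and the initial state argument are assumptions for illustration.

```python
import torch

@torch.no_grad()
def deq_forward(f, x, z0, num_iters=8, tol=1e-3):
    """Naive fixed-point iteration z <- f(z, x), recording the neural dynamics.

    f   : callable implementing the single-layer transformation f_theta(z; x)
    z0  : initial state z^[0] (zeros in DEQ models)
    Returns the approximate equilibrium z^[N] and the trajectory [z^[1], ..., z^[N]].
    """
    z, trajectory = z0, []
    for _ in range(num_iters):
        z_next = f(z, x)
        rel_err = (z_next - z).norm() / (z_next.norm() + 1e-8)
        z = z_next
        trajectory.append(z)
        if rel_err < tol:   # converged: z is numerically treated as z*
            break
    return z, trajectory
```

In training, gradients at the equilibrium are typically obtained with implicit differentiation rather than by backpropagating through such a loop, which is what keeps the memory cost at O(1).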
According to these two properties of DEQ models, we propose two methods for neural dynamics regulation in Secs. <ref> and <ref>. §.§ Input Entropy Reduction Along Neural Dynamics §.§.§ Observations of Prediction Entropy As noted in Sec. <ref>, the input is applied along the neural dynamics of DEQ models, and it can be either clean or perturbed. A well-trained DEQ model exhibits neural dynamics that follow a converged fixed-point iteration process for any input. As shown in Fig. <ref>-(a), for a DEQ model trained with AT, the relative error at the equilibrium state, ‖f_θ(z^[N]; x) - z^[N]‖_2 / ‖f_θ(z^[N]; x)‖_2, follows similar distributions for clean and perturbed inputs. An adversarial example x+Δ nevertheless leads to an incorrect prediction at z̃^[N]. However, the initial state z^[0] in DEQ models is always set to 0. This implies that, with a perturbed input x+Δ, the neural dynamics {z̃^[t]} diverge in their predictions from the {z^[t]} induced by the clean x. While the DEQ model is “determined” in the sense that its neural dynamics converge to a fixed point, we investigate whether the intermediate states along the neural dynamics are equally “determined” in their predictions. To characterize this “determination” of predictions, we adopt entropy as the measurement. On the one hand, a higher prediction entropy indicates a flatter probability distribution, with smaller probability differences among classes. On the other hand, the theory of entropy in dynamical systems <cit.> shows that a dynamical system with higher entropy has larger Lyapunov exponents and is therefore more inclined to be unstable (see a brief discussion in Appendix <ref>). Inspired by this, we compute the prediction entropy of an intermediate state z^[t]. Formally, the prediction entropy of z^[t] is defined as H(z^[t]) = -∑_j=1^C p_j^[t]log p_j^[t], where 𝐩^[t] = h_Φ(z^[t]) = [p_j^[t]] ∈ℝ^C is the predicted class-probability vector, and h_Φ: ℝ^d→ℝ^C denotes the classification head (including its softmax) of the DEQ model with parameters Φ and C classes. We use Eq. (<ref>) to investigate the prediction entropy of the equilibrium state of DEQ models with clean or perturbed inputs. Surprisingly, as shown in Fig. <ref>-(b), we find that, from a distributional perspective, perturbed inputs result in equilibrium states with higher prediction entropy than clean inputs. We infer that neural dynamics with high prediction entropy are prone to give incorrect predictions. We further visualize an example of the prediction entropy along all the intermediate states of the neural dynamics in Fig. <ref>-(c), with the clean x or the perturbed x+Δ as the input. With the clean input x, the prediction entropy diminishes along the corresponding neural dynamics. In contrast, with the perturbed input x+Δ, the prediction entropy of each intermediate state remains high, and the resulting prediction is mistaken. To guide the perturbed dynamics toward the clean one, we propose to reduce the prediction entropy by progressively updating the input along the neural dynamics. In the next section, we provide the corresponding optimization framework. §.§.§ Input Entropy Reduction Framework Given a potentially perturbed input x, the entropy reduction framework with progressive input updates is formalized as min_δ^[1],⋯,δ^[N]  H(z^[N]), s.t. z^[t+1] = Solve(z = f_θ(z; x+δ^[t]); z^[≤ t]),  δ^[t]∈ [-ϵ, ϵ]^l, where t=1,⋯,N, and {δ^[t]} are the updates on the input with range constraints. From (<ref>), the {δ^[t]} can be viewed as the controllers along the neural dynamics.
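As a small illustration of the entropy defined above, the following sketch computes H(z^[t]) at every state along the toy dynamics from the previous examples; the linear head h_Φ and the class count are placeholders of ours, not the classification head of the actual model.

import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """H(p) = -sum_j p_j log p_j for each row of a batch of raw class scores."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

# entropy profile along the neural dynamics (head h_Phi is illustrative, C = 10)
head = torch.nn.Linear(64, 10)
for t, z_t in enumerate(neural_dynamics(cell, x), start=1):
    H_t = prediction_entropy(head(z_t)).mean()
    print(f"t={t}: mean prediction entropy H(z^[t]) = {H_t.item():.3f}")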
Framework (<ref>) then forms an optimal control problem with the aim of guiding the neural dynamics towards reduced entropy at the final state ^[N]. As demonstrated in <cit.>, by solving the problem with Pontryagin Maximum Principle (PMP) <cit.>, the gradient descent methods are derived to obtain the optimal controllers {^[t]}. We employ the iterative projected gradient descent framework to optimize for ^[t]. Specifically, at iteration t, after ^[t+1] in the neural dynamics is obtained by Eq. (<ref>) given ^[≤ t] and ^[t], the input ^[t] is updated to reduce prediction entropy at ^[t+1] for R iterations: ^[t]_i= Proj_[-ϵ, ϵ]^l( ^[t]_i-1-β∇_ H(f_θ(^[t+1]; ^[t]_i-1)) ), with β as the step size, i=1, ⋯, R, _0^[t]=^[t], and ^[t+1]=^[t]_R forms +^[t+1]. After the updates of the input, the state ^[t+1] is re-calculated with ^[t+1] in Eq. (<ref>). Solving for ^[t] for each t can be time-consuming. In the implementation, we intervene to optimize for ^[t] every T_f states along the neural dynamics, i.e., requiring ^[pT_f+q]=^[pT_f+1], ∀ p,q∈ℕ,  p≥0,  1<q≤ T_f. The whole process of the input entropy reduction is shown in Algo. <ref>. In this way, the original neural dynamics are mounted to a regulated “orbit” from ^[t], which would crucially impact the states afterward and the predictions they give, and eventually result in reduced H(^[N]). Our method is also related to the joint optimization for inputs and states in DEQ models <cit.>. However, in this work, we do not couple the update of ^[t+1] with Eq. (<ref>) and refrain from the calculation of the joint Jacobian for and . We leave more efficient and effective regulations for neural dynamics in DEQs as future work. §.§ Loss from Random Intermediate States =-1 In addition to progressively updating the input during testing, we propose another technique for the explicit regulation of the neural dynamics in DEQ models. As shown in Eq. (<ref>), the second term reflects the difference accumulated in ^[t] from ^[t]. A straightforward approach to imposing explicit regulations on the intermediate state ^[t] is to calculate the adversarial loss using random intermediate states during AT. Formally, for the vanilla AT baselines, the loss function L in the objective is calculated using only the equilibrium state: min_θ,Φmax_Δ∈ [-ϵ, ϵ]^l L(h_Φ(^[N]), y), with the equilibrium state ^[N] satisfying Eq. (<ref>) with +Δ as the input, and y is the ground-truth label for . Our method calculates Eq. (<ref>) with random intermediates: min_θ,Φmax_Δ∈ [-ϵ, ϵ]^l𝔼_i ∈𝒰[1,N]  L(h_Φ(^[i]), y), where we randomly select intermediate states ^[i] inside the forward fixed-point solver for loss computation. In this way, all the intermediates are imposed with explicit regulations without violating the O(1) memory constraint of DEQ models. We thus expect their neural dynamics to be less deviated under attacks and exhibit higher robustness. § EXPERIMENTS Setup. We follow the settings in <cit.> of the configurations of DEQ model architecture: the large-sized DEQ with its parameter count similar to ResNet-18. The number of iterations N in the forward solver is 8. For adversarial training frameworks, we use both PGD-AT <cit.> and TRADES <cit.>. PGD-AT is used in the previous study on robust DEQ models <cit.>, while the regularization term for robustness in TRADES shares similarity with Eq. (<ref>). We experiment on CIFAR-10 <cit.> with ℓ_∞ perturbation range ϵ=8/255. The default hyperparameter setting for the Sec. <ref> method is β=2/255, R=10, and T_f=2. 
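A minimal sketch of one intervention of the input-update rule above is given below, assuming the toy cell, head, and prediction_entropy helper from the earlier examples; for simplicity a single application of f_θ stands in for the call to the fixed-point solver, so this illustrates the projected-gradient entropy reduction rather than reproducing the exact algorithm of the paper.

import torch

def entropy_reduction_step(cell, head, x, delta, z_prev, beta=2/255, R=10, eps=8/255):
    """One intervention of input entropy reduction (simplified sketch):
    R projected-gradient updates on the input offset delta so that the
    next state yields a lower-entropy prediction."""
    for _ in range(R):
        delta = delta.detach().requires_grad_(True)
        z_next = cell(z_prev, x + delta)                # stand-in for one solver step
        H = prediction_entropy(head(z_next)).sum()
        (grad,) = torch.autograd.grad(H, delta)
        delta = (delta - beta * grad).clamp(-eps, eps)  # Proj onto [-eps, eps]^l
    return delta.detach()

# usage along the dynamics: every T_f solver iterations, refresh delta with this
# routine and recompute the next state with the updated input x + delta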
The detailed settings are listed in Appendix <ref>. We follow to use the white-box robustness evaluation protocol proposed in <cit.>: We use the early-state defense by selecting the intermediate state with the highest accuracy under ready-made PGD-10 to compute for predictions. As the intermediate state is non-differentiable, we adopt the proposed intermediate unrolling method that estimates the gradients used to attack the state. Specifically, the gradients used in the attacks are calculated by unrolling an intermediate state ^[i] for K_a steps: _a^[i+j] = (1-λ)_a^[i+j-1] + λ f_θ (_a^[i+j-1]; ), with _a^[i] = ^[i] and j=1,⋯,K_a, and _a^[i+K_a] is used to compute the loss and take the gradient. In our work, we provide a systematic evaluation by covering all 1 ≤ i ≤ N=8, 1 ≤ K_a ≤ 9, and λ∈{0.5, 1}. Unless specified, all of the single robustness performance that is reported “under intermediate attacks” is the minimum accuracy over 8×9×2=144 attacks in the form of Eq. (<ref>). §.§ Main Results =-1 Table <ref> shows the robustness comparisons among traditional deep networks ResNet-18 and the DEQ models with a similar amount of parameters (DEQ-Large). For the DEQ models, we use the original adversarial training framework and compose it with our methods. According to the results, Both the test-time input entropy reduction in Sec. <ref> and the training-time loss computation with random intermediates in Sec. <ref> improve DEQ model robustness over the vanilla AT baselines. Using our methods, the robustness performance of the DEQ-Large models significantly out-performs the DEQ-Large baselines in <cit.>, and even surpasses that of ResNet-18 <cit.>. Intermediate attacks are strong. As we conduct comprehensive intermediate-state attack experiments, the validated robustness is more reliable than only using ready-made attacks. Table <ref> demonstrates that the intermediate-state PGD attacks always result in a larger decrease of white-box adversarial robustness than off-the-shelf attacks at the final state. According to Table <ref>, the effect of AutoAttack <cit.> is usually stronger than the PGD-10 attacks in TRADES experiments. This is opposite to the performance reported in <cit.>, as they argue that the AutoAttack will overfit to the inaccurate gradient estimations by unrolled intermediates and result in attacks weaker than PGD-10. However, in our work, we circumvent the overly inaccurate gradient estimations by scrutinizing all possible pairs of (i, K_a) in Eq. (<ref>), leading to stronger adaptive-size PGD in AutoAttack. The composition of our methods yields the top robustness in the DEQ models trained with TRADES. The possible reason is that the regularization term in TRADES is more suitable for the regulations on Eq. (<ref>). In the following sections, we conduct further evaluation and analysis with the TRADES-trained DEQ models. §.§ Robustness Evaluation for Test-Time Defense Sec. <ref> describes the algorithm for prediction entropy reduction by iteratively updating the input along the neural dynamics. As the algorithm works at inference time, we follow the guidelines in <cit.> to evaluate its robustness. Based on the intermediate-state Eq. (<ref>) as adaptive attacks for DEQ models, we transfer the attacks across different defense methods of ours (“TRADES Baseline”, “TRADES + Sec. <ref>”, “TRADES + Sec. <ref>”, and “TRADES + Secs. <ref> and <ref>”), and further employ the adaptive-size APGD-CE and the score-based Square <cit.> attacks. It is noted that our Sec. 
<ref> defense operates on the neural dynamics, which lie along the forward pass of DEQ models only. Therefore, following the comments from <cit.> to <cit.>, we do not equip BPDA <cit.> with the attacks for our defense as it is unnecessary. Attack transferability. We transfer all the intermediate-state attacks among the four different defense methods based on TRADES to one another. The minimum robust accuracy for each setting is reported in Table <ref>. It is shown that for each defense method, its strongest attack is still constructed against the method itself. For a robustly-trained DEQ model (either by the TRADES framework or plus the Sec. <ref> method), the adversarial examples have similar effects whether the Sec. <ref> method is used or not. Exploiting APGD-CE and Square attacks. We select the (i, K_a) pairs in Eq. (<ref>), which form the intermediate-state attacks that trigger severe drops in accuracy with PGD-10. Specifically, (i,K_a)=(3,5) forms the strongest attacks, and the results in Table <ref> are reported under this setting as well. We then implement these (i, K_a) settings in APGD-CE and Square attacks. Table <ref> shows the effect of these attacks on our strongest defense “TRADES + Secs. <ref> & <ref>”. For Square attacks, due to the time limit, we evaluate the performance on 1,000 test samples. According to the results, the adaptive-size APGD-CE is stronger than PGD, and the defense method retains higher accuracy under Square attacks than PGD and APGD-CE attacks. These phenomena agree with the performance of deep networks <cit.>. Finally, the robustness performances among different settings are similar, indicating the robustness of our defense to the configurations in Eq. (<ref>) as well. § ANALYSIS AND DISCUSSION §.§ Robustness Improvement Along Neural Dynamics In this section, we conduct an in-depth analysis of the robustness improvement from our methods with TRADES by investigating the neural dynamics of the DEQ models. We use PGD-10 as it also reliably reflects model robustness in Table <ref>, while being faster than the adaptive-size attacks. We first plot the PGD-10 robust accuracy for all the attacks in Eq. (<ref>) at different intermediate states ^[t] with various unrolling steps K_a and λ∈{0.5, 1.0}. In Fig. <ref>-(a), we plot the lowest robust accuracy of the model under the attack constructed by unrolling ^[t] as a solid dot. The accuracy results under other attacks with ^[t] unrolling are depicted in transparency. Along the neural dynamics, it is observed that the strongest attack lies around the middle, namely, by unrolling the intermediate state of ^[3] - ^[5]. The overall robustness performance of the model is determined by the lowest accuracy among the solid dots. Our composed method of Secs. <ref> and <ref> forms the strongest defense, as the lowest accuracy it obtains is much higher than all the other methods. Opposite to <cit.>, we find that λ=0.5 in Eq. (<ref>) results in attacks always stronger than λ=1.0 (see detailed comparisons in Appendix <ref>). =-1 To validate the effectiveness of the Sec. <ref> method, we quantize the entropy reduction Δ H(^[t]) given each input from the validation set perturbed by the strongest intermediate PGD attacks. The distribution of Δ H(^[t]) for each t is illustrated in Fig. <ref>-(b). As Sec. <ref> updates the input along the neural dynamics, each ^[t] is correspondingly modified, thus leading to the difference in terms of its prediction entropy. According to Fig. <ref>-(b), Δ H(^[t])<0 for each t>1. 
This means the entropy at each ^[t] is reduced for t>1, which verifies the effectiveness of our method. Specifically, the largest deterioration of Δ H(^[t]) happens at ^[3], which is also around the middle of the neural dynamics. Finally, we demonstrate the effect of Sec. <ref> by plotting the relative difference of intermediate states (^[t] and ^[t]) along the neural dynamics given a clean input and the perturbed one +Δ. Shown in Fig. <ref>-(c), when trained with the loss computed with random intermediates, the DEQ model exhibits neural dynamics with the less relative difference among the clean and the perturbed inputs than the baseline. This analysis also accounts for the superiority of the Sec. <ref> method in adversarial robustness, as shown in Table <ref>. §.§ Effect of T_f and R in Sec. <ref> In this section, we ablate the effect of T_f and R in the Sec. <ref> method. During the input entropy reduction process, T_f controls the intervention frequency of input updates along the neural dynamics, and R denotes the iteration number within each intervention of input entropy reduction. According to the results in Table <ref>, a larger T_f leads to relatively lower robustness. This indicates that the Sec. <ref> method would benefit from frequent input entropy reduction. Similarly, the method becomes more effective as R increases. As the time consumption is proportional to R and inversely proportional to T_f, we set R=20 and T_f=2 in our main experiments (see Appendix <ref> for time cost analysis). §.§ Effect of Sec. <ref> During Training In Fig. <ref>, we visualize the training process of each AT baseline and its improved version with the Sec. <ref> method. The robust accuracy reported in Fig. <ref> is evaluated at the final state ^⋆. It is witnessed that the Sec. <ref> method always improves over the AT baselines in robust accuracy. The robust accuracy results obtained by TRADES are always higher than those obtained by PGD-AT through the training process. This agrees with the conclusions for deep networks <cit.>. It is also observed that the PGD-AT framework leads to faster robustness overfitting, while this effect is not obvious in the TRADES experiments. Finally, it is noted that DEQ models are by default trained with Adam optimizer <cit.>, zero weight decay, and learning rate cosine decay <cit.>. This differs from the common practice in training robust deep networks, where the optimizer is usually SGD, with weight decay and early stopping after the first time the learning rate is decayed <cit.>. While these tricks have proven to be crucial in adversarial training <cit.>, we failed to implement similar techniques in DEQ training: For example, our initial experiments show that when setting weight decay to be 5e-4, the loss becomes NaN after about 20 training epochs with PGD-AT. We leave more effective and efficient adversarial training for DEQ models as future work. §.§ Quantitative Analysis for Prediction Entropy =-1 As demonstrated in Sec. <ref>, progressively reducing the predicted entropy of the input is beneficial to the regulation of its corresponding neural dynamics. While the improved performances (shown in different tables) have proved the effectiveness of the method, in this section, we conduct additional analysis to quantitatively compare the prediction entropies between a clean input and its perturbed counterpart. We conduct the comparison using different types of attacks and adversarial training configurations. 
In our analysis, the adversarial inputs are generated by PGD-10 with the 144 different intermediate attacks described in the setup of Sec. <ref>. Under a certain attack, for each clean input _j in the validation set 𝒱 and its perturbed counterpart _j, their corresponding equilibrium states are denoted as _j^⋆ and _j^⋆. We propose two metrics for the quantitative comparison of prediction entropy. For a certain attack, we calculate P, the percentage of clean inputs with prediction entropy lower than their perturbed counterparts. Formally, P = 1/|𝒱|∑_j1(H(_j) < H(_j^⋆)) × 100 %, where H(·) is the prediction entropy defined in Eq. (<ref>). We also calculate Δ H, the difference of the prediction entropy averaged in the validation set between the clean inputs and their perturbed counterparts: Δ H = 1/|𝒱|∑_j(H(_j) - H(_j^⋆)). We calculate a pair of P and Δ H for each attack. For the 144 Ps and the 144 Δ Hs, we list their average, their minimum, and their maximum value in Table <ref> and Table <ref>. According to the P statistics in Table <ref>, an average of over three-quarters of clean inputs have lower prediction entropy than their perturbed counterparts under all types of attacks. Furthermore, the Δ H statistics in Table <ref> show that the averaged prediction entropy of clean inputs in the validation set is always less than that of the perturbed inputs. These two quantitative findings again verify the viability of our entropy reduction method in Sec. <ref>. § RELATED WORK §.§ Training-Time Adversarial Defense =-1 Of all the training-time adversarial defense approaches, adversarial training has proven to be the most practical and effective technique for improving adversarial robustness <cit.>. However, AT only regulates the input-output behavior of neural models, leaving the internal neural dynamics under-supervised. The most related effort of explicit regulations along the entire neural dynamics is interval bound propagation (IBP) <cit.>. IBP is a technique from the certificated robustness field that envelopes the neural dynamics of deep networks with layer-wise linear functions for robustness guarantees. However, the complicated training procedure and the lack of scalability hinder its practical use. In our work, we exploit the structural uniquenesses of DEQs to impose explicit regulations along their neural dynamics. §.§ Test-Time Adversarial Defense Recently, several test-time defense techniques have been proposed to exploit additional computes during inference time for robustness improvement <cit.>. <cit.> conduct a thorough evaluation for test-time defenses. Our method is different from the previous works, as they focus on traditional deep networks, and usually calibrate only the output behavior of the model. In contrast, we progressively update the input along the forward pass to mount the neural dynamics of DEQ models to correct “orbits” in Sec. <ref>. The fundamental difference between our work and prior arts originates from the special design of DEQ models, as they directly cast the forward process as solving the fixed-point equation iteratively. §.§ Dynamical System Perspective for Neural Models <cit.> first proposes to interpret deep networks from a dynamical system perspective, which draws the connection between residual networks and the solution of an ODE with the forward Euler method. Since then, multiple types of novel neural models have been proposed, which directly model a dynamical system in their forward pass. 
Among them, neural ODEs <cit.> are integrated with continuous ODE, while DEQ models <cit.> are instantiated by discrete fixed-point iteration systems. Several efforts have been made in designing robust neural ODEs by drawing inspiration from control theory <cit.>. By comparison, for DEQ models, Jacobian regularization is proposed in <cit.> by regulating only the equilibrium state to improve training stability instead of robustness. In our work, we explicitly regulate the entire neural dynamics of DEQ models to improve the adversarial robustness. § CONCLUSION In this work, we propose to reduce the prediction entropy of intermediate states along the DEQ neural dynamics with progressive input updates. We also randomly select intermediate states to compute the loss function during adversarial training of DEQ models. Our work significantly outperforms previous works on improving DEQ robustness and even surpasses strong deep network baselines. Our work sheds light on explicitly regulating DEQ and other neural models from the perspective of neural dynamics. In the future, we will continue to exploit the special properties (single layer, fixed-point structure, neural dynamics, etc.) of DEQ models to design tailored adversarial defense strategies. We will also investigate the relationship between our methods and the inexact/approximated gradient proposed for implicit models <cit.>. We also leave the validation of our methods on larger benchmarks as future work. § ACKNOWLEDGMENT We thank all of the anonymous reviewers for their constructive suggestions. This work was supported by the National Key R&D Program of China (2022ZD0160502) and the National Natural Science Foundation of China (No. 61925601, 62276152, 62236011). icml2023 § EXPERIMENT DETAILS We follow to adopt the DEQ-Large architecture used in <cit.>, which has similar parameter counts as ResNet-18. More specifically, the DEQ cell we use is the multiscale DEQ-Large with 4 scales. The numbers of head channels for each scale are 14, 28, 56, and 112. The numbers of channels for each scale are 32, 64, 128, and 256. The channel size of the final layer is 1680. We use Broyden's method as the black-box solver, with N=8 forward solver iterations. Following <cit.>, the model is trained with Adam optimizer: the initial learning rate is 0.001 with cosine decay; the Nesterov momentum is 0.98; the weight decay is 0. We follow <cit.> and <cit.> to set batch size as 96. We pretrain the DEQ models with the truncated deep networks (with standard training) for a good initialization of θ for 16000 steps, and then conduct adversarial training, until the total of 210 epochs training finishes. Following <cit.>, we use the unrolling-based phantom gradient to train the DEQ models with 5 unrolling steps <cit.>. For PGD-AT training, the perturbation range is ϵ=8/255; the step size is α=2/255; the number of PGD steps during training is 10. Additionally, for TRADES training, the 1/λ=6. We select the best model weight with the top robust accuracy on the CIFAR-10 validation set under ready-made PGD-10 (at the final state after unrolling). After obtaining the weight, we follow <cit.> to leverage early-state defenses. In our experiments, we always use the last but one intermediate state as the early state for robustness evaluation. We leverage the unrolled intermediates method for intermediate attacks, as they form consistently stronger attacks than the simultaneous adjoint method. For the Sec. <ref> method, we set β=2/255, T_f=2, and R=10 in our main experiments. 
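For reference, the training configuration described in this appendix can be summarized in a single dictionary; the field names below are ours, and the values simply restate the text.

# Training configuration summarized from the text above (field names are ours).
deq_adv_training_config = {
    "architecture": {
        "model": "multiscale DEQ-Large (4 scales)",
        "head_channels": [14, 28, 56, 112],
        "channels": [32, 64, 128, 256],
        "final_channels": 1680,
        "forward_solver": {"name": "broyden", "iterations": 8},
    },
    "optimization": {
        "optimizer": "adam",
        "lr": 1e-3,
        "lr_schedule": "cosine",
        "nesterov_momentum": 0.98,   # as stated in the text
        "weight_decay": 0.0,
        "batch_size": 96,
        "pretrain_steps": 16000,
        "total_epochs": 210,
        "phantom_gradient_unroll_steps": 5,
    },
    "adversarial_training": {
        "pgd_at": {"eps": 8 / 255, "step_size": 2 / 255, "steps": 10},
        "trades": {"one_over_lambda": 6},
    },
    "entropy_reduction": {"beta": 2 / 255, "T_f": 2, "R": 10},
}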
All experiments are conducted on a single NVIDIA 3090 GPU. Our code is available at <https://github.com/minicheshire/DEQ-Regulating-Neural-Dynamics>. § THE ENTROPY IN DYNAMICAL SYSTEMS In this section, we provide a brief introduction to the entropy in dynamical systems. This introduction heavily relies on <cit.>. We draw connections between the concepts of the entropy in dynamical systems with DEQ models. Let (𝒳, d) be a metric space, and μ be a probability measure on it. Let α={X_1, ⋯, X_C} be a finite partition of 𝒳. Then the entropy of the partition is defined as H(α) = H({X_1, ⋯, X_C}) = -∑_i=1^Cμ(X_i)logμ(X_i). Implications: Interpreting 𝒳 as the space of ^[t] in DEQ models, the partition of 𝒳 is naturally constructed by the C classes in a classification task. μ can therefore be instantiated by the classification head in DEQ models. Let (f, μ) be an ergodic discrete dynamical system, with f:𝒳→𝒳 is the mapping from 𝒳 to 𝒳. We define the α-address of the n-orbit starting at x ∈𝒳 as ⋁_i=0^n-1f^-iα = {x: x ∈ X_i_0, fx ∈ X_i_1, ⋯, f^n-1x ∈ X_i_n-1} for some (i_0,i_1, ⋯, i_n-1). The metric entropy of f with partition α is then defined as h_μ(f, α) = lim_n →∞1/n H(⋁_i=0^n-1f^-iα). Implications: The n-orbit of {x, fx, ⋯, f^n-1x } coincides with the n-length neural dynamics of DEQ models we defined in Eq. <ref>. The α-address ⋁_i=0^n-1f^-iα corresponds to all xs that induces an n-length orbit traversing different subspaces of the partitions with the mapping f. Of all xs that comprise the α-address of the n-orbit, H(⋁_i=0^n-1f^-iα) characterizes the entropy of the address and roughly reflects how “chaotic” the f is under the partition α in the space 𝒳. While h_μ(f, α) is defined by taking the limit of n →∞, lim_n→∞ f^n-1x is exactly the equilibrium point of f starting with x. This suggests that h_μ(f, α) is defined over the entire orbit and concerned about the equilibrium state behavior. For x ∈𝒳, n ∈ℕ^+, ϵ>0, define B(x,n,ϵ) := { y ∈𝒳: d(f^ix, f^iy) < ϵ, 0≤ i < n }. With μ defined over partition α, we have h_μ (f, α) = lim sup_n→∞ -1/nlogμ B(x,n,ϵ). Specifically, if 𝒳 is a compact Riemann manifold with μ equivalent to the Riemannian measure, we have h_μ (f, α) = ∑_i max(λ_i, 0), where λ_is are the Lyapunov exponents of (f,μ). The proof is given by <cit.> and <cit.>. Implications: B(x,n,ϵ) indicates the “amount” of the neighbors of x that induce the trajectories close to that of x with f. This definition shares a similar idea with our derivation in Sec. <ref>, as we consider the deviation of the neural dynamics under clean or perturbed inputs. We want all the perturbed inputs to lie within B(x,n,ϵ) in Eq. <ref>, in this way all the orbits are regulated. This corresponds with an increase in μ B(x,n,ϵ), and from Eq. <ref> we realize that reducing the entropy h_μ (f, α) achieves this. From Eq. (<ref>) we know that reducing h_μ (f, α) is also equivalent to reducing the sum of the Lyapunov exponents of the system. Smaller Lyapunov exponents imply more stable dynamical systems <cit.>, which is equivalent to more robust neural models as demonstrated by <cit.>. Such correlations motivate us to observe and reduce prediction entropy along the neural dynamics in DEQ models. § ADDITIONAL ABLATION STUDIES §.§ Effect of λ in Eq. (<ref>) of Intermediate-State Attacks In this section, we compare the effect of setting λ to be 1.0 or 0.5 in Eq. (<ref>) as the intermediate-state attacks. We use the “TRADES+Secs.<ref> & <ref>” defense method for this study. Similar to Fig. 
<ref>-(a), we plot the lowest accuracy results under the attacks at each intermediate states along the neural dynamics in Fig. <ref>. It is obvious that λ=0.5 builds consistently stronger attacks than λ=1.0. We further list the strongest attacks at each intermediate state along the neural dynamics in Table <ref>. For λ=1.0, the best setting of K_a for each state ^[t] is 1. The overall strongest attack is formed by (7,1). It is noted that ^[7] is one iteration away from ^⋆ = ^[8]]. On the contrary, for λ=0.5, the strongest attack is in the middle of the neural dynamics, with (i,K_a)=(3,5), and K_a is larger than 1 under λ=0.5. It is inferred that λ=0.5 constructs intermediate attacks with more accurate gradient estimation: the estimated gradients become more accurate as the unrolling step K_a becomes larger, so that the corresponding attacks are stronger. §.§ Effect of the Jacobian Regularization Factor We also tried to train DEQ models by setting the Jacobian regularization factor proposed by <cit.> as 0.4, following <cit.>. However, shown in Fig. <ref>, we found that under PGD-AT, the training becomes unstable with suboptimal robustness performance. After increasing the factor to 2.0, we found the performance during training is improved. We suspect that a larger Jacobian regularization factor is required to stabilize the training PGD-AT. In contrast, when trained with TRADES, we found that the empirical difference between setting the factor to be 0.4 or 2.0 is marginal. It is therefore inferred that TRADES might implicitly impose more regularization during the adversarial training process, which might also account for the superiority of TRADES experiments. § RUNNING TIME ANALYSIS =-1 In this section, we compare the time cost of our methods with the vanilla adversarial training baseline for DEQ models. We first compare the training-time methods: the vanilla baseline and the Section <ref> method. The comparison is shown in Table <ref>. According to Table <ref>, our Sec.<ref> method is faster than the vanilla adversarial training. This is because we use random intermediate states for loss computation in Sec. <ref>. In this way, the fixed-point solver in the forward process runs fewer than N iterations. In contrast, the solver always runs for N iterations in the baseline. Next, we compare the running speed among different settings in the Sec. <ref> method during inference. Reported in Table <ref>, the time consumption of Sec. <ref> method is roughly proportional to R and inversely proportional to T_f. This coincides with our Algorithm <ref>. Comparing Table <ref> with Table <ref>, we set R=10 and T_f=2 to achieve the trade-off between inference speed and robustness performance.
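To illustrate how the damped unrolling in Eq. (<ref>) is used to estimate attack gradients at an intermediate state, here is a hedged sketch built on the toy cell and head from the earlier examples; the actual evaluation operates on the multiscale DEQ with Broyden's solver, so this is only a schematic of the (i, K_a, λ) attack construction.

import torch
import torch.nn.functional as F

def intermediate_attack_grad(cell, head, x, y, i, K_a, lam=0.5, num_iters=8):
    """Estimate input gradients by unrolling the i-th intermediate state for
    K_a damped steps: z_a <- (1 - lam) * z_a + lam * f_theta(z_a; x)."""
    x = x.detach().requires_grad_(True)
    z = x.new_zeros(x.shape[0], cell.lin_z.in_features)
    with torch.no_grad():                 # reach z^[i] without tracking gradients
        for _ in range(i):
            z = cell(z, x)
    z_a = z
    for _ in range(K_a):                  # differentiable damped unrolling
        z_a = (1 - lam) * z_a + lam * cell(z_a, x)
    loss = F.cross_entropy(head(z_a), y)
    return torch.autograd.grad(loss, x)[0]

# one PGD step built on the estimated gradient (eps and alpha are illustrative):
# x_adv = (x + alpha * grad.sign()).clamp(x - eps, x + eps)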
http://arxiv.org/abs/2306.04468v1
20230607144005
Refined parameters of the HD 22946 planetary system and the true orbital period of planet d
[ "Z. Garai", "H. P. Osborn", "D. Gandolfi", "A. Brandeker", "S. G. Sousa", "M. Lendl", "A. Bekkelien", "C. Broeg", "A. Collier Cameron", "J. A. Egger", "M. J. Hooton", "Y. Alibert", "L. Delrez", "L. Fossati", "S. Salmon", "T. G. Wilson", "A. Bonfanti", "A. Tuson", "S. Ulmer-Moll", "L. M. Serrano", "L. Borsato", "R. Alonso", "G. Anglada", "J. Asquier", "D. Barrado y Navascues", "S. C. C. Barros", "T. Bárczy", "W. Baumjohann", "M. Beck", "T. Beck", "W. Benz", "N. Billot", "F. Biondi", "X. Bonfils", "M. Buder", "J. Cabrera", "V. Cessa", "S. Charnoz", "Sz. Csizmadia", "P. E. Cubillos", "M. B. Davies", "M. Deleuil", "O. D. S. Demangeon", "B. -O. Demory", "D. Ehrenreich", "A. Erikson", "V. Van Eylen", "A. Fortier", "M. Fridlund", "M. Gillon", "V. Van Grootel", "M. Güdel", "M. N. Günther", "S. Hoyer", "K. G. Isaak", "L. L. Kiss", "M. H. Kristiansen", "J. Laskar", "A. Lecavelier des Etangs", "C. Lovis", "A. Luntzer", "D. Magrin", "P. F. L. Maxted", "C. Mordasini", "V. Nascimbeni", "G. Olofsson", "R. Ottensamer", "I. Pagano", "E. Pallé", "G. Peter", "G. Piotto", "D. Pollacco", "D. Queloz", "R. Ragazzoni", "N. Rando", "H. Rauer", "I. Ribas", "N. C. Santos", "G. Scandariato", "D. Ségransan", "A. E. Simon", "A. M. S. Smith", "M. Steller", "Gy. M. Szabó", "N. Thomas", "S. Udry", "J. Venturini", "N. Walton" ]
astro-ph.EP
[ "astro-ph.EP" ]
MTA-ELTE Exoplanet Research Group, 9700 Szombathely, Szent Imre h. u. 112, Hungary, [email protected] ELTE Gothard Astrophysical Observatory, 9700 Szombathely, Szent Imre h. u. 112, Hungary Astronomical Institute, Slovak Academy of Sciences, 05960 Tatranská Lomnica, Slovakia Physikalisches Institut, University of Bern, Gesellsschaftstrasse 6, 3012 Bern, Switzerland Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Dipartimento di Fisica, Universita degli Studi di Torino, via Pietro Giuria 1, I-10125, Torino, Italy Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal Astrophysics Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, UK Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria Observatoire Astronomique de l'Université de Genève, Chemin Pegasi 51, Versoix, Switzerland Centre for Exoplanet Science, SUPA School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, 8042 Graz, Austria INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy Department of Astrophysics, University of Vienna, Tuerkenschanzstrasse 17, 1180 Vienna, Austria Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany European Space Agency (ESA), European Space Research and Technology Centre (ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife, Spain Institut d'Estudis Espacials de Catalunya (IEEC), 08034 Barcelona, Spain Depto. de Astrofisica, Centro de Astrobiologia (CSIC-INTA), ESAC campus, 28692 Villanueva de la Cañada (Madrid), Spain Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France Université de Paris, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012, Bern, Switzerland Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA Leiden, The Netherlands Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, 43992 Onsala, Sweden Space sciences, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, 19C Allée du 6 Août, B-4000 Liège, Belgium Departamento de Astrofisica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain Institut de Ciencies de l'Espai (ICE, CSIC), Campus UAB, Can Magrans s/n, 08193 Bellaterra, Spain Aix Marseille Univ, CNRS, CNES, LAM, 38 rue Frédéric Joliot-Curie, 13388 Marseille, France IMCCE, UMR8028 CNRS, Observatoire de Paris, PSL Univ., Sorbonne Univ., 77 av. 
Denfert-Rochereau, 75014 Paris, France Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, 85748 Garching bei München, Germany Dipartimento di Fisica e Astronomia "Galileo Galilei", Universita degli Studi di Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123 Catania, Italy Astrophysics Group, Keele University, Staffordshire, ST5 5BG, United Kingdom Admatis, 5. Kandó Kálmán Street, 3534 Miskolc, Hungary Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, 1121 Budapest, Konkoly Thege Miklós út 15-17, Hungary Institut d'astrophysique de Paris, UMR7095 CNRS, Université Pierre & Marie Curie, 98bis blvd. Arago, 75014 Paris, France Centre for Mathematical Sciences, Lund University, Box 118, 22100 Lund, Sweden Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom Center for Astronomy and Astrophysics, Technical University Berlin, Hardenberstrasse 36, 10623 Berlin, Germany Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom INAF, Osservatorio Astrofisico di Torino, Via Osservatorio, 20, 10025 Pino Torinese TO, Italy Department of Space and Climate Physics, Mullard Space Science Laboratory, Holmbury St Mary RH5 6NT, UK Brorfelde Observatory, Observator Gyldenkernes Vej 7, DK-4340 Tølløse, Denmark Multi-planet systems are important sources of information regarding the evolution of planets. However, the long-period planets in these systems often escape detection. These objects in particular may retain more of their primordial characteristics compared to close-in counterparts because of their increased distance from the host star. HD 22946 is a bright (G=8.13 mag) late F-type star around which three transiting planets were identified via Transiting Exoplanet Survey Satellite (TESS) photometry, but the true orbital period of the outermost planet d was unknown until now. We aim to use the Characterising Exoplanet Satellite (CHEOPS) space telescope to uncover the true orbital period of HD 22946d and to refine the orbital and planetary properties of the system, especially the radii of the planets. We used the available TESS photometry of HD 22946 and observed several transits of the planets b, c, and d using CHEOPS. We identified two transits of planet d in the TESS photometry, calculated the most probable period aliases based on these data, and then scheduled CHEOPS observations. The photometric data were supplemented with ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) radial velocity data. Finally, a combined model was fitted to the entire dataset in order to obtain final planetary and system parameters. Based on the combined TESS and CHEOPS observations, we successfully determined the true orbital period of the planet d to be 47.42489 ± 0.00011 d, and derived precise radii of the planets in the system, namely 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. Due to the low number of radial velocities, we were only able to determine 3σ upper limits for these respective planet masses, which are 13.71 M_⊕, 9.72 M_⊕, and 26.57 M_⊕. We estimated that another 48 ESPRESSO radial velocities are needed to measure the predicted masses of all planets in HD 22946. We also derived stellar parameters for the host star. 
Planet c around HD 22946 appears to be a promising target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting because there are only a few similar confirmed exoplanets to date. Such objects are worth investigating in the near future, for example in terms of their composition and internal structure. Refined parameters of the HD 22946 system Z. Garai et al. Refined parameters of the HD 22946 planetary system and the true orbital period of planet dThis article uses data from CHEOPS programmes CH_PR110048 and CH_PR100031. Photometry and radial velocity data of HD 22946 are available at the CDS via anonymous ftp to <cdsarc.u-strasbg.fr> (130.79.128.5) or via <http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/>. Z. Garai<ref>,<ref>,<ref>, H. P. Osborn<ref>,<ref>, D. Gandolfi<ref>, A. Brandeker<ref>, S. G. Sousa<ref>, M. Lendl<ref>, A. Bekkelien<ref>, C. Broeg<ref>,<ref>, A. Collier Cameron<ref>, J. A. Egger<ref>, M. J. Hooton<ref>,<ref>, Y. Alibert<ref>, L. Delrez<ref>,<ref>, L. Fossati<ref>, S. Salmon<ref>, T. G. Wilson<ref>, A. Bonfanti<ref>, A. Tuson<ref>, S. Ulmer-Moll<ref>,<ref>, L. M. Serrano<ref>, L. Borsato<ref>, R. Alonso<ref>,<ref>, G. Anglada<ref>,<ref>, J. Asquier<ref>, D. Barrado y Navascues<ref>, S. C. C. Barros<ref>,<ref>, T. Bárczy<ref>, W. Baumjohann<ref>, M. Beck<ref>, T. Beck<ref>, W. Benz<ref>,<ref>, N. Billot<ref>, F. Biondi<ref>,<ref>, X. Bonfils<ref>, M. Buder<ref>, J. Cabrera<ref>, V. Cessa<ref>, S. Charnoz<ref>, Sz. Csizmadia<ref>, P. E. Cubillos<ref>,<ref>, M. B. Davies<ref>, M. Deleuil<ref>, O. D. S. Demangeon<ref>,<ref>, B.-O. Demory<ref>, D. Ehrenreich<ref>, A. Erikson<ref>, V. Van Eylen<ref>, A. Fortier<ref>,<ref>, M. Fridlund<ref>,<ref>, M. Gillon<ref>, V. Van Grootel<ref>, M. Güdel<ref>, M. N. Günther<ref>, S. Hoyer<ref>, K. G. Isaak<ref>, L. L. Kiss<ref>, M. H. Kristiansen<ref>, J. Laskar<ref>, A. Lecavelier des Etangs<ref>, C. Lovis<ref>, A. Luntzer<ref>, D. Magrin<ref>, P. F. L. Maxted<ref>, C. Mordasini<ref>, V. Nascimbeni<ref>, G. Olofsson<ref>, R. Ottensamer<ref>, I. Pagano<ref>, E. Pallé<ref>,<ref>, G. Peter<ref>, G. Piotto<ref>,<ref>, D. Pollacco<ref>, D. Queloz<ref>,<ref>, R. Ragazzoni<ref>,<ref>, N. Rando<ref>, H. Rauer<ref>,<ref>, I. Ribas<ref>,<ref>, N. C. Santos<ref>,<ref>, G. Scandariato<ref>, D. Ségransan<ref>, A. E. Simon<ref>, A. M. S. Smith<ref>, M. Steller<ref>, Gy. M. Szabó<ref>,<ref>, N. Thomas<ref>, S. Udry<ref>, J. Venturini<ref>, N. 
Walton<ref> Received September 15, 1996; accepted March 16, 1997 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION Multi-planet systems are important from many viewpoints. Not only are they susceptible of relatively straightforward confirmation as bona fide planets <cit.>, they also allow intra-planetary comparisons to be made for planets which formed under the same conditions; see for example <cit.>. The majority of the known multi-planet systems were found by space-based exoplanet transit surveys. This is because, while giant hot-Jupiters are relatively easy to observe with ground-based photometry, the detection of smaller planets, for example, Earths, super-Earths, and sub-Neptunes, which are typically found in multi-planet systems, requires the precise photometry of space-based observatories such as TESS <cit.>. Mutual gravitational interactions in some multi-planet systems can provide constraints on the planet masses through transit time variations (TTVs); see for example <cit.>. Alternatively, radial velocity (RV) observations are needed to put constraints on the masses of planets <cit.>. Even where masses cannot be determined, mass upper limits can provide proof that the studied objects are of planetary origin; see for example <cit.>, <cit.>, or <cit.>. Mass determination can then help constrain the internal structure of the planet bodies, and break degeneracies in atmospheric characterisation follow-up studies. If precise planet radii are also determined from transit photometry, this allows the planet internal density to be calculated and the planetary composition to be estimated; see for example <cit.> and <cit.>. Precise planetary parameters also allow the planets to be put in the context of population trends, such as the radius <cit.> and density <cit.> valleys. 
Long-period planets in multiple-planet systems often escape detection, especially when their orbital periods are longer than the typical observing duration of photometric surveys (e.g. ∼ 27 d for TESS). However, detecting such planets is also important. For example, the increased distance from their host stars means that, when compared with close-in planets, they may retain more of their primordial characteristics, such as unevaporated atmospheres <cit.> or circumplanetary material <cit.>. Due to the limited observing duration of the TESS primary mission, which observed the majority of the near-ecliptic sectors for only 27 days, planets on long periods produce only single transits. However, thanks to its extended mission, TESS re-observed the same fields two years later, and in many cases was able to re-detect a second transit; see for example <cit.>. These `duotransit' cases require follow-up in order to uncover the true orbital period due to the gap, which causes a set of aliases, P ∈ (t_tr,2 - t_tr,1)/(1,2,3,…,N_max), where t_tr,1 and t_tr,2 are the first and the second observed mid-transit times, respectively. The longest possible period is the temporal distance between the two mid-transit times, P_max = (t_tr,2 - t_tr,1), and the shortest possible period is bounded by the non-detection of subsequent transits. In addition to ground-based telescopes, the CHEOPS space observatory <cit.> can be used to follow-up duotransit targets and to determine their true orbital periods and other characteristics. For example, the periods of two young sub-Neptunes orbiting BD+40 2790 (TOI-2076, TIC-27491137) were found using a combination of CHEOPS and ground-based photometric follow-up observations <cit.>. Furthermore, these combined observations uncovered the TTVs of two planets, and also improved the radius precision of all planets in the system. CHEOPS observations also recovered orbital periods of duotransits in HIP 9618 <cit.>, TOI-5678 <cit.>, and HD 15906 <cit.> systems. In the present study, we investigated the HD 22946 system with a similar aim. HD 22946 (TOI-411, TIC-100990000) is a bright (G = 8.13 mag) late F-type star with three transiting planets. The planetary system was discovered and validated only recently by <cit.>; hereafter C22. The authors presented several parameters of the system, including the radii and mass limits of the planets. They found that planet b is a super-Earth with a radius of 1.72 ± 0.10 R_⊕, while planets c and d are sub-Neptunes with radii of 2.74 ± 0.14 R_⊕ and 3.23 ± 0.19 R_⊕, respectively. The 3σ upper mass limits of planets b, c, and d were determined —based on ESPRESSO spectroscopic observations (see Sect. <ref>)— to be 11 M_⊕, 14.5 M_⊕, and 24.5 M_⊕, respectively. As TESS recorded several transits during observations in sector numbers 3, 4, 30, and 31, the discoverers easily derived the orbital periods of the two inner planets, b and c, which are about 4.040 d and 9.573 d, respectively. The orbital period of planet d was not found by C22. The authors determined its presence through a single transit found in sector number 4 and obtained its parameters from this single transit event. 
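The alias grid implied by the relation above can be written down directly; the short sketch below uses the two TESS mid-transit times of planet d quoted later in this paper, while the lower period bound (set by the non-detection of further transits) is an illustrative value of ours rather than the cut actually adopted, and no attempt is made to reproduce the probability weighting applied by MonoTools.

import numpy as np

# Duotransit period aliases P = (t_tr,2 - t_tr,1) / N for N = 1, ..., N_max.
t_tr1 = 2458425.1657          # BJD_TDB, TESS sector 4 transit of planet d
t_tr2 = 2459136.5357          # BJD_TDB, TESS sector 30 transit of planet d
p_max = t_tr2 - t_tr1         # longest possible period (N = 1)
p_min = 20.0                  # illustrative lower bound in days (assumption)
n_max = int(np.floor(p_max / p_min))
aliases = p_max / np.arange(1, n_max + 1)
for n, p in enumerate(aliases, start=1):
    print(f"N = {n:2d}: P = {p:9.4f} d")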
Its depth and the host brightness make planet d easily detectable with CHEOPS, and therefore HD 22946 was observed several times with this instrument within the Guaranteed Time Observations (GTO) programmes CH_PR110048 and CH_PR100031, with the main scientific goals being to uncover the true orbital period of planet d and to refine the parameters of the HD 22946 system based on CHEOPS and TESS observations via joint analysis of the photometric data, supplemented with ESPRESSO spectroscopic observations of HD 22946. The present paper is organised as follows. In Sect. <ref>, we provide a brief description of observations and data reduction. In Sect. <ref>, we present the details of our data analysis and our first results, including stellar parameters, period aliases of HD 22946d from the TESS data, and a search for TTVs. Our final results based on the combined TESS, CHEOPS, and RV model are described and discussed in Sect. <ref>. We summarise our findings in Sect. <ref>. § OBSERVATIONS AND DATA REDUCTION §.§ TESS data HD 22946 was observed during four TESS sectors: numbers 3, 4, 30, and 31 (see Table <ref>). The time gap between the two observing seasons is almost two years. These TESS data were downloaded from the Mikulski Archive for Space Telescopes[See <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>.] in the form of Presearch Data Conditioning Simple Aperture Photometry (PDCSAP) flux. These data, containing 61 987 data points, were obtained from two-minute integrations and were initially smoothed by the PDCSAP pipeline. This light curve is subjected to more treatment than the simple aperture photometry (SAP) light curve, and is specifically intended for detecting planets. The pipeline attempts to remove systematic artifacts while keeping planetary transits intact. The average uncertainty of the PDCSAP data points is 310 ppm. During these TESS observing runs, 23 transits of planet b were recorded, and the transit of planet c was observed eight times in total (see more details in Table <ref>). As in C22, we also initially recognised a transit-like feature in the sector number 4 data at t_tr,1 = 2 458 425.1657 BJD_TDB through visual inspection of the light curve. Given 65%–80% of single transits from the TESS primary mission will re-transit in the extended mission sectors <cit.>, we subsequently visually inspected the light curve once the TESS year 3 data were available and found a second dip at t_tr,2 = 2 459 136.5357 BJD_TDB in the sector number 30 data with near-identical depth and duration. Given the high prior probability of finding a second transit, the close match in transit shape between events, and the high quality of the data (i.e. minimal systematic noise elsewhere in the light curve), we concluded that this signal is a bona fide transit event and that the transits in sector numbers 4 and 30 are very likely caused by the same object, that is, by planet d. Outliers were cleaned using a 3σ clipping, where σ is the standard deviation of the light curve. With this clipping procedure, we discarded 300 data points out of 61 987, which is ∼ 0.5% of the TESS data. Subsequently, we visually inspected the dataset in order to check the effect of the outlier removal, which we found to be reasonable. As TESS uses as time stamps Barycentric TESS Julian Date (i.e. BJD_TDB - 2 457 000.0), during the next step we converted all TESS time stamps to BJD_TDB. §.§ CHEOPS data HD 22946 was observed five times with the CHEOPS space telescope. 
This is the first European space mission dedicated primarily to the study of known exoplanets. It consists of a telescope with a mirror of 32 cm in diameter based on a Ritchey-Chrétien design. The photometric detector is a single-CCD camera covering the wavelength range from 330 to 1100 nm with a field of view of 0.32 deg^2. The payload design and operation have been optimised to achieve ultra-high photometric stability, achieving a photometric precision of 20 ppm on observations of a G5-type star in 6 hours, and 85 ppm observations of a K5-type star in 3 hours <cit.>. The CHEOPS observations were scheduled based on the existing TESS observations of planets b and c, and mainly based on the observed transit times of planet d (see Sect. <ref>). The marginal probability for each period alias of planet d was calculated using the package (see Sect. <ref>). We were not able to observe all the highest-probability aliases, because some were not visible during the two-week period of visibility. Within the program number CH_PR110048, we therefore planned to observe the three highest-probability aliases of planet d with CHEOPS, but due to observability constraints and conflicts with other observations, only two visits[A visit is a sequence of successive CHEOPS orbits devoted to observing a given target.] of planet d aliases were scheduled. Its true orbital period was confirmed during the second observation. The remaining three visits were scheduled in the framework of the program number CH_PR100031. Based on these CHEOPS observations, three transits of planet b were recorded during visits 1, 3, and 5, the transit of planet c was observed twice during visits 2 and 4, and a single transit of planet d (in multiple transit feature with planet c) was detected during the CHEOPS visit 4. Further details about these observations can be found in Table <ref>. From the CHEOPS detector, which has 1024 × 1024 pixels, a 200 × 200 pixels subarray is extracted around the target point-spread function (PSF), which is used to compute the photometry. This type of photometry product was processed by the CHEOPS Data Reduction Pipeline (DRP) version 13.1.0 <cit.>. It performs several image corrections, including bias-, dark-, and flat-corrections, contamination estimation, and background-star correction. The pipeline produces four different light-curve types for each visit, but we initially analysed only the decontaminated `' type, where the aperture radius is automatically set based on the signal-to-noise ratio (S/N). In addition to the subarrays, there are imagettes available for each exposure. The imagettes are frames of 30 pixels in radius centred on the target, which do not need to be co-added before download owing to their smaller size. We used a tool specifically developed for photometric extraction of imagettes using point-spread function photometry, called [See <https://github.com/alphapsa/PIPE>.]; see for example <cit.>. The photometry has a S/N comparable to that of photometry, but has the advantage of shorter cadence, and therefore we decided to use this CHEOPS product in this work. The average uncertainty of the data points is 160 ppm. The CHEOPS observations were processed using the dedicated data decorrelation and transit analysis software called [See <https://github.com/pmaxted/pycheops>.] <cit.>. This package includes downloading, visualising, and decorrelating CHEOPS data, fitting transits and eclipses of exoplanets, and calculating light-curve noise. 
We first cleaned the light curves from outlier data points using the pycheops built-in function clip_outliers, which removes outliers from a dataset by calculating the mean absolute deviation (MAD) from the light curve following median smoothing, and rejects data greater than the smoothed dataset plus the MAD multiplied by a clipping factor. The clipping factor equal to five was reasonable in our cases, which we checked visually. With this clipping procedure, we discarded 30 data points out of 3195, which is ∼ 0.9% of the CHEOPS data. The next step was the extraction of the detrending parameters. During this procedure, the software gives a list of the parameters necessary for the detrending. The most important decorrelation is subtraction of the roll-angle effect. In order to keep the cold plate radiators facing away from the Earth, the spacecraft rolls during its orbit. This means that the field of view rotates around the pointing direction. The target star remains stationary within typically 1 pixel, but the rotation of the field of view produces a variation of its flux from the nearby sources in phase with the roll angle of the spacecraft <cit.>. The extracted detrending parameters were co-fitted with the transit model (see Sect. <ref>). §.§ ESPRESSO/VLT data We acquired 14 high-resolution spectra of the host star HD 22946 using the ESPRESSO spectrograph <cit.> mounted at the 8.2 m Very Large Telescope (VLT) at Paranal Observatory (Chile). The observations were carried out between 10 February 2019 and 17 March 2019 under the observing program number 0102.C-0456 (PI: V. Van Eylen) and within the KESPRINT[See <https://kesprint.science/>.] project. We used the high-resolution (HR) mode of the spectrograph, which provides a resolving power of R ≈ 134 000. We set the exposure time to 600 s, leading to a S/N per pixel at 650 nm ranging between 120 and 243. Daytime ThAr spectra and simultaneous Fabry-Perot exposures were taken to determine the wavelength solution and correct for possible nightly instrumental drifts, respectively. We reduced the ESPRESSO spectra using the dedicated data-reduction software and extracted the RVs by cross-correlating the échelle spectra with a G2 numerical mask. We list the ESPRESSO RV measurements in Table <ref>. The average uncertainty of the RV data points is ∼ 0.00015 km s^-1. We co-added the individual ESPRESSO spectra prior to carrying out the spectroscopic analysis presented in Sect. <ref>. To this aim, we Doppler-shifted the data to a common reference wavelength by cross-correlating the ESPRESSO spectra with the spectrum with the highest S/N. We finally performed a S/N-weighted co-addition of the Doppler-shifted spectra, while applying a sigma-clipping algorithm to remove possible cosmic-ray hits and outliers. The co-added spectrum has a S/N of ∼ 900 per pixel at 650 nm. § DATA ANALYSIS AND FIRST RESULTS §.§ Stellar parameters The spectroscopic stellar parameters (the effective temperature T_eff, the surface gravity log g, the microturbulent velocity v_mic, and the metallicity [Fe/H]; see Table <ref>) were derived using the and codes, following the same methodology as described in <cit.>, <cit.>, and <cit.>. We used the latest version of the code[The last version, v2, can be downloaded at <https://github.com/sousasag/ARES>.] <cit.> to measure the equivalent widths of iron lines on the combined ESPRESSO spectrum. We used a minimisation procedure to find ionisation and excitation equilibrium and converge to the best set of spectroscopic parameters. 
This procedure makes use of a grid of Kurucz model atmospheres <cit.> and the MOOG radiative transfer code <cit.>. To derive the radius of the host star HD 22946, we used a Markov-Chain Monte Carlo (MCMC) modified infrared flux method. This enables us to calculate the bolometric flux using stellar atmospheric models defined by our spectral analysis to build spectral energy distributions (SEDs) that are compared with broadband fluxes and uncertainties from the most recent data releases for the following bandpasses: Gaia G, G_BP, and G_RP, 2MASS J, H, and K, and WISE W1 and W2 <cit.>. From the bolometric flux, we then determine the stellar effective temperature and angular diameter; the latter is converted to a radius using the offset-corrected Gaia parallax <cit.>. We used Bayesian model averaging of the ATLAS <cit.> and PHOENIX <cit.> catalogues to produce a weighted average posterior distribution of the stellar radius in order to account for uncertainties in stellar atmospheric modelling. We find a value of R_s=1.117±0.009 R_⊙, which is in 3σ agreement with the value of 1.157 ± 0.025 R_⊙ presented by the discoverers. We finally determined the stellar mass M_s and stellar age t_s using two different sets of stellar evolutionary models, namely PARSEC[PAdova and TRieste Stellar Evolutionary Code: <http://stev.oapd.inaf.it/cgi-bin/cmd>] v1.2S <cit.> and CLES (Code Liègeois d'Évolution Stellaire), see <cit.>. More specifically, we employed the isochrone-placement algorithm developed by <cit.> to interpolate the input parameters (T_eff, [Fe/H], R_s) within pre-computed grids of PARSEC v1.2S isochrones and tracks to derive a first pair of mass and age. A second pair of mass and age values, instead, was retrieved by inputting T_eff, [Fe/H], and R_s directly in the CLES code, which generates the best-fit stellar evolutionary track following the Levenberg-Marquardt minimisation scheme, as described in <cit.>. After carefully checking the mutual consistency of the two respective pairs of outcomes through the χ^2-based methodology presented in <cit.>, we finally merged (i.e. summed) the two M_s and t_s results and obtained M_s=1.098 ± 0.040 M_⊙ and t_s=2.5 ± 1.0 Gyr. The mass parameter value of the host star agrees within the uncertainty with the value provided in the discovery paper, which is 1.104 ± 0.012 M_⊙. However, the planet host seems to be younger than previously presented by C22. The discoverers obtained a value of 5.0 ± 1.0 Gyr. Further parameter values from this work are compared with the discovery-paper values in Table <ref>. §.§ Period aliases of HD 22946d from the TESS data In order to determine each possible period alias and to schedule CHEOPS observations of planet d, we first performed a period analysis of the available TESS data. For this purpose, we used the MonoTools package[See <https://github.com/hposborn/MonoTools>.] <cit.>, which is able to model transit light curves in the case of multiple transits, duotransits, and monotransits, as well as multiple systems with combinations of such candidates, with both radial velocities and transit photometry. The package calculates a marginalised probability distribution across all allowed aliases for a given transit model by combining priors for each alias. The probabilities are estimated based on two major assumptions, namely that short-period orbits are highly favoured over long-period ones due to a combination of geometric probability and window function, and that planets in multi-planet systems have low eccentricities <cit.>.
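The alias bookkeeping itself is simple: any viable period must divide the interval between the two detected TESS transits of planet d an integer number of times. A short sketch using the two mid-transit times quoted in the next paragraph is given below; the probability weighting that MonoTools applies on top of this enumeration (geometric and window-function priors) is not reproduced here:

```python
import numpy as np

# The two TESS mid-transit times of planet d (BJD_TDB) quoted in the text.
t1, t2 = 2458425.1657, 2459136.5357
dt = t2 - t1  # ~711.37 d between the two detected transits

# Allowed aliases are P_n = dt / n for integer n; MonoTools then assigns
# each alias a probability combining the priors described above.
for n in range(1, 40):
    P = dt / n
    if 35.0 < P < 55.0:
        print(f"n = {n:2d}  ->  P = {P:7.4f} d")
# n = 17 and n = 15 give ~41.85 d and ~47.42 d, i.e. the two aliases that
# were followed up with CHEOPS, the latter being the confirmed period.
```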
More details about this software can be found in <cit.>. The TESS data described in Sect. <ref> were used as input for the MonoTools fitting procedure. In the case of planet b, we set as input parameters the reference mid-transit time of T_c = 2 458 385.7318 BJD_TDB, the orbital period of P_orb = 4.040330 ± 0.000010 d, the transit duration (transit width) of W = 3.4 hr, and the transit depth of D = 134 ppm. In the case of planet c, the inputs were T_c = 2 458 386.1878 BJD_TDB, P_orb = 9.573117 ± 0.000020 d, W = 3.8 hr, and D = 389 ppm. For planet d, we set as input parameters the two mid-transit times detected by TESS, namely t_tr,1 = 2 458 425.1657 BJD_TDB and t_tr,2 = 2 459 136.5357 BJD_TDB, the transit duration of W = 6.5 hr, and the transit depth of D = 478 ppm. These parameters were calculated from the TESS data alone. The orbital period aliases of planet d with a probability of p > 1% are listed in Table <ref>. The software forecasted that a transit of planet d with the orbital period alias number 2 would take place on 25 October 2021, with a mid-transit time of T_c = 2 459 513.1441 BJD_TDB. This forecasted event was observed during the third CHEOPS visit (see Table <ref>), but the expected transit of planet d did not happen; only the transit of planet b was recorded at that time. After this observation, we were able to exclude the period alias of P = 41.8454 d from the list of possible aliases. The next forecast predicted a transit of planet d on 28 October 2021, with a mid-transit time of T_c = 2 459 515.9338 BJD_TDB, which means that, in this case, alias number 4 (see Table <ref>) was preferred as its true orbital period. This forecasted event was observed with CHEOPS during its fourth visit. This time, the transit of planet d was successfully detected together with a transit of planet c, confirming that the period alias of P = 47.4248 d is the true orbital period of planet d. This result also confirms that the second transit-like feature of planet d, observed by TESS in sector number 30, was a real transit event and not an instrumental artifact as considered by C22. Alternatively, the dip observed at 2 459 136.5357 BJD_TDB may have been a mixture of instrumental effects and the transit of planet d. With this knowledge of the true orbital period of planet d, we were able to combine CHEOPS and TESS photometric observations and RV measurements in order to improve the orbital and planetary parameters of the HD 22946 system, which were previously obtained only from the TESS and RV data by the discoverers. §.§ CHEOPS, TESS, and RV combined model In order to produce accurate planetary parameters for all three planets, we built a combined model using all available data, that is, TESS photometry (described in Sect. <ref>), CHEOPS photometry (described in Sect. <ref>), and ESPRESSO RVs (described in Sect. <ref>). The combined model was built using the pymc3 package[See <https://pypi.org/project/pymc3/>.] <cit.>, which performs Hamiltonian Monte Carlo (HMC) sampling, with Keplerian orbits modelled with the exoplanet package[See <https://pypi.org/project/exoplanet/>.] <cit.>. We used Gaussian processes (GPs) to model the stellar variability present in the TESS light curve, opting for a simple harmonic oscillator (SHO) kernel implemented in the package of <cit.> and a quality factor Q=1/√(2), as is common for quasi-periodic stellar variability. In order to speed up sampling, we binned the TESS data to 30-minute bins far from transits, keeping the 2-minute data near transits.
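The stellar-variability GP just described can be sketched as follows. The text does not name the GP implementation explicitly, so celerite2 is used here purely for concreteness, and the hyperparameter values are placeholders rather than the fitted ones:

```python
import numpy as np
import celerite2
from celerite2 import terms

def sho_gp_loglike(time, flux, flux_err, sigma, rho):
    """Log-likelihood of a simple-harmonic-oscillator (SHO) GP with
    Q = 1/sqrt(2), the quasi-periodic kernel used for the TESS stellar
    variability. sigma (flux units) and rho (days) are placeholder
    hyperparameters, not the fitted values."""
    kernel = terms.SHOTerm(sigma=sigma, rho=rho, Q=1.0 / np.sqrt(2.0))
    gp = celerite2.GaussianProcess(kernel, mean=0.0)
    gp.compute(time, yerr=flux_err)
    return gp.log_likelihood(flux)

# Example with synthetic, mean-subtracted data spanning one TESS sector:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 27.0, 1000))
y = 2e-4 * np.sin(2 * np.pi * t / 3.5) + 1e-4 * rng.standard_normal(t.size)
print(sho_gp_loglike(t, y, 1e-4 * np.ones_like(t), sigma=2e-4, rho=3.5))
```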
As we have reasonable prior knowledge from theoretical analyses of the expected stellar limb-darkening (LD) parameters for HD 22946, we used these as priors in the analysis. We used the quadratic LD law and interpolated tables of coefficients calculated for the TESS <cit.> and CHEOPS <cit.> passbands using the derived stellar parameters of T_eff = 6169 K and log g = 4.47 (cgs). In order to guard against systematic errors, we inflated the σ of each parameter prior to 0.1. Even though the PIPE light curves for HD 22946 have fewer systematic features than the DRP light curves, they can still include flux variations due to the influence of various external factors. Therefore, we can improve the light curve by decorrelating the flux data against metadata generated for the instrument and target. To decipher which decorrelation vectors provide improvement, we ran an initial model for each CHEOPS visit using all available ancillary data: sin and cos of roll angle, background flux, x and y centroid positions, onboard temperature, and time (which also fits short-timescale stellar variability). These parameters are normalised to have μ=0.0 and σ=1.0, and the decorrelation parameters are given normal priors with μ=0.0 and σ set by the root-mean-square (RMS) noise of each CHEOPS visit. For each visit model, we also included parameters for any planetary transits present in order to ensure the transits would not bias the model. After HMC sampling, we assessed each decorrelation parameter using the average and standard deviation, keeping only those parameters with a Bayes factor of BF > 1. Despite this detrending, shorter-timescale variation can also be present as a function of roll angle (φ). Pure detrending against sin and cos of roll angle removes the largest-amplitude systematic trends at low frequencies. These are those closest in timescale to the transit feature, and so a simpler detrending technique for such timescales guards against over-fitting of the transit. However, the CHEOPS light curve typically also contains systematic noise correlated with roll angle that is at a lower amplitude and higher frequency. This is therefore not adequately removed by simple sin and cos decorrelation. It is this noise that a more flexible GP is better able to model. We therefore also included a GP to model the variation of flux with roll angle. To do this, we first found any potential large jumps in φ and made sure the time series was continuous between these jumps (i.e. by moving the zero point and `wrapping around'). We then transformed the input data such that they are continuous in x, by sorting by φ rather than by time. Once again, we used an SHO kernel with the quality factor Q set to 1/√(2). As we expected the morphology of the variations to be preserved for all CHEOPS visits, we used a single shared kernel. We found that the linear decorrelation is the most important, improving the log likelihood by ∼1400, while the GP is responsible for a further improvement of ∼450, which means that the use of a GP to model the roll-angle flux behaviour is well justified. As multi-planet systems typically have low eccentricities e <cit.>, and we lack the high number of RVs capable of resolving any differences in e, we chose to fit only circular orbits. In order to guard against unphysical negative values, we used broad log-normal priors for the key transit and RV amplitude parameters, that is, for R_p/R_s (planet-to-star radius ratio) and K (RV semi-amplitude). The quantities derived in Sect. 
<ref> are used as priors on the stellar parameters in the model. For all datasets (CHEOPS, TESS, and ESPRESSO), we included a jitter term using a wide log-normal prior. We then sampled the combined model using a sampling function specifically written for astrophysical applications, which allows us to group independent dataset parameters (e.g. the CHEOPS visit-specific decorrelation parameters) together, thereby greatly speeding up the sampling. We used ten chains, tuning each for 1300 steps before sampling for a further 1800, resulting in 18 000 unique samples. The samples have effective sample sizes in the thousands, and the Gelman-Rubin statistics are below 1.01 for all parameters, suggesting that they are sufficiently uncorrelated and unbiased. The full list of fitted GP hyperparameters and detrending parameters with the corresponding best-fitting values can be found in Appendix <ref>. The best-fitting and derived parameters of the system are described and discussed in Sect. <ref>. §.§ Search for transit-timing variations In order to look for potential TTVs, we also ran a combined model with unconstrained timing for each planetary transit, as well as an independent analysis using the allesfitter software[See <https://www.allesfitter.com/home>.] <cit.>, applying a nested-sampling fit. Although C22 already performed such an analysis and found no obvious sign of TTVs in the system, we repeated this procedure, but in this case using the CHEOPS data as well. This mainly means that we included three transits of planet d in the analysis and used a longer time baseline. We used the same dataset as in Sect. <ref>, which was co-fitted with a GP using the same kernel in both cases. All planetary and system parameters were fixed at the values derived previously; only the GP hyperparameters, the detrending parameters, and the observed-minus-calculated (O-C) parameters for the individual mid-transit times were fitted. Both solutions are consistent with a linear ephemeris, which means we did not find any indication of a quadratic trend in the data, in agreement with the conclusion made by the discoverers. As an illustration, the obtained O-C diagram of the mid-transit times for planets b, c, and d is depicted in Fig. <ref>. We can see that the O-C values are scattered around O-C = 0.0 d, which means that no significant TTVs are present in the system. § FINAL RESULTS AND DISCUSSION The best-fitting and derived parameters from the combined model are listed in Table <ref>, and the model posteriors of the host star are summarised in Appendix <ref>. The fitted TESS light curves from sector numbers 3, 4, 30, and 31 are depicted in the panels of Figs. <ref> and <ref>. The individual CHEOPS observations overplotted with the best-fitting models are shown in the panels of Fig. <ref>. The RV observations fitted with a spectroscopic orbit are depicted in Fig. <ref>. Here, we present new ephemerides of the planetary orbits, which we calculated based on the combined model. Thanks to the combined TESS and CHEOPS observations, we were able to improve the reference mid-transit times and the orbital periods of the planets compared to the discovery values. C22 derived the orbital period parameter values of P_orb,b = 4.040301^+0.000023_-0.000042 d and P_orb,c = 9.573096^+0.000026_-0.000023 d, and expected an orbital period of P_orb = 46 ± 4 d for planet d, which was estimated based on the transit duration and depth along with stellar mass and radius through Kepler's third law, assuming circular orbits.
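A duration-based period estimate of this kind can be sketched as follows, using the transit width and depth of planet d quoted in Sect. <ref> and the stellar parameters derived in this work. The impact parameter is unknown at this stage, so two assumed values are shown to illustrate how sensitive the estimate is to that choice:

```python
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8    # kg, m

def period_from_duration(t_dur_hr, depth_ppm, m_s_msun, r_s_rsun, b=0.0):
    """Estimate the orbital period from transit duration and depth for a
    circular orbit, inverting
    T_dur ~ (P/pi) * (R_s/a) * sqrt((1+k)^2 - b^2), with
    a = (G M_s P^2 / 4 pi^2)^(1/3) and k = sqrt(depth)."""
    T = t_dur_hr * 3600.0
    k = np.sqrt(depth_ppm * 1e-6)
    chord = np.sqrt((1.0 + k) ** 2 - b ** 2)
    const = (G * m_s_msun * M_SUN / (4.0 * np.pi ** 2)) ** (1.0 / 3.0)
    P = (T * np.pi * const / (r_s_rsun * R_SUN * chord)) ** 3
    return P / 86400.0  # days

# Planet d: W = 6.5 hr, D = 478 ppm, with M_s and R_s from this work.
for b in (0.0, 0.5):   # assumed impact parameters, for illustration only
    print(f"b = {b}: P ~ {period_from_duration(6.5, 478.0, 1.098, 1.117, b):.0f} d")
# The two assumed impact parameters give roughly 34 d and 51 d, bracketing
# both the 46 +/- 4 d estimate of C22 and the confirmed 47.4 d period.
```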
We confirmed this prediction, finding an orbital period for planet d of P_orb = 47.42489 ± 0.00011 d. The improved ratios of the orbital periods are P_orb,c/P_orb,b = 2.37 and P_orb,d/P_orb,c = 4.95. Based on the Kepler database, adjacent planet pairs in multiple systems show a broad overall peak between period ratios of 1.5 and 2, followed by a declining tail towards larger period ratios. In addition, there appears to be a sizeable peak just interior to the period ratio 5 <cit.>; therefore, the period ratios in HD 22946 are consistent with these statistics, and the seemingly large orbital gap between planets c and d is not anomalous. In the combined model, we determined the impact parameter b, which is the projected distance of the planet from the centre of the stellar disk at mid-transit, in units of R_s. Converting these values to orbital inclination angles, we obtain i = 88.90^+0.16_-0.05 deg, i = 88.52^+0.08_-0.07 deg, and i = 89.54^+0.02_-0.03 deg for planets b, c, and d, respectively. For comparison, we note that the corresponding discovery values are i_b = 88.3^+1.1_-1.2 deg and i_c = 88.57^+0.86_-0.53 deg. The inclination angle of planet d was not determined by C22. According to the improved parameter values, it seems that only the orbits of planets b and c are well aligned. Planet d is probably not in the same plane as planets b and c. Based on the combined TESS and CHEOPS photometry observations, we redetermined the radii of the planets, which are 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. The CHEOPS observations add clear value: compared to the corresponding parameter values presented in C22 (R_p,b = 1.72 ± 0.10 R_⊕, R_p,c = 2.74 ± 0.14 R_⊕, and R_p,d = 3.23 ± 0.19 R_⊕), there is a noticeable improvement in radius precision. Using TESS and CHEOPS photometry observations, the uncertainties on the planet radius parameter values were decreased by ∼ 50%, 68%, and 61% for planets b, c, and d, respectively. We also note that the parameter values from this work are in stark contrast to those derived by C22; these authors found significantly larger radii, that is, larger by ∼ 21%, 15%, and 19% for planets b, c, and d, respectively. We believe this may be due to a misunderstanding of the relevant function in the fitting software used by C22: the function requires the planetary radius R_p in solar radii rather than the planet-to-star radius ratio R_p/R_s. When misused, the result is an inflation of the R_p/R_s and R_p values by a factor of R_s/R_⊙, which in this case is a factor of about 15%–21%. This mistake can be seen most clearly in C22, when comparing the models shown in Figure 5 with the implied depths in Table 4 (likely derived from the radius ratio), which are inflated by this factor. Such a mistake was also evident during the reanalysis of the BD+40 2790 (TOI-2076) system <cit.>. According to the radius valley at ∼ 1.5 - 2.0 R_⊕, which separates super-Earths and sub-Neptunes <cit.>, and based on the refined planet radii, we find that planet b is a super-Earth, while planets c and d are similar in size and are sub-Neptunes, in agreement with C22. It is well known that small exoplanets have a bimodal radius distribution separated by the radius valley. Potential explanations focus on atmospheric-escape-driven mechanisms, such as photo-evaporation; see for example <cit.>.
These models show that planets with radii below 1.5 R_⊕ initially had hydrogen/helium atmospheres but ultimately lost them due to atmospheric escape, while those just above 2.0 R_⊕ retained hydrogen/helium atmospheres with masses of ∼ 1% of the core mass. Having the HD 22946 planets on either side of the valley means that planet b could be a photo-evaporated version of planets c and d. Recently, <cit.> presented a new approach, arguing that the density of planets might provide more information than planet radii alone and proposing that a density gap separates rocky from water-rich planets. For M-dwarf systems, these authors found that rocky planets form within the ice line, while water worlds form beyond the ice line and migrate inwards. Given that theoretical models predict similar results for stars of other types, this scenario could also be possible in the case of the planets orbiting HD 22946. Due to the low number of RVs, here we present only the 3σ upper limits for the planet masses, in agreement with the discoverers. C22 obtained 3σ upper mass limits of about 11 M_⊕, 14.5 M_⊕, and 24.5 M_⊕ for planets b, c, and d, respectively, from the same spectroscopic observations. The 3σ upper limits for the planet masses from this work are M_p,b = 13.71 M_⊕, M_p,c = 9.72 M_⊕, and M_p,d = 26.57 M_⊕. Similarly to the discoverers, we obtained very different upper mass limits for planets c and d, although they have similar planet radii, which could be due to a somewhat different internal structure of these planets. Applying the relations of <cit.> and <cit.>, we also re-estimated the planet masses, which were previously forecasted by the discoverers as 6.29 ± 1.30 M_⊕, 7.96 ± 0.69 M_⊕, and 10.53 ± 1.05 M_⊕ for planets b, c, and d, respectively. The improved parameter values are presented in Table <ref>. Furthermore, taking into account the estimated planet masses calculated based on the relations of <cit.>, we predicted the number of additional RV measurements required to achieve a 3σ detection of each mass using the Radial Velocity Follow-up Calculator[See <http://maestria.astro.umontreal.ca/rvfc/>.] (see <cit.>) and the tool of Wilson et al. (in preparation). Based on these simulations, we find that another 27, 24, and 48 ESPRESSO RVs are needed to measure the predicted masses of planets b, c, and d, respectively. The expected RV semi-amplitudes assuming the estimated planet masses are K_b = 1.10 ± 0.12 m s^-1, K_c = 2.08 ± 0.10 m s^-1, and K_d = 1.46 ± 0.08 m s^-1. C22 also probed the planets from the viewpoint of future atmospheric characterisation using the transmission spectroscopy metric (TSM); see Eq. 1 in <cit.>. The authors obtained TSM values of 65 ± 10, 89 ± 16, and 67 ± 14 for planets b, c, and d, respectively. We revised these values based on the results from the present work. The improved TSM values (see Table <ref>) do not satisfy the recommended value of TSM > 90 for planets with a radius of 1.5 < R_p < 10 R_⊕. On the other hand, given that this threshold is set very strictly, we note, in agreement with the discoverers, that planet c could be a feasible target for transmission spectroscopy observations with future atmospheric characterisation missions, such as the planned Ariel space observatory <cit.>. Finally, we discuss the relevance of planet d among the known population of similar exoplanets. HD 22946d represents a warm sub-Neptune. Based on the NASA Exoplanet Archive[See <https://exoplanetarchive.ipac.caltech.edu/index.html>.] 
<cit.>, there are 5272 confirmed exoplanets up to 22 February 2023, but only 63 of these are sub-Neptune sized (1.75 < R_p < 3.5 R_⊕) and transit bright stars (G ≤ 10 mag). Only 7 of these 63 planets have orbital periods longer than 30 days, and only 4 of those 7 have an equilibrium temperature below 550 K. Three planets have a lower insolation flux than planet d, namely TOI-2076d <cit.>, HD 28109d <cit.>, and HD 191939 <cit.>. HD 22946d is therefore an interesting target for future follow-up observations. One of the questions to be answered in the near future is the composition and internal structure of sub-Neptune-type planets. Using CHEOPS observations, we determined the radius of planet d with high accuracy. Its true mass could be determined with another 48 ESPRESSO RV measurements, according to the estimate we present above. A combination of mass and radius gives the bulk density, which would be an important step towards understanding sub-Neptunes. § CONCLUSIONS Based on the combined TESS and CHEOPS observations, we refined several parameters of the HD 22946 planetary system. First of all, we improved the ephemerides of the planetary orbits in comparison with the discovery values. We can confirm that planets b and c have short orbital periods below 10 days, namely 4.040295 ± 0.000015 d and 9.573083 ± 0.000014 d, respectively. The third planet, HD 22946d, has an orbital period of 47.42489 ± 0.00011 d, which we were able to derive based on the additional CHEOPS observations. Furthermore, based on the combined TESS and CHEOPS observations, we derived precise radii for the planets, which are 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. On the one hand, we can confirm the conclusion of the discoverers that planet b is a super-Earth, while planets c and d are sub-Neptunes. On the other hand, we find the planet radii to be in tension with the values presented in the discovery paper, which is very probably due to a misuse of the fitting software by the discoverers. The low number of ESPRESSO RV measurements allowed us to derive only the 3σ upper limits for the planet masses, which are 13.71 M_⊕, 9.72 M_⊕, and 26.57 M_⊕ for planets b, c, and d, respectively. We also investigated the planets from the viewpoint of possible future follow-up observations. First of all, we can conclude that more RV observations are needed to improve the planet masses in this system. The applied spectroscopic observations allowed us to derive precise stellar parameters of the host star and to fit an initial spectroscopic orbit to the RV data, but there is ample room for improvement in this respect. We estimated that another 48 ESPRESSO RVs are needed to measure the predicted masses of all planets in HD 22946. Planet c could be a suitable target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting, because there are only a few similar confirmed exoplanets to date. Thanks to the synergy of the TESS and CHEOPS missions, the sample of such planets is growing. Such objects are worth following up in the near future, for example in order to investigate their composition and internal structure. Finally, future photometric and/or spectroscopic observations could also be aimed at searching for further possible planets in this system. We thank the anonymous reviewer for the helpful comments and suggestions. 
CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. ZG acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, the PRODEX Experiment Agreement No. 4000137122 between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the VEGA grant of the Slovak Academy of Sciences No. 2/0031/22, the Slovak Research and Development Agency contract No. APVV-20-0148, and the support of the city of Szombathely. GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Institute Agreement between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the Lendület LP2018-7/2021 grant of the Hungarian Academy of Science and the support of the city of Szombathely. ABr was supported by the SNSA. ACC acknowledges support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. B.-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726. DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 "Gaseousor rocky? Unveiling the nature of small worlds". This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127). This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. 
IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B-C33, as well as the support of the Generalitat de Catalunya/CERCA programme. This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. KGI and MNG are the ESA CHEOPS Project Scientists and are responsible for the ESA CHEOPS Guest Observers Programme. They do not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme. The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation; L.D. is an F.R.S.-FNRS Postdoctoral Researcher. LMS gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 ‘Gaseous or rocky? Unveiling the nature of small worlds’. This project was supported by the CNES. MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18). M.G. is an F.R.S.-FNRS Senior Research Associate. ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2_194576. NAW acknowledges UKSA grant ST/R004838/1. This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalizacão by these grants: UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER- 032113, PTDC/FIS-AST/28953/2017 & POCI-01-0145-FEDER-028953, PTDC/FIS-AST/28987/2017 & POCI-01-0145-FEDER-028987, O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT. PM acknowledges support from STFC research grant number ST/M001040/1. We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-C2-1-R, ESP2016-80435-C2-2-R, PGC2018-098153-B-C33, PGC2018-098153-B-C31, ESP2017-87676-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiologí­a (INTA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The MOC activities have been supported by the ESA contract No. 4000124370. SH gratefully acknowledges CNES funding through the grant 837319. S.C.C.B. acknowledges support from FCT through FCT contracts nr. IF/01312/2014/CP1215/CT0004. S.G.S. acknowledge support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). ACC and TW acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. V.V.G. is an F.R.S-FNRS Research Associate. XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members. YA and MJH acknowledge the support of the Swiss National Fund under grant 200020_172746. LBo, VNa, IPa, GPi, RRa and GSc acknowledge support from CHEOPS ASI-INAF agreement n. 2019-29-HH.0. 
NCS acknowledges support from the European Research Council through the grant agreement 101052347 (FIERCE). This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UIDB/04434/2020; UIDP/04434/2020. AT thanks the Science and Technology Facilities Council (STFC) for a PhD studentship. P.E.C. is funded by the Austrian Science Fund (FWF) Erwin Schroedinger Fellowship, program J4595-N. aa § ADDITIONAL TABLES
http://arxiv.org/abs/2306.12122v2
20230621091251
Genuine N-wise Quantum Incompatibility in a High-Dimensional System
[ "Xiaolin Zhang", "Rui Qu", "Zehong Chang", "Yunlong Wang", "Zhenyu Guo", "Min An", "Hong Gao", "Fuli Li", "Pei Zhang" ]
quant-ph
[ "quant-ph" ]
Ministry of Education Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, School of Physics, Xi'an Jiaotong University, Xi'an 710049, China [email protected] State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China Quantum incompatibility refers to the phenomenon that some quantum measurements cannot be performed simultaneously; it is also used in various quantum information tasks. However, it is still a challenge to certify whether a given set of multiple high-dimensional measurements respects a specific structure of incompatibility. To address this problem, we propose a modified quantum state discrimination protocol that decomposes complex compatibility structures into pair-wise ones and employs noise robustness to witness incompatibility structures. Our method is capable of detecting genuine n-wise incompatibility and some specific general compatibility structures, as demonstrated by our experimental verification of incompatibility structures of 4 mutually unbiased bases in a qutrit system. The experimental results show that our approach is a direct and intuitive tool to witness incompatibility structures in high-dimensional multi-measurement scenarios. 
Genuine N-wise Quantum Incompatibility in a High-Dimensional System Pei Zhang July 31, 2023 =================================================================== Introduction.—In quantum physics, some observables or measurements do not commute with each other, which implies that one cannot acquire precise information from them simultaneously <cit.>. This feature is known as quantum incompatibility, which gives a notion of nonclassicality for quantum measurements or observables. Recent advances have shown that quantum incompatibility is crucial to many applications, including quantum cryptography <cit.>, contextuality <cit.>, Einstein-Podolsky-Rosen steering <cit.>, Bell nonlocality <cit.> and quantum computing <cit.>. These applications motivate efforts to investigate quantum incompatibility. Fine first investigated the relation between measurement incompatibility and Bell nonlocality, demonstrating that only incompatible measurements can lead to the violation of Bell inequalities in the scenario where two parties use dichotomic measurements <cit.>. Later, Wolf et al. proved that every pair of incompatible measurements can be used to violate the Clauser-Horne-Shimony-Holt inequality <cit.>. Moreover, methods for the certification of incompatibility in qubit systems, such as Bell inequalities, have been proposed <cit.>, and the quantum state discrimination (QSD) protocol has recently been investigated as a witness for pair-wise measurement incompatibility <cit.>. Finally, certifying whether a set of multiple high-dimensional measurements respects a given type of incompatibility has remained a demanding task, as it is required in order to produce specific types of quantum correlations <cit.>. In this work, we show that it is feasible to certify that a set of measurements respects a given type of incompatibility structure by ruling out the corresponding compatibility structures. Our approach is based on the intuition that complex compatibility structures can be decomposed into pair-wise ones and that a set of measurements violating specific compatibility structures has limited robustness to noise. Firstly, we introduce the concepts of general compatibility structures and illustrate genuine n-wise incompatibility with an example of 3 noisy Pauli measurements in a qubit system. Secondly, we propose a modified QSD protocol for general compatibility structures and bound the noise robustness of measurements violating compatibility structures in a semidefinite programming (SDP) formulation. Furthermore, we discuss the cases of genuine n-wise incompatibility of d-dimensional mutually unbiased bases (MUBs) with adjustable probability distributions and perform a corresponding experiment in a qutrit system. Finally, the feasibility and superiority of our method is demonstrated by the experimental results, providing evidence that this tool can be used to witness incompatibility structures in high-dimensional multi-measurement scenarios. General compatibility structures and genuine n-wise incompatibility.—In quantum mechanics, a general measurement M is described by a positive operator-valued measure (POVM) with elements M_a≥ 0, where ∑ _aM_a=𝕀 (a=1,⋯,o labels the outcome of the measurement and 𝕀 is the identity operator). A set of m quantum measurements {M _x}_x=1^m is said to be fully compatible if and only if there is a set of parent measurement operators {G_λ} such that M_a | x=∑_λp(a | x,λ)G_λ ∀ a,x, where p(a | x,λ) ≥ 0 and ∑_ap(a | x,λ)=1 ∀ x,λ. Otherwise, these measurements are incompatible. 
In other words, a set of compatible measurements can always be implemented simultaneously by employing their parent measurement {G_λ} and post-processing the results according to the conditional probability distributions p(a | x,λ). When we consider the incompatibility of multiple measurements, partial notions of incompatibility may appear, e.g., measurements that are pairwise compatible but not fully compatible as a complete set <cit.>. Hence, it becomes crucial to explore different structures of incompatibility. One feasible approach to witness incompatibility structures is to rule out the corresponding compatibility structures <cit.>. In order to discuss this notion more specifically, we denote 𝒞 = [C_1,C_2,⋯,C_N] as general compatibility structures and {J_a | x^C_i} as sets of measurements respecting the compatibility structures {C_i}. A set of measurements {M_a | x} is 𝒞-incompatible when it cannot be written as a convex combination of measurements that follow the compatibility structures [C_1,C_2,⋯,C_N]. More precisely, a set of measurements {M_a | x} is 𝒞-incompatible if it cannot be written as M_a | x = ∑_i p_iJ_a | x^C_i ∀ a,x, where {p_i} is a probability distribution over the compatibility structures C_i. This approach allows one to test whether a specific type of compatibility structure is present in a given set of measurements. A set of measurements is genuinely n-wise incompatible when it cannot be written as a convex combination of measurements that are (n-1)-wise compatible on different partitions <cit.>. It is called genuine incompatibility because it excludes all possible 2-wise compatibility structures, which are the basic structures of quantum incompatibility. Here we denote 𝒞_n^m as n-wise compatibility structures in a set of m measurements and 𝒞_n^n=[C_n^(s,t,⋯,r)] as a particular n-wise compatibility structure with measurements (s,t,⋯,r) in it. Obviously, there are naturally m!/(n!(m-n)!) groups of n measurements in different n-wise compatibility structures (see Fig. (<ref>)(a)). Therefore, a set of n measurements is genuinely n-wise incompatible when it cannot be written as: M_a | x=J_a | x^C_n^(s,t,⋯,r)=∑_i^n p_(s,t,⋯) J_a | x^C_n-1^(s,t,⋯), where p_(s,t,⋯,r) is the probability distribution over the corresponding groups of measurements (see Fig. (<ref>)(b)). Here, we illustrate these concepts with an example of three noisy Pauli measurements in a qubit system: M_a | x^η:=η_xΠ_a | x+(1-η_x)𝕀/2, where x=1,2,3 refers to the Pauli measurements (X,Y,Z), respectively, Π _a | x are their eigenprojectors, and η_x is the corresponding sharpness. Referring to Eq. (<ref>), these three measurements are fully compatible (C_full^(1,2,3)) when they have a common parent measurement, i.e. for ∑_i^3η_i^2≤ 1 <cit.>, and pairwise compatible ([C_2^(1,2),C_2^(1,3),C_2^(2,3)]) when two of them have a parent measurement, i.e. for ∑_i^2η_i^2≤ 1 <cit.>. According to Eq. (<ref>), this set of measurements can be written as a convex combination of sets whose measurements are pairwise compatible for ∑_i^3η_i≤√(2)+1 (see Fig. <ref>): M_a | x = p_(1,2)J_a | x^C_2^(1,2)+p_(1,3)J_a | x^C_2^(1,3)+p_(2,3)J_a | x^C_2^(2,3). Therefore, it is only for ∑_i^3η_i > √(2)+1 that they are genuinely 3-wise incompatible. Similarly to the bound √(2)+1 for the compatibility structure 𝒞_3^3 of three noisy Pauli measurements, the bound on n-wise compatibility serves as a hyperplane for genuine n-wise incompatibility: any violation beyond this hyperplane excludes all possible (n-1)-wise compatibility structures. 
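These two conditions are easy to evaluate numerically; the sketch below constructs the noisy Pauli POVMs defined above and checks both bounds for an arbitrary, purely illustrative choice of sharpness values:

```python
import numpy as np

def noisy_pauli(axis, eta):
    """Noisy Pauli POVM with elements eta * Pi_{a|x} + (1 - eta) * I / 2."""
    sigma = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
             "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
             "z": np.array([[1, 0], [0, -1]], dtype=complex)}[axis]
    I = np.eye(2, dtype=complex)
    return [eta * (I + s * sigma) / 2 + (1 - eta) * I / 2 for s in (+1, -1)]

# Illustrative sharpness values (not taken from the text):
eta = {"x": 0.85, "y": 0.85, "z": 0.85}
assert np.allclose(sum(noisy_pauli("x", eta["x"])), np.eye(2))  # valid POVM

# Each pair is compatible iff eta_s^2 + eta_t^2 <= 1, while escaping every
# pairwise decomposition (genuine 3-wise incompatibility) requires
# eta_1 + eta_2 + eta_3 > sqrt(2) + 1.
pairs = [("x", "y"), ("x", "z"), ("y", "z")]
print({s + t: eta[s] ** 2 + eta[t] ** 2 <= 1 for s, t in pairs})  # all False
print(sum(eta.values()) > np.sqrt(2) + 1)                          # True
```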
Moreover, (n-1)-wise compatibility structures can always be decomposed into (n-2)-wise compatibility structures, and the same applies to (n-2)-wise compatibility structures. Using this approach, we can transform a complex compatibility structure into a convex combination of 2-wise compatibility structures, which are easy to witness in the QSD protocol. Furthermore, full compatibility structures are the intersections of pairwise compatibility structures and are automatically ruled out together with the pairwise ones, which implies that no compatibility structures remain within genuine n-wise incompatibility. To calculate the bound of genuine n-wise incompatibility structures, we transform this problem into an SDP formulation <cit.>. Construction of genuine n-wise incompatibility hyperplanes via a modified QSD protocol.—In a modified QSD protocol, Alice possesses a set of quantum measurements denoted as { M_a | x}_a,x (where x = 1,⋯ ,m, a = 1,⋯ ,o), and shares with Bob the probability distributions of measurements and outcomes, q(x) and q(a | x), where q(x)q(a | x)=q(a,x). Then Bob prepares a set of ensembles {ε _x}_x=1,⋯ ,m with probability distribution q(x), and sends quantum states from ε _x={ρ _a | x,q(a | x)} with labels a and x to Alice. Upon receiving the state and information from Bob, Alice's task is to identify the label of the state, i.e., to correctly identify a. Firstly, Alice performs { M_a | x}_a,x on the states from the corresponding ensembles {ε _x}_x, and evaluates the average probability of correctly guessing the state a as: P_g({ε _x},{ M_ x}) =max_{ε _x}∑ _x,a q(a,x)Tr[ρ _a | xM_a | x], where Bob optimizes his ensembles {ε _x} after every round. Then Alice performs a fixed measurement {G}={G_z^(s,t)}_z with probability distribution p(z) on every pair of ensembles {ε _s} and {ε _t}, and evaluates the average correct guessing probability as: P_g^C({ε _s},{ε _t})= max_{G}∑_x∈{s,t},a q(a,x) ∑_zp(z) Tr[ρ _a | xG_z^(s,t)], where {G} plays the same role as the parent measurement in Eq. (<ref>) and is optimized after every round to achieve the maximal probability that compatible measurements can reach. Since incompatible measurements can provide advantages over compatible measurements in QSD protocols <cit.>, a witness W_2^(s,t) can be defined to detect the 2-wise incompatibility structure (C_2^(s,t)) between measurements M_s and M_t as follows: C_2^(s,t): W_2^(s,t) = P_g({ε _x},{ M_ x})-P_g^C({ε _s},{ε _t}) for x∈{s,t}, where M_s and M_t are compatible when W_2^(s,t)≤ 0, and incompatible otherwise. Recently, the incompatibility noise robustness has been used to detect pairwise incompatibility <cit.>. By leveraging the fact that noise robustness can form a hyperplane for compatibility structures, we reformulate the witness for pairwise compatibility into an SDP formulation (see S1 in the Supplemental Material): maximize R = ∑_xq(x)η_x s.t. M_a | x^η_x = η_xM_a | x+(1-η_x)Tr[M_a | x]𝕀/d and M_a | x^η_x = J_a | x^C_2^(s,t) for x∈{s,t}, where η_x is the sharpness of the measurement M_x, the balanced noise robustness R serves as a hyperplane separating compatibility structures from incompatibility structures, and J_a | x^C_2^(s,t) is a set of measurements satisfying the compatibility condition in Eq. (<ref>). After the optimization, these two measurements M_s and M_t are compatible when R=1, and incompatible when R<1. 
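This SDP can be implemented directly, for example with cvxpy. The sketch below follows the constraints written above for a single pair of measurements; the choice of solver and its defaults are assumptions, and the example measurements are two qubit MUBs rather than the qutrit measurements used later in the experiment:

```python
import numpy as np
import cvxpy as cp

def pairwise_noise_robustness(A, B, q=(0.5, 0.5)):
    """Balanced noise robustness R = q_s*eta_s + q_t*eta_t for two POVMs
    A and B (lists of d x d numpy arrays), following the SDP above.
    R = 1 means the unnoised pair admits a parent measurement (compatible);
    R < 1 witnesses their incompatibility."""
    A = [np.asarray(M, dtype=complex) for M in A]   # match Hermitian variables
    B = [np.asarray(M, dtype=complex) for M in B]
    d = A[0].shape[0]
    I = np.eye(d)
    # G[a][b] are the parent-POVM elements whose marginals reproduce the
    # noisy versions of A and B.
    G = [[cp.Variable((d, d), hermitian=True) for _ in B] for _ in A]
    eta = cp.Variable(2)
    cons = [eta >= 0, eta <= 1]
    cons += [G[a][b] >> 0 for a in range(len(A)) for b in range(len(B))]
    for a in range(len(A)):
        noisy = eta[0] * A[a] + (1 - eta[0]) * np.trace(A[a]).real * I / d
        cons.append(sum(G[a][b] for b in range(len(B))) == noisy)
    for b in range(len(B)):
        noisy = eta[1] * B[b] + (1 - eta[1]) * np.trace(B[b]).real * I / d
        cons.append(sum(G[a][b] for a in range(len(A))) == noisy)
    cp.Problem(cp.Maximize(q[0] * eta[0] + q[1] * eta[1]), cons).solve()
    return q[0] * eta[0].value + q[1] * eta[1].value

# Two qubit MUBs (Pauli Z and X eigenbases): the result should come out at
# about 1/sqrt(2) ~ 0.707, matching the MUB expression below for d = 2, n = 2.
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
P_plus = np.full((2, 2), 0.5)
X = [P_plus, np.eye(2) - P_plus]
print(pairwise_noise_robustness(Z, X))
```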
Referring to the transition from n-wise compatibility structures to 2-wise compatibility structures, we can obtain the hyperplane for genuine n-wise incompatibility as follows (see S2 in the Supplemental Material): max_{p} R = ∑_xq(x)η_x s.t. M_a | x^η_x = J_a | x^C_n^(s,t,⋯,r), J_a | x^C_n^(s,t,…,r) = ∑ p_(s,t,⋯,r) (J_a | x^C_n-1^(s,t,⋯) + J_a | x^con(r)) = ∑_i p_(s,t) (J_a | x^C_2^(s,t)+J_a | x^con(⋯,r)), where {p} is the probability distribution over the C_2^(s,t) and con(x) represents constraints on the measurement M_x that ensure a valid physical decomposition. Using this approach, a set of measurements is genuinely n-wise incompatible when R<1 and admits compatibility structures when R=1. In this way, we establish a hyperplane based on the balanced noise robustness R for genuine n-wise incompatibility, which helps exclude all possible compatibility structures in a set of n measurements. Furthermore, there is an intriguing case where a specific compatibility structure (for example, C_3^(1,2,3)) may exist that surpasses the hyperplane if we restrict the corresponding probability to 0 (p_(1,2,3)=0) before the optimization. By an extension of this logic, the hyperplane constructed by Eq. (<ref>) can be used as a witness for general compatibility structures, as long as the probability distribution is predetermined. The genuine n-wise incompatibility hyperplane offers several advantages. Firstly, it is easy to test whether a set of arbitrary POVMs with different probability distributions is genuinely incompatible or not without entanglement, thereby extending the range of applications compared to previous incompatibility witnesses <cit.>. Secondly, it can be applied to multiple high-dimensional POVMs, which is advantageous for investigating complex compatibility structures of quantum measurements beyond qubit systems <cit.>. Finally, as a hyperplane, it can be represented by visible geometrical polytopes, which facilitates an intuitive understanding of quantum incompatibility structures of multiple measurements. Bounding the genuine incompatibility for mutually unbiased bases.—An interesting special case admits an analytical solution when the given measurements are MUBs. In the symmetric case of a set of m d-dimensional MUBs, considering genuine n-wise incompatibility, there are m!/(n!(m-n)!) groups of n measurements, with probability distributions q(x)=1/n, q(a | x)=1/d ∀ a,x in every group. Then the genuine n-wise incompatibility hyperplane in every group can be written as (see S3 in the Supplemental Material): 1/n∑_xη_x ≤ (√(d)-1)/(n(d-1))+(n-1)/n. When m=4 and q(a | x)=1/3 ∀ a,x, there are four hyperplanes for genuine 3-wise incompatibility and one hyperplane for genuine 4-wise incompatibility. These hyperplanes are respectively bounded by (√(3)+3)/6 and (√(3)+5)/8. In the asymmetric case, the probability distributions are given by q(1)=1/6, q(2)=1/3, q(3)=1/2. The genuine 3-wise incompatibility hyperplane is then given by ∑_x q(x)η_x≤ 0.864, and if we predetermine p_(1,2)=0 (thereby removing C_2^(1,2) from the allowed decompositions), the bound becomes 0.854. This implies that we can capture the existence of C_2^(1,2) in this set within the gap between 0.854 and 0.864. To confirm the feasibility and superiority of our methods, we present an experimental verification of genuine 3-wise and 4-wise incompatibility of 4 MUBs in a qutrit system, and capture a specific compatibility structure with the other compatibility structures ruled out. 
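The numerical bounds quoted above follow directly from this expression, as a short check shows:

```python
import numpy as np

def mub_bound(d, n):
    """Right-hand side of the genuine n-wise incompatibility hyperplane for
    n MUBs in dimension d: (sqrt(d) - 1) / (n (d - 1)) + (n - 1) / n."""
    return (np.sqrt(d) - 1.0) / (n * (d - 1.0)) + (n - 1.0) / n

print(mub_bound(3, 3), (np.sqrt(3) + 3) / 6)  # 0.7887..., genuine 3-wise, d = 3
print(mub_bound(3, 4), (np.sqrt(3) + 5) / 8)  # 0.8415..., genuine 4-wise, d = 3
print(mub_bound(2, 2))                         # 0.7071..., the qubit pair above
```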
Experiments and results.—We use photons carrying the orbital angular momentum (OAM) degree of freedom to test the practicability of the modified QSD protocol, as depicted in Fig. <ref>. We first utilize the type-II spontaneous parametric down-conversion (SPDC) process to set up a heralded single-photon source, which generates orthogonally polarized photon pairs. After a dichroic mirror (DM) removes the pump light, a polarization beam splitter (PBS) separates the photons: horizontally polarized photons are coupled into a single-mode fiber (SMF) for the quantum prepare-and-measure scenario, while vertically polarized photons serve as a coincidence trigger. After passing through the SMF, the horizontally polarized photons are manipulated by the first spatial light modulator (SLM1), which displays computer-generated holograms (CGHs) to prepare the required quantum states <cit.>. We use two 200 mm lenses, L3 and L4, to form a 4f system, which images the plane of SLM1 onto that of the second spatial light modulator (SLM2). SLM2 displays CGHs to measure the quantum states, and the output photons are coupled into the SMF by two 300 mm lenses, L5 and L6. Finally, we connect the two SMFs to two single-photon avalanche detectors (SPDs), whose outputs are subsequently fed into a coincidence circuit with a coincidence time window of 1 ns. Specifically, we choose the l=-2,0,2 modes to construct a set of four MUBs in a qutrit system. It is worth noting that coherent light cannot replace the single-photon source, because it cannot exactly generate the required single-party qutrit states in the QSD protocol. Next we explain how to simulate noisy MUB measurements using actual noise. We place two independent light sources after the SLMs to ensure that the noise is not affected by the generated mode. The light sources consist of two variable-intensity LEDs placed in front of the two SPDs, respectively, to allow independent adjustment of the noise proportion entering each single-photon detector. This is done by changing the brightness of the LEDs to control the total number of photons scattered into the detectors. We test the fidelity of the eigenstates of the four MUBs; the average fidelity is F = 0.9873 ± 0.0004 (see S4 in the Supplemental Material). Table <ref> displays our experimental results, and Fig. <ref> shows a geometrical illustration of the results in Table <ref>. Note that the n-wise compatibility structure is an irregular geometry, as shown in Fig. <ref>(a); in Fig. <ref> we use a sphere as a simplified model to represent the convex hull of the compatibility structures. The results for the symmetric genuine 3-wise incompatibility cases seem the same, but they differ at the level of 0.0001. Our experimental results closely match the theoretical predictions, which demonstrates the feasibility of our method for testing genuine n-wise incompatibility in high-dimensional systems. Furthermore, our method provides an intuitive and direct tool for a deeper understanding of quantum incompatibility structures in high-dimensional multi-measurement scenarios. Conclusions.—In this work, we have shown that complex compatibility structures can be decomposed into simple ones and that measurements violating different compatibility structures have limited robustness to noise. We thus modify the QSD protocol and construct a noise-robustness hyperplane to test whether given measurements respect specific incompatibility structures. 
To show the feasibility and superiority of our method, we experimentally test genuine 3-wise and 4-wise incompatibility in a qutrit system without using entanglement, and capture a specific compatibility structure C_2^(1,2) with the other compatibility structures ruled out. Overall, our experimental results show that our approach provides a direct and intuitive tool to detect quantum incompatibility structures in high-dimensional multi-measurement scenarios, which paves the way for future implementations and applications of quantum incompatibility as a resource in such scenarios. Following our work, there remain some open questions. First, full compatibility is defined as a compatibility structure in which a set of measurements can all be performed simultaneously <cit.>. It would be interesting to explore the notion of compatibility structures where measurements are 2-wise compatible but not fully compatible. Second, one can consider device-independent scenarios for multiple high-dimensional measurements and try to find a bound for genuine n-wise incompatibility <cit.>. This work was supported by the National Natural Science Foundation of China (Grant No. 12174301), the State Key Laboratory of Applied Optics and the Natural Science Basic Research Program of Shaanxi (Program No. 2023-JC-JQ-01).
http://arxiv.org/abs/2306.07947v1
20230611165023
An alternative proof of $\widehat{\mathfrak{sl}}_2'$ standard module semi-infinite structure
[ "Timur Kenzhaev" ]
math.RT
[ "math.RT", "math-ph", "math.MP" ]
http://arxiv.org/abs/2306.09755v1
20230616104141
Study of the Bose-Einstein correlations of same-sign pions in proton-lead collisions
[ "LHCb collaboration", "R. Aaij", "A. S. W. Abdelmotteleb", "C. Abellan Beteta", "F. Abudinén", "T. Ackernley", "B. Adeva", "M. Adinolfi", "P. Adlarson", "H. Afsharnia", "C. Agapopoulou", "C. A. Aidala", "Z. Ajaltouni", "S. Akar", "K. Akiba", "P. Albicocco", "J. Albrecht", "F. Alessio", "M. Alexander", "A. Alfonso Albero", "Z. Aliouche", "P. Alvarez Cartelle", "R. Amalric", "S. Amato", "J. L. Amey", "Y. Amhis", "L. An", "L. Anderlini", "M. Andersson", "A. Andreianov", "M. Andreotti", "D. Andreou", "D. Ao", "F. Archilli", "A. Artamonov", "M. Artuso", "E. Aslanides", "M. Atzeni", "B. Audurier", "I. B Bachiller Perea", "S. Bachmann", "M. Bachmayer", "J. J. Back", "A. Bailly-reyre", "P. Baladron Rodriguez", "V. Balagura", "W. Baldini", "J. Baptista de Souza Leite", "M. Barbetti", "R. J. Barlow", "S. Barsuk", "W. Barter", "M. Bartolini", "F. Baryshnikov", "J. M. Basels", "G. Bassi", "B. Batsukh", "A. Battig", "A. Bay", "A. Beck", "M. Becker", "F. Bedeschi", "I. B. Bediaga", "A. Beiter", "S. Belin", "V. Bellee", "K. Belous", "I. Belov", "I. Belyaev", "G. Benane", "G. Bencivenni", "E. Ben-Haim", "A. Berezhnoy", "R. Bernet", "S. Bernet Andres", "D. Berninghoff", "H. C. Bernstein", "C. Bertella", "A. Bertolin", "C. Betancourt", "F. Betti", "Ia. Bezshyiko", "J. Bhom", "L. Bian", "M. S. Bieker", "N. V. Biesuz", "P. Billoir", "A. Biolchini", "M. Birch", "F. C. R. Bishop", "A. Bitadze", "A. Bizzeti", "M. P. Blago", "T. Blake", "F. Blanc", "J. E. Blank", "S. Blusk", "D. Bobulska", "V. B Bocharnikov", "J. A. Boelhauve", "O. Boente Garcia", "T. Boettcher", "A. Boldyrev", "C. S. Bolognani", "R. Bolzonella", "N. Bondar", "F. Borgato", "S. Borghi", "M. Borsato", "J. T. Borsuk", "S. A. Bouchiba", "T. J. V. Bowcock", "A. Boyer", "C. Bozzi", "M. J. Bradley", "S. Braun", "A. Brea Rodriguez", "N. Breer", "J. Brodzicka", "A. Brossa Gonzalo", "J. Brown", "D. Brundu", "A. Buonaura", "L. Buonincontri", "A. T. Burke", "C. Burr", "A. Bursche", "A. Butkevich", "J. S. Butter", "J. Buytaert", "W. Byczynski", "S. Cadeddu", "H. Cai", "R. Calabrese", "L. Calefice", "S. Cali", "M. Calvi", "M. Calvo Gomez", "P. Campana", "D. H. Campora Perez", "A. F. Campoverde Quezada", "S. Capelli", "L. Capriotti", "A. Carbone", "R. Cardinale", "A. Cardini", "P. Carniti", "L. Carus", "A. Casais Vidal", "R. Caspary", "G. Casse", "M. Cattaneo", "G. Cavallero", "V. Cavallini", "S. Celani", "J. Cerasoli", "D. Cervenkov", "A. J. Chadwick", "I. Chahrour", "M. G. Chapman", "M. Charles", "Ph. Charpentier", "C. A. Chavez Barajas", "M. Chefdeville", "C. Chen", "S. Chen", "A. Chernov", "S. Chernyshenko", "V. Chobanova", "S. Cholak", "M. Chrzaszcz", "A. Chubykin", "V. Chulikov", "P. Ciambrone", "M. F. Cicala", "X. Cid Vidal", "G. Ciezarek", "P. Cifra", "G. Ciullo", "P. E. L. Clarke", "M. Clemencic", "H. V. Cliff", "J. Closier", "J. L. Cobbledick", "V. Coco", "J. Cogan", "E. Cogneras", "L. Cojocariu", "P. Collins", "T. Colombo", "L. Congedo", "A. Contu", "N. Cooke", "I. Corredoira", "G. Corti", "B. Couturier", "D. C. Craik", "M. Cruz Torres", "R. Currie", "C. L. Da Silva", "S. Dadabaev", "L. Dai", "X. Dai", "E. Dall'Occo", "J. Dalseno", "C. D'Ambrosio", "J. Daniel", "A. Danilina", "P. d'Argent", "J. E. Davies", "A. Davis", "O. De Aguiar Francisco", "J. de Boer", "K. De Bruyn", "S. De Capua", "M. De Cian", "U. De Freitas Carneiro Da Graca", "E. De Lucia", "J. M. De Miranda", "L. De Paula", "M. De Serio", "D. De Simone", "P. De Simone", "F. De Vellis", "J. A. de Vries", "C. T. Dean", "F. Debernardis", "D. Decamp", "V. Dedu", "L. Del Buono", "B. 
Delaney", "H. -P. Dembinski", "V. Denysenko", "O. Deschamps", "F. Dettori", "B. Dey", "P. Di Nezza", "I. Diachkov", "S. Didenko", "L. Dieste Maronas", "S. Ding", "V. Dobishuk", "A. Dolmatov", "C. Dong", "A. M. Donohoe", "F. Dordei", "A. C. dos Reis", "L. Douglas", "A. G. Downes", "P. Duda", "M. W. Dudek", "L. Dufour", "V. Duk", "P. Durante", "M. M. Duras", "J. M. Durham", "D. Dutta", "A. Dziurda", "A. Dzyuba", "S. Easo", "U. Egede", "A. Egorychev", "V. Egorychev", "C. Eirea Orro", "S. Eisenhardt", "E. Ejopu", "S. Ek-In", "L. Eklund", "M. E Elashri", "J. Ellbracht", "S. Ely", "A. Ene", "E. Epple", "S. Escher", "J. Eschle", "S. Esen", "T. Evans", "F. Fabiano", "L. N. Falcao", "Y. Fan", "B. Fang", "L. Fantini", "M. Faria", "S. Farry", "D. Fazzini", "L. F Felkowski", "M. Feo", "M. Fernandez Gomez", "A. D. Fernez", "F. Ferrari", "L. Ferreira Lopes", "F. Ferreira Rodrigues", "S. Ferreres Sole", "M. Ferrillo", "M. Ferro-Luzzi", "S. Filippov", "R. A. Fini", "M. Fiorini", "M. Firlej", "K. M. Fischer", "D. S. Fitzgerald", "C. Fitzpatrick", "T. Fiutowski", "F. Fleuret", "M. Fontana", "F. Fontanelli", "R. Forty", "D. Foulds-Holt", "V. Franco Lima", "M. Franco Sevilla", "M. Frank", "E. Franzoso", "G. Frau", "C. Frei", "D. A. Friday", "L. Frontini", "J. Fu", "Q. Fuehring", "T. Fulghesu", "E. Gabriel", "G. Galati", "M. D. Galati", "A. Gallas Torreira", "D. Galli", "S. Gambetta", "M. Gandelman", "P. Gandini", "H. G Gao", "R. Gao", "Y. Gao", "Y. Gao", "M. Garau", "L. M. Garcia Martin", "P. Garcia Moreno", "J. García Pardiñas", "B. Garcia Plana", "F. A. Garcia Rosales", "L. Garrido", "C. Gaspar", "R. E. Geertsema", "D. Gerick", "L. L. Gerken", "E. Gersabeck", "M. Gersabeck", "T. Gershon", "L. Giambastiani", "V. Gibson", "H. K. Giemza", "A. L. Gilman", "M. Giovannetti", "A. Gioventù", "P. Gironella Gironell", "C. Giugliano", "M. A. Giza", "K. Gizdov", "E. L. Gkougkousis", "V. V. Gligorov", "C. Göbel", "E. Golobardes", "D. Golubkov", "A. Golutvin", "A. Gomes", "S. Gomez Fernandez", "F. Goncalves Abrantes", "M. Goncerz", "G. Gong", "I. V. Gorelov", "C. Gotti", "J. P. Grabowski", "T. Grammatico", "L. A. Granado Cardoso", "E. Graugés", "E. Graverini", "G. Graziani", "A. T. Grecu", "L. M. Greeven", "N. A. Grieser", "L. Grillo", "S. Gromov", "C. Gu", "M. Guarise", "M. Guittiere", "V. Guliaeva", "P. A. Günther", "A. K. Guseinov", "E. Gushchin", "Y. Guz", "T. Gys", "T. Hadavizadeh", "C. Hadjivasiliou", "G. Haefeli", "C. Haen", "J. Haimberger", "S. C. Haines", "T. Halewood-leagas", "M. M. Halvorsen", "P. M. Hamilton", "J. Hammerich", "Q. Han", "X. Han", "S. Hansmann-Menzemer", "L. Hao", "N. Harnew", "T. Harrison", "C. Hasse", "M. Hatch", "J. He", "K. Heijhoff", "F. H Hemmer", "C. Henderson", "R. D. L. Henderson", "A. M. Hennequin", "K. Hennessy", "L. Henry", "J. Herd", "J. Heuel", "A. Hicheur", "D. Hill", "M. Hilton", "S. E. Hollitt", "J. Horswill", "R. Hou", "Y. Hou", "J. Hu", "J. Hu", "W. Hu", "X. Hu", "W. Huang", "X. Huang", "W. Hulsbergen", "R. J. Hunter", "M. Hushchyn", "D. Hutchcroft", "P. Ibis", "M. Idzik", "D. Ilin", "P. Ilten", "A. Inglessi", "A. Iniukhin", "A. Ishteev", "K. Ivshin", "R. Jacobsson", "H. Jage", "S. J. Jaimes Elles", "S. Jakobsen", "E. Jans", "B. K. Jashal", "A. Jawahery", "V. Jevtic", "E. Jiang", "X. Jiang", "Y. Jiang", "M. John", "D. Johnson", "C. R. Jones", "T. P. Jones", "S. J Joshi", "B. Jost", "N. Jurik", "I. Juszczak", "S. Kandybei", "Y. Kang", "M. Karacson", "D. Karpenkov", "M. Karpov", "J. W. Kautz", "F. Keizer", "D. M. Keller", "M. Kenzie", "T. Ketel", "B. Khanji", "A. 
Kharisova", "S. Kholodenko", "G. Khreich", "T. Kirn", "V. S. Kirsebom", "O. Kitouni", "S. Klaver", "N. Kleijne", "K. Klimaszewski", "M. R. Kmiec", "S. Koliiev", "L. Kolk", "A. Kondybayeva", "A. Konoplyannikov", "P. Kopciewicz", "R. Kopecna", "P. Koppenburg", "M. Korolev", "I. Kostiuk", "O. Kot", "S. Kotriakhova", "A. Kozachuk", "P. Kravchenko", "L. Kravchuk", "M. Kreps", "S. Kretzschmar", "P. Krokovny", "W. Krupa", "W. Krzemien", "J. Kubat", "S. Kubis", "W. Kucewicz", "M. Kucharczyk", "V. Kudryavtsev", "E. K Kulikova", "A. Kupsc", "D. Lacarrere", "G. Lafferty", "A. Lai", "A. Lampis", "D. Lancierini", "C. Landesa Gomez", "J. J. Lane", "R. Lane", "C. Langenbruch", "J. Langer", "O. Lantwin", "T. Latham", "F. Lazzari", "C. Lazzeroni", "R. Le Gac", "S. H. Lee", "R. Lefèvre", "A. Leflat", "S. Legotin", "P. Lenisa", "O. Leroy", "T. Lesiak", "B. Leverington", "A. Li", "H. Li", "K. Li", "P. Li", "P. -R. Li", "S. Li", "T. Li", "T. Li", "Y. Li", "Z. Li", "X. Liang", "C. Lin", "T. Lin", "R. Lindner", "V. Lisovskyi", "R. Litvinov", "G. Liu", "H. Liu", "K. Liu", "Q. Liu", "S. Liu", "A. Lobo Salvia", "A. Loi", "R. Lollini", "J. Lomba Castro", "I. Longstaff", "J. H. Lopes", "A. Lopez Huertas", "S. López Soliño", "G. H. Lovell", "Y. Lu", "C. Lucarelli", "D. Lucchesi", "S. Luchuk", "M. Lucio Martinez", "V. Lukashenko", "Y. Luo", "A. Lupato", "E. Luppi", "K. Lynch", "X. -R. Lyu", "R. Ma", "S. Maccolini", "F. Machefert", "F. Maciuc", "I. Mackay", "V. Macko", "L. R. Madhan Mohan", "A. Maevskiy", "D. Maisuzenko", "M. W. Majewski", "J. J. Malczewski", "S. Malde", "B. Malecki", "A. Malinin", "T. Maltsev", "G. Manca", "G. Mancinelli", "C. Mancuso", "R. Manera Escalero", "D. Manuzzi", "C. A. Manzari", "D. Marangotto", "J. F. Marchand", "U. Marconi", "S. Mariani", "C. Marin Benito", "J. Marks", "A. M. Marshall", "P. J. Marshall", "G. Martelli", "G. Martellotti", "L. Martinazzoli", "M. Martinelli", "D. Martinez Santos", "F. Martinez Vidal", "A. Massafferri", "M. Materok", "R. Matev", "A. Mathad", "V. Matiunin", "C. Matteuzzi", "K. R. Mattioli", "A. Mauri", "E. Maurice", "J. Mauricio", "M. Mazurek", "M. McCann", "L. Mcconnell", "T. H. McGrath", "N. T. McHugh", "A. McNab", "R. McNulty", "B. Meadows", "G. Meier", "D. Melnychuk", "S. Meloni", "M. Merk", "A. Merli", "L. Meyer Garcia", "D. Miao", "H. Miao", "M. Mikhasenko", "D. A. Milanes", "M. Milovanovic", "M. -N. Minard", "A. Minotti", "E. Minucci", "T. Miralles", "S. E. Mitchell", "B. Mitreska", "D. S. Mitzel", "A. Modak", "A. Mödden", "R. A. Mohammed", "R. D. Moise", "S. Mokhnenko", "T. Mombächer", "M. Monk", "I. A. Monroy", "S. Monteil", "G. Morello", "M. J. Morello", "M. P. Morgenthaler", "J. Moron", "A. B. Morris", "A. G. Morris", "R. Mountain", "H. Mu", "E. Muhammad", "F. Muheim", "M. Mulder", "K. Müller", "D. Murray", "R. Murta", "P. Muzzetto", "P. Naik", "T. Nakada", "R. Nandakumar", "T. Nanut", "I. Nasteva", "M. Needham", "N. Neri", "S. Neubert", "N. Neufeld", "P. Neustroev", "R. Newcombe", "J. Nicolini", "D. Nicotra", "E. M. Niel", "S. Nieswand", "N. Nikitin", "N. S. Nolte", "C. Normand", "J. Novoa Fernandez", "G. N Nowak", "C. Nunez", "A. Oblakowska-Mucha", "V. Obraztsov", "T. Oeser", "S. Okamura", "R. Oldeman", "F. Oliva", "C. J. G. Onderwater", "R. H. O'Neil", "J. M. Otalora Goicochea", "T. Ovsiannikova", "P. Owen", "A. Oyanguren", "O. Ozcelik", "K. O. Padeken", "B. Pagare", "P. R. Pais", "T. Pajero", "A. Palano", "M. Palutan", "G. Panshin", "L. Paolucci", "A. Papanestis", "M. Pappagallo", "L. L. Pappalardo", "C. Pappenheimer", "W. Parker", "C. Parkes", "B. 
Passalacqua", "G. Passaleva", "A. Pastore", "M. Patel", "C. Patrignani", "C. J. Pawley", "A. Pellegrino", "M. Pepe Altarelli", "S. Perazzini", "D. Pereima", "A. Pereiro Castro", "P. Perret", "K. Petridis", "A. Petrolini", "S. Petrucci", "M. Petruzzo", "H. Pham", "A. Philippov", "R. Piandani", "L. Pica", "M. Piccini", "B. Pietrzyk", "G. Pietrzyk", "D. Pinci", "F. Pisani", "M. Pizzichemi", "V. Placinta", "J. Plews", "M. Plo Casasus", "F. Polci", "M. Poli Lener", "A. Poluektov", "N. Polukhina", "I. Polyakov", "E. Polycarpo", "S. Ponce", "D. Popov", "S. Poslavskii", "K. Prasanth", "L. Promberger", "C. Prouve", "V. Pugatch", "V. Puill", "G. Punzi", "H. R. Qi", "W. Qian", "N. Qin", "S. Qu", "R. Quagliani", "N. V. Raab", "B. Rachwal", "J. H. Rademacker", "R. Rajagopalan", "M. Rama", "M. Ramos Pernas", "M. S. Rangel", "F. Ratnikov", "G. Raven", "M. Rebollo De Miguel", "F. Redi", "J. Reich", "F. Reiss", "Z. Ren", "P. K. Resmi", "R. Ribatti", "A. M. Ricci", "S. Ricciardi", "K. Richardson", "M. Richardson-Slipper", "K. Rinnert", "P. Robbe", "G. Robertson", "E. Rodrigues", "E. Rodriguez Fernandez", "J. A. Rodriguez Lopez", "E. Rodriguez Rodriguez", "D. L. Rolf", "A. Rollings", "P. Roloff", "V. Romanovskiy", "M. Romero Lamas", "A. Romero Vidal", "M. Rotondo", "M. S. Rudolph", "T. Ruf", "R. A. Ruiz Fernandez", "J. Ruiz Vidal", "A. Ryzhikov", "J. Ryzka", "J. J. Saborido Silva", "N. Sagidova", "N. Sahoo", "B. Saitta", "M. Salomoni", "C. Sanchez Gras", "I. Sanderswood", "R. Santacesaria", "C. Santamarina Rios", "M. Santimaria", "L. Santoro", "E. Santovetti", "D. Saranin", "G. Sarpis", "M. Sarpis", "A. Sarti", "C. Satriano", "A. Satta", "M. Saur", "D. Savrina", "H. Sazak", "L. G. Scantlebury Smead", "A. Scarabotto", "S. Schael", "S. Scherl", "A. M. Schertz", "M. Schiller", "H. Schindler", "M. Schmelling", "B. Schmidt", "S. Schmitt", "O. Schneider", "A. Schopper", "M. Schubiger", "N. Schulte", "S. Schulte", "M. H. Schune", "R. Schwemmer", "G. Schwering", "B. Sciascia", "A. Sciuccati", "S. Sellam", "A. Semennikov", "M. Senghi Soares", "A. Sergi", "N. Serra", "L. Sestini", "A. Seuthe", "Y. Shang", "D. M. Shangase", "M. Shapkin", "I. Shchemerov", "L. Shchutska", "T. Shears", "L. Shekhtman", "Z. Shen", "S. Sheng", "V. Shevchenko", "B. Shi", "E. B. Shields", "Y. Shimizu", "E. Shmanin", "R. Shorkin", "J. D. Shupperd", "B. G. Siddi", "R. Silva Coutinho", "G. Simi", "S. Simone", "M. Singla", "N. Skidmore", "R. Skuza", "T. Skwarnicki", "M. W. Slater", "J. C. Smallwood", "J. G. Smeaton", "E. Smith", "K. Smith", "M. Smith", "A. Snoch", "L. Soares Lavra", "M. D. Sokoloff", "F. J. P. Soler", "A. Solomin", "A. Solovev", "I. Solovyev", "R. Song", "F. L. Souza De Almeida", "B. Souza De Paula", "E. Spadaro Norella", "E. Spedicato", "J. G. Speer", "E. Spiridenkov", "P. Spradlin", "V. Sriskaran", "F. Stagni", "M. Stahl", "S. Stahl", "S. Stanislaus", "E. N. Stein", "O. Steinkamp", "O. Stenyakin", "H. Stevens", "D. Strekalina", "Y. S Su", "F. Suljik", "J. Sun", "L. Sun", "Y. Sun", "P. N. Swallow", "K. Swientek", "A. Szabelski", "T. Szumlak", "M. Szymanski", "Y. Tan", "S. Taneja", "M. D. Tat", "A. Terentev", "F. Teubert", "E. Thomas", "D. J. D. Thompson", "H. Tilquin", "V. Tisserand", "S. T'Jampens", "M. Tobin", "L. Tomassetti", "G. Tonani", "X. Tong", "D. Torres Machado", "L. Toscano", "D. Y. Tou", "C. Trippl", "G. Tuci", "N. Tuning", "A. Ukleja", "D. J. Unverzagt", "A. Usachov", "A. Ustyuzhanin", "U. Uwer", "V. Vagnoni", "A. Valassi", "G. Valenti", "N. Valls Canudas", "M. Van Dijk", "H. Van Hecke", "E. van Herwijnen", "C. B. 
Van Hulse", "M. van Veghel", "R. Vazquez Gomez", "P. Vazquez Regueiro", "C. Vázquez Sierra", "S. Vecchi", "J. J. Velthuis", "M. Veltri", "A. Venkateswaran", "M. Vesterinen", "D. Vieira", "M. Vieites Diaz", "X. Vilasis-Cardona", "E. Vilella Figueras", "A. Villa", "P. Vincent", "F. C. Volle", "D. vom Bruch", "V. Vorobyev", "N. Voropaev", "K. Vos", "C. Vrahas", "J. Walsh", "E. J. Walton", "G. Wan", "C. Wang", "G. Wang", "J. Wang", "J. Wang", "J. Wang", "J. Wang", "M. Wang", "R. Wang", "X. Wang", "Y. Wang", "Z. Wang", "Z. Wang", "Z. Wang", "J. A. Ward", "N. K. Watson", "D. Websdale", "Y. Wei", "B. D. C. Westhenry", "D. J. White", "M. Whitehead", "A. R. Wiederhold", "D. Wiedner", "G. Wilkinson", "M. K. Wilkinson", "I. Williams", "M. Williams", "M. R. J. Williams", "R. Williams", "F. F. Wilson", "W. Wislicki", "M. Witek", "L. Witola", "C. P. Wong", "G. Wormser", "S. A. Wotton", "H. Wu", "J. Wu", "Y. Wu", "K. Wyllie", "Z. Xiang", "Y. Xie", "A. Xu", "J. Xu", "L. Xu", "L. Xu", "M. Xu", "Q. Xu", "Z. Xu", "Z. Xu", "Z. Xu", "D. Yang", "S. Yang", "X. Yang", "Y. Yang", "Z. Yang", "Z. Yang", "V. Yeroshenko", "H. Yeung", "H. Yin", "J. Yu", "X. Yuan", "E. Zaffaroni", "M. Zavertyaev", "M. Zdybal", "M. Zeng", "C. Zhang", "D. Zhang", "J. Zhang", "L. Zhang", "S. Zhang", "S. Zhang", "Y. Zhang", "Y. Zhang", "Y. Zhao", "A. Zharkova", "A. Zhelezov", "Y. Zheng", "T. Zhou", "X. Zhou", "Y. Zhou", "V. Zhovkovska", "X. Zhu", "X. Zhu", "Z. Zhu", "V. Zhukov", "J. Zhuo", "Q. Zou", "S. Zucchelli", "D. Zuliani", "G. Zunica" ]
hep-ex
[ "hep-ex" ]
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN) CERN-EP-2023-096 LHCb-PAPER-2023-002 June 16, 2023 [Authors are listed at the end of this paper.] Correlations of same-sign charged particles are analysed using proton-lead collision data collected by the LHCb experiment at a nucleon-nucleon centre-of-mass energy of 5.02 TeV, corresponding to an integrated luminosity of 1.06 nb^-1. Bose-Einstein correlations are observed in the form of an enhancement of pair production for same-sign charged pions with a small four-momentum difference squared. The dependence of the correlation radius and the intercept parameter on the reconstructed charged-particle multiplicity is investigated. The measured correlation radii scale linearly with the cube root of the reconstructed charged-particle multiplicity, being compatible with predictions of hydrodynamic models on the collision system evolution. Submitted to JHEP. § INTRODUCTION Multiparticle production in the process of hadronization has been investigated for six decades, but its nature is still not fully understood. Hanbury Brown-Twiss (HBT) intensity interferometry <cit.> is the main tool to study the space-time properties of the hadron emission volume. In the case of joint production of identical bosons the HBT interference effect results in Bose-Einstein Correlations (BEC), while in the case of fermions it is referred to as Fermi-Dirac Correlations (FDC). The correlations measure the scales that are referred to as lengths of homogeneity <cit.>, which correspond to a limited region of the particle-emitting source surface. Measurement of correlations of identical particles can provide insight into the evolution of the hadron source. In particular, small systems, such as those produced in proton-ion (pA) collisions, are interesting because their lifetimes are significantly shorter than those in heavy-ion (AA) collisions, thus providing a better probe of the early system dynamics and the initial geometry. Since the first observation of quantum interference effects in identically charged pions produced in proton-antiproton collisions <cit.>, such effects have been investigated by many different experiments, at the Berkeley Bevalac <cit.>, AGS <cit.>, ISR <cit.>, SPS <cit.>, LEP <cit.>, RHIC <cit.>, the LHC <cit.> and others <cit.>. The sizes of the studied systems vary tremendously, from e^+e^- collisions at LEP to AuAu collisions at RHIC and PbPb beams at the LHC. At the LHC alone, the HBT effect has been investigated in proton-proton (pp) <cit.>, proton-lead (pPb) <cit.> and lead-lead (PbPb) <cit.> systems. In this paper, the first study of the BEC effect in pPb and Pbp collisions in the forward rapidity region is presented. The LHCb detector has the potential to measure quantum interference effects in the forward region, and therefore to provide complementary results to those from experiments at the LHC in the central rapidity region. This allows the dependence of the quantum interference effects upon various observables to be studied, and it provides insight into the particle production process in the forward direction, useful for the development of theoretical models. § ANALYSIS METHOD The BEC and FDC effects are the result of quantum statistics, caused by the symmetrization (antisymmetrization) of the wave function describing a system of bosons (fermions). Such correlations are examined by measuring a two-particle correlation function, defined as the ratio of the inclusive density distribution for two particles and the so-called reference density.
The latter is a two-particle density distribution that approximates the distribution without the BEC or FDC effects. The present study is based on the assumption of static, spherically symmetric sources that can be characterized by univariate distributions. This class of sources is commonly used in HBT analyses, since the measured correlation radii in this case can be interpreted as the effective size of the particle-emitting source at the kinetic freeze-out <cit.>. §.§ Correlation function In order to investigate the space-time evolution of the hadronization source, the correlation function is commonly studied using the Lorentz-invariant variable Q <cit.>, which is related to the difference in the four-momenta q_1 and q_2 of two indistinguishable particles of rest mass m, Q ≡ √( -(q_1 - q_2)^2 ) = √(M^2 - 4m^2) . This gives a measure of the phase-space separation of the two-particle system of invariant mass M. A two-particle correlation function C_2 is constructed as the ratio of the Q distributions for signal and reference pairs, C_2(Q) = ( N^ref / N^sig ) ( dN^sig(Q)/dQ ) / ( dN^ref(Q)/dQ ) , where N^sig and N^ref correspond to the number of signal and reference pairs, respectively, obtained from an integral of the relevant Q distributions. Signal pairs are formed from selected same-sign (SS) charged particles that originate from the same collision vertex, where the Bose-Einstein correlations are expected. The reference pairs are pairs of pions which reproduce as closely as possible the kinematics and various effects present in the signal, except for the BEC effect. The correlation function is constructed as a ratio to cancel the effects related to the detection efficiency. There are several methods to obtain a reference sample. It can be constructed using experimental data, or with simulated events incorporating the detector interactions. In this study, a data-driven reference sample is constructed by collecting pairs of particles originating from different collision events (a so-called event-mixing method), where the BEC effect cannot be present. The reference pairs are selected in a similar way to the signal ones to ensure that the signal kinematic distributions are reproduced as closely as possible. Additional requirements are imposed to combine particles originating from different events with similar properties and to further improve the agreement between the signal and reference samples. Particles in the reference pairs are required to originate from primary vertices with a comparable multiplicity of tracks reconstructed in the vertex detector (VELO tracks). Event-mixing candidates for the current event are provided by creating a pool of selected particles from ten different events and splitting them into intervals of multiplicity and the coordinate of the PV along the beam direction, z_PV. Then, for each signal pair in a PV with a given (multiplicity, z_PV) interval, a random particle is chosen from the relevant interval in the pool to create a reference pair with the particle in the signal pair. Candidate particles used to create pairs are grouped into multiplicity bins of width three. The distance between the two PVs associated with the two particles in the same reference pair must be smaller than 10 mm in the z direction to provide consistent detector acceptance effects for both particles. The two-particle correlation function associated with a general class of particle sources can be described by the so-called symmetric Lévy-stable distributions <cit.>.
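As an illustration of how such a correlation function can be assembled in practice, the following Python sketch histograms the Q values of same-event (signal) and event-mixed (reference) pion pairs and takes their normalized ratio. It is a minimal sketch only: the pair lists, the function names and the use of NumPy are assumptions of the example and are not taken from the LHCb analysis software.

import numpy as np

PION_MASS = 0.13957  # GeV, charged-pion rest mass

def q_value(p1, p2, mass=PION_MASS):
    # Lorentz-invariant Q = sqrt(-(q1 - q2)^2) for two tracks given as
    # (px, py, pz) momentum triplets in GeV, both assumed to be pions.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(p1 @ p1 + mass**2)
    e2 = np.sqrt(p2 @ p2 + mass**2)
    dp = p1 - p2
    q2 = (e1 - e2)**2 - dp @ dp      # (q1 - q2)^2 with metric (+,-,-,-)
    return np.sqrt(max(-q2, 0.0))

def correlation_function(signal_pairs, reference_pairs,
                         q_min=0.05, q_max=2.0, bin_width=0.005):
    # C2(Q) as the ratio of the signal (same-sign, same-event) and reference
    # (event-mixed) Q distributions, normalized by the numbers of pairs.
    edges = np.arange(q_min, q_max + bin_width, bin_width)
    n_sig, _ = np.histogram([q_value(a, b) for a, b in signal_pairs], bins=edges)
    n_ref, _ = np.histogram([q_value(a, b) for a, b in reference_pairs], bins=edges)
    ratio = np.divide(n_sig, n_ref, out=np.zeros(len(n_sig)), where=n_ref > 0)
    c2 = (len(reference_pairs) / len(signal_pairs)) * ratio
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, c2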
In the case of static, univariate sources, the Lévy-type correlation function is expressed as C_2(Q) = 1 + λ e^(-|RQ|^α_L) , where R denotes the correlation radius, λ the intercept parameter, and α_L is a parameter that can take values in the range 0 < α_L < 2 and is referred to as a Lévy index of stability. Frequently, to enable comparison of the correlation parameters between experiments and between different collision systems, including a measurement by LHCb in pp collisions <cit.>, α_L is fixed to one, leading to the simplified expression: C_2(Q) = 1 + λ e^(-|RQ|) . This parametrization enables the measured correlation radius to be interpreted as the effective size of the particle-emitting source. §.§ Final state interactions and nonfemtoscopic effects Final state interactions (FSI) resulting from the strong and electromagnetic forces can affect the observed two-particle correlations. The effects of the strong interaction in the case of pions are relatively small <cit.> and are usually neglected in BEC studies. The most notable effect is Coulomb repulsion related to the same-sign electric charge of the studied particles, especially in the low-Q region. A general expression for the Coulomb interaction term for point-like sources <cit.>, K(Q), is equivalent to the so-called Gamov factor <cit.>, K_Gamov^SS(ζ) = 2πζ / ( e^(2πζ) - 1 ) , K_Gamov^OS(ζ) = 2πζ / ( 1 - e^(-2πζ) ) , where ζ = αm/Q, α is the fine-structure constant and m is the particle's rest mass. For SS particles, a repulsive interaction leads to a decrease in the correlation function, which is most prominent for low Q-values. In the case of opposite-sign (OS) pairs, this effect is reversed and an enhancement is observed. The OS sample is useful to parametrize the background related to the cluster contribution, as explained later. In the present analysis a parametrization developed by the CMS experiment <cit.>, valid for Lévy-type sources with α_L equal to unity, is used to account for the final-state Coulomb interactions between the particles in the SS and OS pairs, K(Q) = K_Gamov(Q) ( 1 + απ m R_eff / ( 1.26 + Q R_eff ) ) , where R_eff corresponds to the effective size of the particle-emitting source and is provided in femtometres. The additional term with R_eff represents a correction to the Gamov factor that enables a more precise characterization of the Coulomb interaction for extended sources. The correlation function shape is distorted by the presence of various nonfemtoscopic[Femtoscopic effects are those observed at the Fermi scale.] effects. There is no strict, theory-motivated description of such contributions, and different strategies can be applied to take them into account in the analysis (see Sec. 5). Long-range correlations, being one of the nonfemtoscopic effects related mostly to energy-momentum conservation, are present in the full Q range, but are most prominent at high-Q values (≳ 1 GeV)[If not indicated otherwise natural units with c = 1 are used.], far from the BEC-signal region. Although different parametrizations can be employed, a simple term linear in Q is usually optimal to characterize this contribution and is therefore commonly used <cit.>. The cluster contribution <cit.> is another prominent component of the nonfemtoscopic background, related to the effects of particles emitted inside low-momentum mini-jets and multibody decays of resonances. It is difficult to correct for this contribution, as it is present dominantly in the range Q < 0.5–1.0 GeV, which overlaps with the BEC signal.
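The Gamov factors for same-sign and opposite-sign pion pairs quoted above translate directly into code; the short sketch below evaluates them as a function of Q. The constant values and function names are assumptions of the example; the additional extended-source term with R_eff would multiply the point-like factor as in the expression above.

import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant
M_PION = 0.13957          # charged-pion rest mass in GeV

def gamov_same_sign(q):
    # Point-like-source Coulomb (Gamov) factor for same-sign pion pairs,
    # K(zeta) = 2*pi*zeta / (exp(2*pi*zeta) - 1) with zeta = alpha*m/Q (Q in GeV).
    zeta = ALPHA * M_PION / np.asarray(q, dtype=float)
    x = 2.0 * np.pi * zeta
    return x / np.expm1(x)          # suppression: K < 1, strongest at low Q

def gamov_opposite_sign(q):
    # Gamov factor for opposite-sign pairs, K(zeta) = 2*pi*zeta / (1 - exp(-2*pi*zeta)).
    zeta = ALPHA * M_PION / np.asarray(q, dtype=float)
    x = 2.0 * np.pi * zeta
    return x / (-np.expm1(-x))      # enhancement: K > 1, strongest at low Q

# Example: at Q = 0.05 GeV the same-sign factor gives a suppression of a few percent.
print(gamov_same_sign(0.05), gamov_opposite_sign(0.05))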
Constructing a correlation function for OS pairs can be particularly useful in the background studies, since similar effects can be expected for both the SS and OS pairs. Special care must be taken when investigating OS pairs, due to structures related to two-body decays of resonances arising in the correlation function. Nonfemtoscopic background effects in the present analysis (in particular the cluster contribution) are studied and parametrized using the OS correlation functions. A cluster subtraction (CS) method, which was developed by the CMS experiment <cit.>, is employed for this purpose. This technique represents a fully data-driven approach. In the CS method, the shape parameters of the chosen function for the background description (e.g. the width of a Gaussian distribution) are determined from the OS fits. The background shape parameters in the SS fits are fixed to the values determined by studying the OS pairs, with an additional scaling parameter that is introduced to account for the different amplitudes of the cluster contribution in the SS and OS correlation functions. §.§ Fitting method The correlation function, including electromagnetic effects and the nonfemtoscopic background, is parameterized using the Bowler–Sinyukov formalism <cit.>, as C_2(Q) = N [ 1 - λ + λ K(Q) ( 1 + e^(-|RQ|) ) ] × Ω(Q) , where N is a normalization factor and Ω(Q) is a general term for the background contribution, as described later. The intercept parameter, λ, corresponds to the extrapolated value of the correlation function at Q = 0 GeV <cit.>. This observable can be interpreted within the core-halo model <cit.>, which assumes that the particle emission can take place in a central core or in an extended halo originating from long-lived resonance decays. Contents of the bins in both the signal and reference Q-variable histograms are Poisson-distributed, and hence a negative log-likelihood fit method is preferable for the BEC studies <cit.>. In this approach, the following expression is minimized in the fitting procedure: -2 ln L = 2 ∑_i { A_i ln[ (1 + C_2i) A_i / ( C_2i (A_i + B_i + 2) ) ] + (B_i + 2) ln[ (1 + C_2i)(B_i + 2) / ( A_i + B_i + 2 ) ] } , where A_i and B_i are the bin contents of the signal and reference histograms and C_2i corresponds to the fitted value of the correlation function at the Q-bin centre. § DETECTOR AND DATASET The LHCb detector <cit.> is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the interaction region <cit.>, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes <cit.> placed downstream of the magnet. The tracking system provides a measurement of the momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV. The minimum distance of a track to a primary collision vertex (PV), the impact parameter, is measured with a resolution of (15 + 29/p_T) μm, where p_T is the component of the momentum transverse to the beam, in GeV. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors <cit.>. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter.
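To make the fitting ingredients concrete, the sketch below codes the Bowler–Sinyukov form and the binned negative log-likelihood in the form printed above. It is illustrative only: the parameter ordering, the treatment of the radius in natural units (R in fm divided by ħc ≈ 0.1973 GeV·fm so that RQ is dimensionless) and the function names are assumptions of the example, and the Coulomb term K(Q) and background Ω(Q) are passed in as callables defined elsewhere.

import numpy as np

HBARC = 0.1973  # GeV*fm, used to form the dimensionless product R*Q

def c2_bowler_sinyukov(q, norm, lam, r_fm, k_coulomb, omega):
    # C2(Q) = N * [1 - lambda + lambda*K(Q)*(1 + exp(-|R*Q|))] * Omega(Q),
    # with Q in GeV and the correlation radius R given in fm.
    r_q = (r_fm / HBARC) * np.asarray(q, dtype=float)
    core = 1.0 - lam + lam * k_coulomb(q) * (1.0 + np.exp(-np.abs(r_q)))
    return norm * core * omega(q)

def neg2_log_likelihood(c2_fitted, a_i, b_i):
    # -2 ln L for signal (A_i) and reference (B_i) bin contents and the fitted
    # correlation-function values C_2i, following the expression given in the text.
    a_i, b_i, c2 = (np.asarray(x, dtype=float) for x in (a_i, b_i, c2_fitted))
    term_a = a_i * np.log((1.0 + c2) * a_i / (c2 * (a_i + b_i + 2.0)))
    term_b = (b_i + 2.0) * np.log((1.0 + c2) * (b_i + 2.0) / (a_i + b_i + 2.0))
    return 2.0 * np.sum(term_a + term_b)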
Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers <cit.>. The trigger <cit.> consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. In the present analysis, a dataset of minimum-bias triggered events collected in proton-lead collisions recorded in 2013 at a nucleon-nucleon centre-of-mass energy √s_NN = 5.02 TeV is used, with proton and per-nucleon lead beam energies of 4 TeV and 1.58 TeV, respectively. Two collision modes were used in this data-taking period, with the beam directions reversed, which permits the study of collisions both in the forward (pPb) and backward (Pbp) rapidity regions. The recorded pPb and Pbp samples correspond to integrated luminosities of 1.06 nb^-1 and 0.52 nb^-1, respectively. As only a fraction of ∼ 10^-5 of the collisions corresponds to multiple interactions, a dedicated selection requirement is applied to accept only the events with a single primary vertex. The data samples available in the current study after the selection described in Sec. <ref> correspond to ∼ 6.3 × 10^7 (5.7 × 10^7) events for the pPb (Pbp) collisions. Simulation samples corresponding to the 2013 data-taking conditions are produced using the event generator of Ref. <cit.>, with a specific LHCb configuration <cit.>. Decays of hadronic particles are described by EvtGen <cit.>, in which final-state radiation is generated using Photos <cit.>. The interaction of the generated particles with the detector and its response are implemented using the Geant4 toolkit <cit.>, as described in Ref. <cit.>. The BEC effect is not activated in the simulation. Each of the pPb and Pbp simulated datasets contains ∼ 1.2 × 10^7 events after selection, with the number of interactions per bunch crossing fixed to unity. The simulation samples are used mainly for the event-selection optimization, while the background modelling is performed with a purely data-driven approach. § EVENT SELECTION The data samples are divided into bins of the multiplicity of VELO tracks assigned to a PV (N_VELO), which is used as a proxy variable to describe the total charged-particle multiplicity produced at the PV. The division is optimized to obtain a high number of bins with enough entries to perform the measurement (see appendix <ref>). The chosen binning scheme is presented in Table <ref>, together with approximate fractions of the respective sample corresponding to the given bin. Event selections are first applied to single-pion candidates. All pion candidates must have reconstructed track segments in both the VELO detector and the tracking stations downstream of the magnet, have no matching tracks in the muon stations, and be in the pseudorapidity range 2 < η < 5. Each track must have a good track-fit quality. To suppress the contribution from secondary pions (those not associated with a PV), the impact parameter is required to be less than 0.4 mm. Furthermore, the PV is required to be located within -160 < z_PV < 60 mm. The particle identification (PID) is based on the output of a neural network employing subdetector information that quantifies the probability ProbNN for a particle to be of a certain kind <cit.>. The simulated quantities are corrected using PID calibration samples in data <cit.>. Effects of PID correlations between particles are considered. It is important to ensure a sample with high purity, but a strict requirement on the ProbNN(π) variable may also strongly affect the signal region of the correlation function, by suppressing low-momentum pions that contribute to the BEC effect.
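The N_VELO bins are chosen so that they contain comparable signal yields (see also the appendix); one simple way to sketch such a division is with quantiles of the per-pair multiplicity distribution, as in the Python fragment below. The number of bins and the helper names are assumptions of the example, not the binning actually used in the analysis, which is the one given in the Table referred to above.

import numpy as np

def similar_yield_bin_edges(n_velo_per_signal_pair, n_bins=20):
    # Return integer N_VELO bin edges such that each bin holds a roughly equal
    # number of signal pairs, using quantiles of the per-pair N_VELO values.
    values = np.asarray(n_velo_per_signal_pair)
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    edges = np.quantile(values, quantiles)
    # Round to integers and drop duplicates so the edges are strictly increasing.
    return np.unique(np.round(edges).astype(int))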
The nominal requirement of ProbNN(π) > 0.65 is imposed to make this analysis consistent with the previous analysis for pp collisions <cit.>. Varying this requirement from 0.5 to 0.8 shows no significant changes in the measured correlation function. Contamination from incorrectly reconstructed particles can influence the measured Bose-Einstein correlation effect. Cloned tracks, being multiple tracks reconstructed from hits that were deposited by a single charged particle, are especially detrimental as they are present mostly in the low-Q region (Q < 1.0 GeV), where the BEC signal is expected, appearing as pairs of almost identical, seemingly correlated, particles. To control this effect, the slopes of the tracks are studied. Cloned tracks usually share a very similar trajectory, hence the differences between the relevant slopes of the two particles in a pair, in both the x-z and y-z projections, tend to be small. A requirement is imposed to limit this contribution: if the magnitudes of both slope differences are smaller than 0.3 × 10^-3, the pair is discarded. After applying these requirements, the effect of the clone particles is found to be negligible in the region Q > 0.05 GeV. In order to further reduce the contamination from cloned tracks and fake tracks (which do not correspond to any particle trajectory, but are reconstructed from a number of unrelated hits), in the case where the tracks share all the same hits deposited in the VELO subdetector, only the track with the best χ^2/ndf is retained. The study of the correlations is limited to the Q range from 0.05 to 2.0 GeV. In the region with very low Q (< 0.05 GeV) the separation in momentum between the particles is poor and the discrepancy between simulation and data grows as Q vanishes. Furthermore, investigations using simulation indicate that there is a significant fraction of pion pairs containing fake tracks and cloned tracks in the region Q < 0.05 GeV for all multiplicity bins. § FITTING CORRELATION FUNCTIONS Correlation functions for both the SS and OS pairs are constructed for Q values between 0.05 and 2.0 GeV with a bin width of 0.005 GeV. This particular choice enhances consistency with the study performed for pp collisions <cit.> and allows a direct comparison of the results of the two analyses. The correlation function for the SS pairs is studied by fitting the Q spectrum using Eq. <ref>. The effective radius R_eff in Eq. <ref> is set to 2 fm, based on the expected correlation radii. The description of nonfemtoscopic background effects is found using the correlation function for the OS pairs. The resulting contribution is then scaled and fixed in the final fits to the SS correlation functions, as explained in detail below. The presence of structures related to intermediate resonance states in the OS correlation functions degrades the quality of the fit from which the nonfemtoscopic background parameters are determined. Therefore, the affected regions are removed from the fit to the correlation function. The boundaries of the removed regions are optimized to provide a good quality of fits to the correlation function for OS pairs, and the choice of particular boundary values is accounted for in the study of the systematic uncertainties. It is worth noting that the impact of resonances is most prominent in the bins with low N_VELO values. The observed structures quickly diminish with increasing multiplicity due to a prevailing contribution from pairs of unrelated particles.
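The clone-pair veto described above amounts to comparing the track slopes of the two particles in a pair; a minimal sketch is given below. The track attributes tx and ty (slopes in the x-z and y-z projections) are an assumed interface for the example, not the names used in the LHCb software.

def is_clone_like_pair(track1, track2, tolerance=0.3e-3):
    # Flag a pair as clone-like (to be discarded) if the magnitudes of both
    # slope differences are below the tolerance of 0.3e-3 quoted in the text.
    return (abs(track1.tx - track2.tx) < tolerance and
            abs(track1.ty - track2.ty) < tolerance)

# Usage: keep only pairs that are not clone-like.
# selected_pairs = [(t1, t2) for (t1, t2) in candidate_pairs
#                   if not is_clone_like_pair(t1, t2)]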
As has already been mentioned, the OS correlation functions contain similar effects to the SS ones (apart from the BEC signal) and can be used to investigate the nonfemtoscopic background contribution. A satisfactory description of the data is found using a Gaussian parametrization for the cluster contribution <cit.>, Ω(Q) = ( 1 + δQ ) × [ 1 + z A_cl / ( σ_cl √(2π) ) exp( -Q^2 / (2 σ_cl^2) ) ] , where the term linear in Q corresponds to the long-range correlations and the z parameter (fixed to unity in the OS fits) is a factor used for the background scaling between OS and SS pairs. The width σ_cl and amplitude A_cl are multiplicity-dependent values that characterize the cluster contribution and are parametrized as <cit.> σ_cl(N_VELO) = σ_0 + σ_1 exp( -N_VELO / N_0 ) , A_cl(N_VELO) = A_0 / (N_VELO)^{A_1} . The fits to the correlation functions for OS pairs are performed simultaneously in all multiplicity bins available in the given sample (separately for the pPb and Pbp datasets). In this procedure, the parameters from Eqs. <ref> and <ref> are common for all bins, while the N and δ values (Eqs. <ref> and <ref> respectively) are left free for each correlation function. A negative log-likelihood function (see Eq. <ref>) is constructed for all the N_VELO bins in the given dataset and minimized globally to obtain the best description of the data. The lower Q fit range for the OS pairs is limited with respect to the SS one, due to a significant contribution of multibody resonance decays in the very low Q region <cit.>. The global fits are performed for Q > 0.25 GeV. Also, as it is found that the best stability of the global fits is obtained with a fixed value of the N_0 parameter from Eq. <ref>, its value is set to 15 based on the fit results for the entire dataset obtained with this parameter left free. Results of the global fits to the OS correlation functions for the pPb and Pbp data are shown in Fig. <ref> and summarized in Table <ref>. The quality of the fits is evaluated through the normalized Baker-Cousins likelihood ratio corresponding to the final value of the function minimized in the fitting procedure (see Eq. <ref>), divided by the number of degrees of freedom in the fit (∼2 for both pPb and Pbp). It is worth noting that the fit quality in BEC studies is not expected to be perfect. Due to the ad hoc descriptions of the unknown nonfemtoscopic background contribution as well as the compromise between the fit quality and interpretability of the measured correlation parameters, the obtained χ^2/ndf values are often larger than unity. The cluster contribution is expected to be larger for the OS pairs than for the SS ones due to charge conservation in the processes contributing to the cluster formation. For this reason, the amplitude of the cluster contribution for the SS correlation functions is multiplied by the scaling factor z (see Eq. <ref>). To obtain a uniform background scaling across the N_VELO bins, this value is parametrized using a theoretically motivated form based on the ratio of SS and OS pair combinatorics <cit.>, z(N_VELO) = ( a + b N_VELO ) / ( 1 + a + b N_VELO ) , where a and b are parameters that vary freely in the fit. Results of fits using Eq. <ref> to determine the parametrization of the background scaling between the correlation functions for OS and SS pairs give a = 0.044 ± 0.004 (0.075 ± 0.007) and b = 1.86 ± 0.12 (3.12 ± 0.27) for pPb (Pbp). § SYSTEMATIC UNCERTAINTIES Several sources of systematic uncertainty are studied.
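Before turning to the individual sources of systematic uncertainty, the sketch below illustrates the cluster-background term and the OS-to-SS scaling factor in the form reconstructed above. The shape-parameter names and all default numerical values, except the quoted pPb scaling constants a and b, are placeholders invented for the example and are not results of the analysis.

import numpy as np

def omega_background(q, delta, n_velo, z=1.0,
                     sigma_pars=(0.05, 0.25), amp_pars=(0.2, 0.5), n_0=15.0):
    # Omega(Q) = (1 + delta*Q) * [1 + z*A/(sigma*sqrt(2*pi)) * exp(-Q^2/(2*sigma^2))],
    # with a multiplicity-dependent cluster width and amplitude:
    # sigma = s0 + s1*exp(-N/N_0) and A = A0 / N**A1 (placeholder parameter values).
    s0, s1 = sigma_pars
    a0, a1 = amp_pars
    sigma = s0 + s1 * np.exp(-n_velo / n_0)
    amp = a0 / n_velo**a1
    gauss = z * amp / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-np.asarray(q)**2 / (2.0 * sigma**2))
    return (1.0 + delta * np.asarray(q)) * (1.0 + gauss)

def z_scaling(n_velo, a=0.044, b=1.86):
    # OS-to-SS scaling z(N) = (a + b*N)/(1 + a + b*N); the defaults are the
    # pPb values quoted in the text.
    return (a + b * n_velo) / (1.0 + a + b * n_velo)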
The values determined for each of the sources are summarized in Table <ref>, where each input is assessed by taking the difference between the refitted correlation parameters and the baseline results, excluding the sources that proved to be negligible. The general approach to determine the systematic uncertainty is to repeat the analysis procedure with appropriate modifications introduced to evaluate the contribution in question. Some N_VELO bins display an outlying uncertainty of the correlation parameters, which is not representative of the other regions; nevertheless, they are shown in the final results. The leading source of systematic uncertainty is due to the parametrization of the nonfemtoscopic background in the correlation function. It contains the effect related to the removal of the structures induced by two-body resonance decays from the fits to the OS correlation function. The impact of the particular choice of those limits is investigated by repeating the analysis with the widths of the defined regions increased and decreased by 20%. This value was already optimized in similar analyses performed by other experiments (Ref. <cit.>). Another effect in the determination of the cluster contribution is related to the choice of the range of the correlation function fits to the OS data, which is studied by varying its values within 10%, leading to a similar range variation as for the SS fit. The impact of the N_0 value on the final correlation parameters is investigated by varying this value within ∼30%, i.e. from 10 to 20. The chosen variation represents a conservative approach, as the systematic uncertainty related to the N_0 value is minor with respect to other sources. The scaling of the cluster contribution amplitude between the OS and SS pairs is found to be the dominant contribution to the systematic uncertainty, reaching up to 9% (11%) for the R (λ) parameter. This was investigated by shifting the nominal parametrizations of the background scaling (see Eq. <ref>) determined for the central results by ±0.15 before using them in the final SS fits, and assessing the influence of this procedure on the measured correlation parameters. Those values are chosen to encompass most of the individual results obtained with the z parameter left free in fits to the SS correlation function. Systematic uncertainties related to the selection criteria include a contribution from the pion identification, based on the ProbNN(π) variable, which is evaluated by changing the requirement such that the number of misidentified pions in the sample increases by ∼50% with respect to the final selection. Another contribution is related to the misreconstructed tracks, which may degrade the purity of the selected pion sample and affect the final results. The misreconstructed tracks (mostly the clone ones) that could directly contribute to the SS pairs in the BEC-signal region are well controlled in the data (see Sec. <ref>), so no uncertainty is assigned to the clone tracks. A dedicated study is performed to evaluate the impact of additional fake tracks in the sample by modifying the selection requirement on the probability for a particle to be a fake track from 0.25 to 0.50 (which corresponds to the maximum value available in the dataset after the preselection). The fractions of fake tracks in the selected pion sample and of signal pairs containing a fake track (values determined using the simulation) for those two criteria are ≤1% (see Table <ref>).
The contributions of the fake tracks and the pion selection criteria optimization to the systematic uncertainty are calculated as the absolute difference between the results obtained with the modified selection requirements and the central ones. In a limited number of bins, those inputs constitute the most important contributions to the total systematic uncertainty, together with the one related to the background scaling. Final-state Coulomb interactions for both the SS and OS pairs are taken into account in the fits to the correlation functions. A simple proportionality between the effective source size R_eff and the correlation radius R is assumed <cit.> and the ϵ values are varied between 0.5 and 2.0. This leads to final values of R_eff corresponding to 0.5 fm and 8.0 fm, which are used to evaluate the systematic uncertainty related to the correction for Coulomb interactions. The range and binning used in the fit to the correlation functions for SS pairs can affect the final results. The impact from the boundary in the low-Q region is evaluated by altering it within 20%, which corresponds to the values of 0.04 and 0.06 GeV. A similar procedure is implemented for the fit boundary at high-Q values, where a modification at the level of 10% is applied, leading to the upper fit range being limited to 1.8 GeV and extended to 2.2 GeV. The smaller relative variation in the case of the upper fit range is motivated by the need to stay within the range where the parametrization used correctly describes the effect related to the long-range correlations. Both contributions to the systematic uncertainty associated with the SS fit range are found to be relatively small. The systematic uncertainty related to the binning of the Q variable in the correlation function is determined by doubling the bin width from the nominal 0.005 GeV to 0.010 GeV. The impact of this modification on the measured correlation parameters is negligible. The construction of the reference sample is one of the basic aspects of BEC analyses. The potential impact of the event-mixing implementation on the correlation parameters is assessed by varying the number of candidates available for the mixing, which is a parameter that can be tuned in the procedure. The nominal value of the number of candidates, equal to 10, is changed to 50 and 100, and the analysis is repeated using the updated settings. The final contribution from the event-mixing to the systematic uncertainty is found to be small (see Table <ref>). § RESULTS The correlation parameters are determined by performing fits to the SS correlation functions in each individual N_VELO bin using Eq. <ref>. In this procedure, the parameters characterising the cluster contribution and the background scaling are fixed to the values measured in the previous steps of the analysis. The fits are performed in the full range of the Q variable (0.05-2.00 GeV) in the constructed correlation functions. Example results of the final fits to the SS correlation functions are presented in Fig. <ref>. Correlation parameters determined from fits to the SS correlation functions using Eq. <ref> in the N_VELO bins for the pPb and Pbp datasets are presented in Table <ref>. The fit quality in BEC studies is not expected to be ideal due to various assumptions in the signal parametrization and the unknown theoretical parameterization of the nonfemtoscopic background effects. The results are complementary to the measurements performed at LHC energies in central rapidity regions <cit.>. The measured behaviour of the correlation parameters is compatible with observations from other experiments at the LHC.
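Most of the systematic contributions above are evaluated by refitting under a modified configuration and comparing with the baseline result; a minimal sketch of that bookkeeping follows. Taking the largest difference over the variations of one source is an assumption of the example, and the numbers shown are purely illustrative.

def systematic_from_variations(baseline, varied_results):
    # Per-source systematic uncertainty taken as the absolute difference
    # between refitted and baseline parameter values (largest variation kept).
    return max(abs(value - baseline) for value in varied_results)

# Illustrative numbers only: R refitted with the low-Q fit boundary at 0.04 and 0.06 GeV.
r_baseline = 2.31                     # fm
r_varied = [2.27, 2.36]               # fm
sigma_r_low_q_boundary = systematic_from_variations(r_baseline, r_varied)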
In general, the correlation radius becomes larger with increasing event multiplicity, while the intercept parameter displays the opposite behaviour. The determined R (λ) parameters vary within 1-4 fm (∼0.40-0.85) depending on the N_VELO interval. Correlation parameters determined in the BEC studies for the pp <cit.>, pPb and Pbp collisions at LHCb are illustrated in Figs. <ref> and <ref>. As observed in Fig. <ref>, the measured correlation radii scale linearly with the cube root of the reconstructed charged-particle multiplicity. A simple fit illustrating this relationship is performed for the different datasets (pp, pPb and Pbp). Only the statistical uncertainties of the measured R values are taken into account in this fit. Similar scaling was also reported by other experiments at the LHC for various collision systems <cit.>. It is a tendency compatible with predictions of hydrodynamic models on the system evolution <cit.>. Although the results in both the pPb and Pbp samples agree well within the systematic uncertainties, it may be observed that the central R values in one of the samples tend to be systematically higher than in the other, the difference becoming more prominent with increasing multiplicity. § CONCLUSIONS Bose-Einstein correlations in pairs of same-sign charged pions in pPb and Pbp collisions at LHCb are measured using a data-driven analysis method to account for effects related to the nonfemtoscopic background. The correlation parameters are determined in common intervals of VELO-track multiplicity. This measurement is the first of this type performed in the forward rapidity region at LHC energies. The correlation radius (the intercept parameter) increases (decreases) with the charged-particle multiplicity. This trend is consistent with observations in the central rapidity region by other experiments at the LHC <cit.>. The measured correlation radii scale linearly with the cube root of the charged-particle multiplicity, which is compatible with predictions based on hydrodynamic models <cit.>. § APPENDIX. DISTRIBUTIONS OF THE VELO-TRACK MULTIPLICITY The data samples are divided into bins of the VELO-track multiplicity N_VELO, which is used as a proxy for the total charged-particle multiplicity. The division is optimized to obtain a high number of bins with enough entries to perform the measurement. This procedure is based on the N_VELO distribution for the signal pairs, which is shown in Fig. <ref> and allows bins with similar signal yields to be selected for the final analysis. § ACKNOWLEDGEMENTS We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MICINN (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), Polish WLCG (Poland) and NERSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend.
Individual groups or members have received support from ARC and ARDC (Australia); Minciencias (Colombia); AvH Foundation (Germany); EPLANET, Marie Skłodowska-Curie Actions, ERC and NextGenerationEU (European Union); A*MIDEX, ANR, IPhU and Labex P2IO, and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, CAS CCEPP, Fundamental Research Funds for the Central Universities, and Sci. & Tech. Program of Guangzhou (China); GVA, XuntaGal, GENCAT, Inditex, InTalent and Prog. Atracción Talento, CM (Spain); SRC (Sweden); the Leverhulme Trust, the Royal Society and UKRI (United Kingdom). tocsectionReferences LHCb LHCb collaboration R. Aaij^320000-0003-0533-1952, A.S.W. Abdelmotteleb^510000-0001-7905-0542, C. Abellan Beteta^45, F. Abudinén^510000-0002-6737-3528, T. Ackernley^550000-0002-5951-3498, B. Adeva^410000-0001-9756-3712, M. Adinolfi^490000-0002-1326-1264, P. Adlarson^770000-0001-6280-3851, H. Afsharnia^9, C. Agapopoulou^430000-0002-2368-0147, C.A. Aidala^780000-0001-9540-4988, Z. Ajaltouni^9, S. Akar^600000-0003-0288-9694, K. Akiba^320000-0002-6736-471X, P. Albicocco^230000-0001-6430-1038, J. Albrecht^150000-0001-8636-1621, F. Alessio^430000-0001-5317-1098, M. Alexander^540000-0002-8148-2392, A. Alfonso Albero^400000-0001-6025-0675, Z. Aliouche^570000-0003-0897-4160, P. Alvarez Cartelle^500000-0003-1652-2834, R. Amalric^130000-0003-4595-2729, S. Amato^20000-0002-3277-0662, J.L. Amey^490000-0002-2597-3808, Y. Amhis^11,430000-0003-4282-1512, L. An^50000-0002-3274-5627, L. Anderlini^220000-0001-6808-2418, M. Andersson^450000-0003-3594-9163, A. Andreianov^380000-0002-6273-0506, M. Andreotti^210000-0003-2918-1311, D. Andreou^630000-0001-6288-0558, D. Ao^60000-0003-1647-4238, F. Archilli^31,t0000-0002-1779-6813, A. Artamonov^380000-0002-2785-2233, M. Artuso^630000-0002-5991-7273, E. Aslanides^100000-0003-3286-683X, M. Atzeni^450000-0002-3208-3336, B. Audurier^120000-0001-9090-4254, I.B Bachiller Perea^80000-0002-3721-4876, S. Bachmann^170000-0002-1186-3894, M. Bachmayer^440000-0001-5996-2747, J.J. Back^510000-0001-7791-4490, A. Bailly-reyre^13, P. Baladron Rodriguez^410000-0003-4240-2094, V. Balagura^120000-0002-1611-7188, W. Baldini^21,430000-0001-7658-8777, J. Baptista de Souza Leite^10000-0002-4442-5372, M. Barbetti^22,j0000-0002-6704-6914, R.J. Barlow^570000-0002-8295-8612, S. Barsuk^110000-0002-0898-6551, W. Barter^530000-0002-9264-4799, M. Bartolini^500000-0002-8479-5802, F. Baryshnikov^380000-0002-6418-6428, J.M. Basels^140000-0001-5860-8770, G. Bassi^29,q0000-0002-2145-3805, B. Batsukh^40000-0003-1020-2549, A. Battig^150009-0001-6252-960X, A. Bay^440000-0002-4862-9399, A. Beck^510000-0003-4872-1213, M. Becker^150000-0002-7972-8760, F. Bedeschi^290000-0002-8315-2119, I.B. Bediaga^10000-0001-7806-5283, A. Beiter^63, S. Belin^410000-0001-7154-1304, V. Bellee^450000-0001-5314-0953, K. Belous^380000-0003-0014-2589, I. Belov^380000-0003-1699-9202, I. Belyaev^380000-0002-7458-7030, G. Benane^100000-0002-8176-8315, G. Bencivenni^230000-0002-5107-0610, E. Ben-Haim^130000-0002-9510-8414, A. Berezhnoy^380000-0002-4431-7582, R. Bernet^450000-0002-4856-8063, S. Bernet Andres^390000-0002-4515-7541, D. Berninghoff^17, H.C. Bernstein^63, C. Bertella^570000-0002-3160-147X, A. Bertolin^280000-0003-1393-4315, C. Betancourt^450000-0001-9886-7427, F. Betti^430000-0002-2395-235X, Ia. Bezshyiko^450000-0002-4315-6414, J. Bhom^350000-0002-9709-903X, L. Bian^690000-0001-5209-5097, M.S. Bieker^150000-0001-7113-7862, N.V. Biesuz^210000-0003-3004-0946, P. 
Billoir^130000-0001-5433-9876, A. Biolchini^320000-0001-6064-9993, M. Birch^560000-0001-9157-4461, F.C.R. Bishop^500000-0002-0023-3897, A. Bitadze^570000-0001-7979-1092, A. Bizzeti^0000-0001-5729-5530, M.P. Blago^500000-0001-7542-2388, T. Blake^510000-0002-0259-5891, F. Blanc^440000-0001-5775-3132, J.E. Blank^150000-0002-6546-5605, S. Blusk^630000-0001-9170-684X, D. Bobulska^540000-0002-3003-9980, V.B Bocharnikov^380000-0003-1048-7732, J.A. Boelhauve^150000-0002-3543-9959, O. Boente Garcia^120000-0003-0261-8085, T. Boettcher^600000-0002-2439-9955, A. Boldyrev^380000-0002-7872-6819, C.S. Bolognani^750000-0003-3752-6789, R. Bolzonella^21,i0000-0002-0055-0577, N. Bondar^380000-0003-2714-9879, F. Borgato^280000-0002-3149-6710, S. Borghi^570000-0001-5135-1511, M. Borsato^170000-0001-5760-2924, J.T. Borsuk^350000-0002-9065-9030, S.A. Bouchiba^440000-0002-0044-6470, T.J.V. Bowcock^550000-0002-3505-6915, A. Boyer^430000-0002-9909-0186, C. Bozzi^210000-0001-6782-3982, M.J. Bradley^56, S. Braun^610000-0002-4489-1314, A. Brea Rodriguez^410000-0001-5650-445X, N. Breer^150000-0003-0307-3662, J. Brodzicka^350000-0002-8556-0597, A. Brossa Gonzalo^410000-0002-4442-1048, J. Brown^550000-0001-9846-9672, D. Brundu^270000-0003-4457-5896, A. Buonaura^450000-0003-4907-6463, L. Buonincontri^280000-0002-1480-454X, A.T. Burke^570000-0003-0243-0517, C. Burr^430000-0002-5155-1094, A. Bursche^67, A. Butkevich^380000-0001-9542-1411, J.S. Butter^320000-0002-1816-536X, J. Buytaert^430000-0002-7958-6790, W. Byczynski^430009-0008-0187-3395, S. Cadeddu^270000-0002-7763-500X, H. Cai^69, R. Calabrese^21,i0000-0002-1354-5400, L. Calefice^150000-0001-6401-1583, S. Cali^230000-0001-9056-0711, M. Calvi^26,m0000-0002-8797-1357, M. Calvo Gomez^390000-0001-5588-1448, P. Campana^230000-0001-8233-1951, D.H. Campora Perez^750000-0001-8998-9975, A.F. Campoverde Quezada^60000-0003-1968-1216, S. Capelli^26,m0000-0002-8444-4498, L. Capriotti^210000-0003-4899-0587, A. Carbone^20,g0000-0002-7045-2243, R. Cardinale^24,k0000-0002-7835-7638, A. Cardini^270000-0002-6649-0298, P. Carniti^26,m0000-0002-7820-2732, L. Carus^14, A. Casais Vidal^410000-0003-0469-2588, R. Caspary^170000-0002-1449-1619, G. Casse^550000-0002-8516-237X, M. Cattaneo^430000-0001-7707-169X, G. Cavallero^210000-0002-8342-7047, V. Cavallini^21,i0000-0001-7601-129X, S. Celani^440000-0003-4715-7622, J. Cerasoli^100000-0001-9777-881X, D. Cervenkov^580000-0002-1865-741X, A.J. Chadwick^550000-0003-3537-9404, I. Chahrour^780000-0002-1472-0987, M.G. Chapman^49, M. Charles^130000-0003-4795-498X, Ph. Charpentier^430000-0001-9295-8635, C.A. Chavez Barajas^550000-0002-4602-8661, M. Chefdeville^80000-0002-6553-6493, C. Chen^100000-0002-3400-5489, S. Chen^40000-0002-8647-1828, A. Chernov^350000-0003-0232-6808, S. Chernyshenko^470000-0002-2546-6080, V. Chobanova^41,w0000-0002-1353-6002, S. Cholak^440000-0001-8091-4766, M. Chrzaszcz^350000-0001-7901-8710, A. Chubykin^380000-0003-1061-9643, V. Chulikov^380000-0002-7767-9117, P. Ciambrone^230000-0003-0253-9846, M.F. Cicala^510000-0003-0678-5809, X. Cid Vidal^410000-0002-0468-541X, G. Ciezarek^430000-0003-1002-8368, P. Cifra^430000-0003-3068-7029, G. Ciullo^i,210000-0001-8297-2206, P.E.L. Clarke^530000-0003-3746-0732, M. Clemencic^430000-0003-1710-6824, H.V. Cliff^500000-0003-0531-0916, J. Closier^430000-0002-0228-9130, J.L. Cobbledick^570000-0002-5146-9605, V. Coco^430000-0002-5310-6808, J. Cogan^100000-0001-7194-7566, E. Cogneras^90000-0002-8933-9427, L. Cojocariu^370000-0002-1281-5923, P. Collins^430000-0003-1437-4022, T. 
Colombo^430000-0002-9617-9687, L. Congedo^190000-0003-4536-4644, A. Contu^270000-0002-3545-2969, N. Cooke^480000-0002-4179-3700, I. Corredoira ^410000-0002-6089-0899, G. Corti^430000-0003-2857-4471, B. Couturier^430000-0001-6749-1033, D.C. Craik^450000-0002-3684-1560, M. Cruz Torres^1,e0000-0003-2607-131X, R. Currie^530000-0002-0166-9529, C.L. Da Silva^620000-0003-4106-8258, S. Dadabaev^380000-0002-0093-3244, L. Dai^660000-0002-4070-4729, X. Dai^50000-0003-3395-7151, E. Dall'Occo^150000-0001-9313-4021, J. Dalseno^410000-0003-3288-4683, C. D'Ambrosio^430000-0003-4344-9994, J. Daniel^90000-0002-9022-4264, A. Danilina^380000-0003-3121-2164, P. d'Argent^190000-0003-2380-8355, J.E. Davies^570000-0002-5382-8683, A. Davis^570000-0001-9458-5115, O. De Aguiar Francisco^570000-0003-2735-678X, J. de Boer^430000-0002-6084-4294, K. De Bruyn^740000-0002-0615-4399, S. De Capua^570000-0002-6285-9596, M. De Cian^440000-0002-1268-9621, U. De Freitas Carneiro Da Graca^10000-0003-0451-4028, E. De Lucia^230000-0003-0793-0844, J.M. De Miranda^10009-0003-2505-7337, L. De Paula^20000-0002-4984-7734, M. De Serio^19,f0000-0003-4915-7933, D. De Simone^450000-0001-8180-4366, P. De Simone^230000-0001-9392-2079, F. De Vellis^150000-0001-7596-5091, J.A. de Vries^750000-0003-4712-9816, C.T. Dean^620000-0002-6002-5870, F. Debernardis^19,f0009-0001-5383-4899, D. Decamp^80000-0001-9643-6762, V. Dedu^100000-0001-5672-8672, L. Del Buono^130000-0003-4774-2194, B. Delaney^590009-0007-6371-8035, H.-P. Dembinski^150000-0003-3337-3850, V. Denysenko^450000-0002-0455-5404, O. Deschamps^90000-0002-7047-6042, F. Dettori^27,h0000-0003-0256-8663, B. Dey^720000-0002-4563-5806, P. Di Nezza^230000-0003-4894-6762, I. Diachkov^380000-0001-5222-5293, S. Didenko^380000-0001-5671-5863, L. Dieste Maronas^41, S. Ding^630000-0002-5946-581X, V. Dobishuk^470000-0001-9004-3255, A. Dolmatov^38, C. Dong^30000-0003-3259-6323, A.M. Donohoe^180000-0002-4438-3950, F. Dordei^270000-0002-2571-5067, A.C. dos Reis^10000-0001-7517-8418, L. Douglas^54, A.G. Downes^80000-0003-0217-762X, P. Duda^760000-0003-4043-7963, M.W. Dudek^350000-0003-3939-3262, L. Dufour^430000-0002-3924-2774, V. Duk^730000-0001-6440-0087, P. Durante^430000-0002-1204-2270, M. M. Duras^760000-0002-4153-5293, J.M. Durham^620000-0002-5831-3398, D. Dutta^570000-0002-1191-3978, A. Dziurda^350000-0003-4338-7156, A. Dzyuba^380000-0003-3612-3195, S. Easo^520000-0002-4027-7333, U. Egede^640000-0001-5493-0762, A. Egorychev^380000-0001-5555-8982, V. Egorychev^380000-0002-2539-673X, C. Eirea Orro^41, S. Eisenhardt^530000-0002-4860-6779, E. Ejopu^570000-0003-3711-7547, S. Ek-In^440000-0002-2232-6760, L. Eklund^770000-0002-2014-3864, M.E Elashri^600000-0001-9398-953X, J. Ellbracht^150000-0003-1231-6347, S. Ely^560000-0003-1618-3617, A. Ene^370000-0001-5513-0927, E. Epple^600000-0002-6312-3740, S. Escher^140009-0007-2540-4203, J. Eschle^450000-0002-7312-3699, S. Esen^450000-0003-2437-8078, T. Evans^570000-0003-3016-1879, F. Fabiano^27,h0000-0001-6915-9923, L.N. Falcao^10000-0003-3441-583X, Y. Fan^60000-0002-3153-430X, B. Fang^11,690000-0003-0030-3813, L. Fantini^73,p0000-0002-2351-3998, M. Faria^440000-0002-4675-4209, S. Farry^550000-0001-5119-9740, D. Fazzini^26,m0000-0002-5938-4286, L.F Felkowski^760000-0002-0196-910X, M. Feo^430000-0001-5266-2442, M. Fernandez Gomez^410000-0003-1984-4759, A.D. Fernez^610000-0001-9900-6514, F. Ferrari^200000-0002-3721-4585, L. Ferreira Lopes^440009-0003-5290-823X, F. Ferreira Rodrigues^20000-0002-4274-5583, S. Ferreres Sole^320000-0003-3571-7741, M. 
Ferrillo^450000-0003-1052-2198, M. Ferro-Luzzi^430009-0008-1868-2165, S. Filippov^380000-0003-3900-3914, R.A. Fini^190000-0002-3821-3998, M. Fiorini^21,i0000-0001-6559-2084, M. Firlej^340000-0002-1084-0084, K.M. Fischer^580009-0000-8700-9910, D.S. Fitzgerald^780000-0001-6862-6876, C. Fitzpatrick^570000-0003-3674-0812, T. Fiutowski^340000-0003-2342-8854, F. Fleuret^120000-0002-2430-782X, M. Fontana^200000-0003-4727-831X, F. Fontanelli^24,k0000-0001-7029-7178, R. Forty^430000-0003-2103-7577, D. Foulds-Holt^500000-0001-9921-687X, V. Franco Lima^550000-0002-3761-209X, M. Franco Sevilla^610000-0002-5250-2948, M. Frank^430000-0002-4625-559X, E. Franzoso^21,i0000-0003-2130-1593, G. Frau^170000-0003-3160-482X, C. Frei^430000-0001-5501-5611, D.A. Friday^570000-0001-9400-3322, L.F Frontini^25,l0000-0002-1137-8629, J. Fu^60000-0003-3177-2700, Q. Fuehring^150000-0003-3179-2525, T. Fulghesu^130000-0001-9391-8619, E. Gabriel^320000-0001-8300-5939, G. Galati^19,f0000-0001-7348-3312, M.D. Galati^320000-0002-8716-4440, A. Gallas Torreira^410000-0002-2745-7954, D. Galli^20,g0000-0003-2375-6030, S. Gambetta^53,430000-0003-2420-0501, M. Gandelman^20000-0001-8192-8377, P. Gandini^250000-0001-7267-6008, H.G Gao^60000-0002-6025-6193, R. Gao^580009-0004-1782-7642, Y. Gao^70000-0002-6069-8995, Y. Gao^50000-0003-1484-0943, M. Garau^27,h0000-0002-0505-9584, L.M. Garcia Martin^510000-0003-0714-8991, P. Garcia Moreno^400000-0002-3612-1651, J. García Pardiñas^430000-0003-2316-8829, B. Garcia Plana^41, F.A. Garcia Rosales^120000-0003-4395-0244, L. Garrido^400000-0001-8883-6539, C. Gaspar^430000-0002-8009-1509, R.E. Geertsema^320000-0001-6829-7777, D. Gerick^17, L.L. Gerken^150000-0002-6769-3679, E. Gersabeck^570000-0002-2860-6528, M. Gersabeck^570000-0002-0075-8669, T. Gershon^510000-0002-3183-5065, L. Giambastiani^280000-0002-5170-0635, V. Gibson^500000-0002-6661-1192, H.K. Giemza^360000-0003-2597-8796, A.L. Gilman^580000-0001-5934-7541, M. Giovannetti^230000-0003-2135-9568, A. Gioventù^410000-0001-5399-326X, P. Gironella Gironell^400000-0001-5603-4750, C. Giugliano^21,i0000-0002-6159-4557, M.A. Giza^350000-0002-0805-1561, K. Gizdov^530000-0002-3543-7451, E.L. Gkougkousis^430000-0002-2132-2071, V.V. Gligorov^130000-0002-8189-8267, C. Göbel^650000-0003-0523-495X, E. Golobardes^390000-0001-8080-0769, D. Golubkov^380000-0001-6216-1596, A. Golutvin^56,380000-0003-2500-8247, A. Gomes^1,a0009-0005-2892-2968, S. Gomez Fernandez^400000-0002-3064-9834, F. Goncalves Abrantes^580000-0002-7318-482X, M. Goncerz^350000-0002-9224-914X, G. Gong^30000-0002-7822-3947, I.V. Gorelov^380000-0001-5570-0133, C. Gotti^260000-0003-2501-9608, J.P. Grabowski^710000-0001-8461-8382, T. Grammatico^130000-0002-2818-9744, L.A. Granado Cardoso^430000-0003-2868-2173, E. Graugés^400000-0001-6571-4096, E. Graverini^440000-0003-4647-6429, G. Graziani^0000-0001-8212-846X, A. T. Grecu^370000-0002-7770-1839, L.M. Greeven^320000-0001-5813-7972, N.A. Grieser^600000-0003-0386-4923, L. Grillo^540000-0001-5360-0091, S. Gromov^380000-0002-8967-3644, C.  Gu^30000-0001-5635-6063, M. Guarise^21,i0000-0001-8829-9681, M. Guittiere^110000-0002-2916-7184, V. Guliaeva^380000-0003-3676-5040, P. A. Günther^170000-0002-4057-4274, A.K. Guseinov^380000-0002-5115-0581, E. Gushchin^380000-0001-8857-1665, Y. Guz^5,38,430000-0001-7552-400X, T. Gys^430000-0002-6825-6497, T. Hadavizadeh^640000-0001-5730-8434, C. Hadjivasiliou^610000-0002-2234-0001, G. Haefeli^440000-0002-9257-839X, C. Haen^430000-0002-4947-2928, J. Haimberger^430000-0002-3363-7783, S.C. 
Haines^500000-0001-5906-391X, T. Halewood-leagas^550000-0001-9629-7029, M.M. Halvorsen^430000-0003-0959-3853, P.M. Hamilton^610000-0002-2231-1374, J. Hammerich^550000-0002-5556-1775, Q. Han^70000-0002-7958-2917, X. Han^170000-0001-7641-7505, S. Hansmann-Menzemer^170000-0002-3804-8734, L. Hao^60000-0001-8162-4277, N. Harnew^580000-0001-9616-6651, T. Harrison^550000-0002-1576-9205, C. Hasse^430000-0002-9658-8827, M. Hatch^430009-0004-4850-7465, J. He^6,c0000-0002-1465-0077, K. Heijhoff^320000-0001-5407-7466, F.H Hemmer^430000-0001-8177-0856, C. Henderson^600000-0002-6986-9404, R.D.L. Henderson^64,510000-0001-6445-4907, A.M. Hennequin^590009-0008-7974-3785, K. Hennessy^550000-0002-1529-8087, L. Henry^430000-0003-3605-832X, J. Herd^560000-0001-7828-3694, J. Heuel^140000-0001-9384-6926, A. Hicheur^20000-0002-3712-7318, D. Hill^440000-0003-2613-7315, M. Hilton^570000-0001-7703-7424, S.E. Hollitt^150000-0002-4962-3546, J. Horswill^570000-0002-9199-8616, R. Hou^70000-0002-3139-3332, Y. Hou^80000-0001-6454-278X, J. Hu^17, J. Hu^670000-0002-8227-4544, W. Hu^50000-0002-2855-0544, X. Hu^30000-0002-5924-2683, W. Huang^60000-0002-1407-1729, X. Huang^69, W. Hulsbergen^320000-0003-3018-5707, R.J. Hunter^510000-0001-7894-8799, M. Hushchyn^380000-0002-8894-6292, D. Hutchcroft^550000-0002-4174-6509, P. Ibis^150000-0002-2022-6862, M. Idzik^340000-0001-6349-0033, D. Ilin^380000-0001-8771-3115, P. Ilten^600000-0001-5534-1732, A. Inglessi^380000-0002-2522-6722, A. Iniukhin^380000-0002-1940-6276, A. Ishteev^380000-0003-1409-1428, K. Ivshin^380000-0001-8403-0706, R. Jacobsson^430000-0003-4971-7160, H. Jage^140000-0002-8096-3792, S.J. Jaimes Elles^420000-0003-0182-8638, S. Jakobsen^430000-0002-6564-040X, E. Jans^320000-0002-5438-9176, B.K. Jashal^420000-0002-0025-4663, A. Jawahery^610000-0003-3719-119X, V. Jevtic^150000-0001-6427-4746, E. Jiang^610000-0003-1728-8525, X. Jiang^4,60000-0001-8120-3296, Y. Jiang^60000-0002-8964-5109, M. John^580000-0002-8579-844X, D. Johnson^590000-0003-3272-6001, C.R. Jones^500000-0003-1699-8816, T.P. Jones^510000-0001-5706-7255, S.J Joshi^360000-0002-5821-1674, B. Jost^430009-0005-4053-1222, N. Jurik^430000-0002-6066-7232, I. Juszczak^350000-0002-1285-3911, S. Kandybei^460000-0003-3598-0427, Y. Kang^30000-0002-6528-8178, M. Karacson^430009-0006-1867-9674, D. Karpenkov^380000-0001-8686-2303, M. Karpov^380000-0003-4503-2682, J.W. Kautz^600000-0001-8482-5576, F. Keizer^430000-0002-1290-6737, D.M. Keller^630000-0002-2608-1270, M. Kenzie^510000-0001-7910-4109, T. Ketel^320000-0002-9652-1964, B. Khanji^630000-0003-3838-281X, A. Kharisova^380000-0002-5291-9583, S. Kholodenko^380000-0002-0260-6570, G. Khreich^110000-0002-6520-8203, T. Kirn^140000-0002-0253-8619, V.S. Kirsebom^440009-0005-4421-9025, O. Kitouni^590000-0001-9695-8165, S. Klaver^330000-0001-7909-1272, N. Kleijne^29,q0000-0003-0828-0943, K. Klimaszewski^360000-0003-0741-5922, M.R. Kmiec^360000-0002-1821-1848, S. Koliiev^470009-0002-3680-1224, L. Kolk^150000-0003-2589-5130, A. Kondybayeva^380000-0001-8727-6840, A. Konoplyannikov^380009-0005-2645-8364, P. Kopciewicz^340000-0001-9092-3527, R. Kopecna^17, P. Koppenburg^320000-0001-8614-7203, M. Korolev^380000-0002-7473-2031, I. Kostiuk^320000-0002-8767-7289, O. Kot^47, S. Kotriakhova^0000-0002-1495-0053, A. Kozachuk^380000-0001-6805-0395, P. Kravchenko^380000-0002-4036-2060, L. Kravchuk^380000-0001-8631-4200, M. Kreps^510000-0002-6133-486X, S. Kretzschmar^140009-0008-8631-9552, P. Krokovny^380000-0002-1236-4667, W. Krupa^340000-0002-7947-465X, W. Krzemien^360000-0002-9546-358X, J. 
Kubat^17, S. Kubis^760000-0001-8774-8270, W. Kucewicz^350000-0002-2073-711X, M. Kucharczyk^350000-0003-4688-0050, V. Kudryavtsev^380009-0000-2192-995X, E.K Kulikova^380009-0002-8059-5325, A. Kupsc^770000-0003-4937-2270, D. Lacarrere^430009-0005-6974-140X, G. Lafferty^570000-0003-0658-4919, A. Lai^270000-0003-1633-0496, A. Lampis^27,h0000-0002-5443-4870, D. Lancierini^450000-0003-1587-4555, C. Landesa Gomez^410000-0001-5241-8642, J.J. Lane^570000-0002-5816-9488, R. Lane^490000-0002-2360-2392, C. Langenbruch^140000-0002-3454-7261, J. Langer^150000-0002-0322-5550, O. Lantwin^380000-0003-2384-5973, T. Latham^510000-0002-7195-8537, F. Lazzari^29,r0000-0002-3151-3453, C. Lazzeroni^480000-0003-4074-4787, R. Le Gac^100000-0002-7551-6971, S.H. Lee^780000-0003-3523-9479, R. Lefèvre^90000-0002-6917-6210, A. Leflat^380000-0001-9619-6666, S. Legotin^380000-0003-3192-6175, P. Lenisa^i,210000-0003-3509-1240, O. Leroy^100000-0002-2589-240X, T. Lesiak^350000-0002-3966-2998, B. Leverington^170000-0001-6640-7274, A. Li^30000-0001-5012-6013, H. Li^670000-0002-2366-9554, K. Li^70000-0002-2243-8412, P. Li^430000-0003-2740-9765, P.-R. Li^680000-0002-1603-3646, S. Li^70000-0001-5455-3768, T. Li^40000-0002-5241-2555, T. Li^670000-0002-5723-0961, Y. Li^40000-0003-2043-4669, Z. Li^630000-0003-0755-8413, X. Liang^630000-0002-5277-9103, C. Lin^60000-0001-7587-3365, T. Lin^520000-0001-6052-8243, R. Lindner^430000-0002-5541-6500, V. Lisovskyi^150000-0003-4451-214X, R. Litvinov^27,h0000-0002-4234-435X, G. Liu^670000-0001-5961-6588, H. Liu^60000-0001-6658-1993, K. Liu^680000-0003-4529-3356, Q. Liu^60000-0003-4658-6361, S. Liu^4,60000-0002-6919-227X, A. Lobo Salvia^400000-0002-2375-9509, A. Loi^270000-0003-4176-1503, R. Lollini^730000-0003-3898-7464, J. Lomba Castro^410000-0003-1874-8407, I. Longstaff^54, J.H. Lopes^20000-0003-1168-9547, A. Lopez Huertas^400000-0002-6323-5582, S. López Soliño^410000-0001-9892-5113, G.H. Lovell^500000-0002-9433-054X, Y. Lu^4,b0000-0003-4416-6961, C. Lucarelli^22,j0000-0002-8196-1828, D. Lucchesi^28,o0000-0003-4937-7637, S. Luchuk^380000-0002-3697-8129, M. Lucio Martinez^750000-0001-6823-2607, V. Lukashenko^32,470000-0002-0630-5185, Y. Luo^30009-0001-8755-2937, A. Lupato^570000-0003-0312-3914, E. Luppi^21,i0000-0002-1072-5633, K. Lynch^180000-0002-7053-4951, X.-R. Lyu^60000-0001-5689-9578, R. Ma^60000-0002-0152-2412, S. Maccolini^150000-0002-9571-7535, F. Machefert^110000-0002-4644-5916, F. Maciuc^370000-0001-6651-9436, I. Mackay^580000-0003-0171-7890, V. Macko^440009-0003-8228-0404, L.R. Madhan Mohan^500000-0002-9390-8821, A. Maevskiy^380000-0003-1652-8005, D. Maisuzenko^380000-0001-5704-3499, M.W. Majewski^34, J.J. Malczewski^350000-0003-2744-3656, S. Malde^580000-0002-8179-0707, B. Malecki^35,430000-0003-0062-1985, A. Malinin^380000-0002-3731-9977, T. Maltsev^380000-0002-2120-5633, G. Manca^27,h0000-0003-1960-4413, G. Mancinelli^100000-0003-1144-3678, C. Mancuso^11,25,l0000-0002-2490-435X, R. Manera Escalero^40, D. Manuzzi^200000-0002-9915-6587, C.A. Manzari^450000-0001-8114-3078, D. Marangotto^25,l0000-0001-9099-4878, J.F. Marchand^80000-0002-4111-0797, U. Marconi^200000-0002-5055-7224, S. Mariani^430000-0002-7298-3101, C. Marin Benito^400000-0003-0529-6982, J. Marks^170000-0002-2867-722X, A.M. Marshall^490000-0002-9863-4954, P.J. Marshall^55, G. Martelli^73,p0000-0002-6150-3168, G. Martellotti^300000-0002-8663-9037, L. Martinazzoli^43,m0000-0002-8996-795X, M. Martinelli^26,m0000-0003-4792-9178, D. Martinez Santos^410000-0002-6438-4483, F. Martinez Vidal^420000-0001-6841-6035, A. 
Massafferri^10000-0002-3264-3401, M. Materok^140000-0002-7380-6190, R. Matev^430000-0001-8713-6119, A. Mathad^450000-0002-9428-4715, V. Matiunin^380000-0003-4665-5451, C. Matteuzzi^63,260000-0002-4047-4521, K.R. Mattioli^120000-0003-2222-7727, A. Mauri^560000-0003-1664-8963, E. Maurice^120000-0002-7366-4364, J. Mauricio^400000-0002-9331-1363, M. Mazurek^430000-0002-3687-9630, M. McCann^560000-0002-3038-7301, L. Mcconnell^180009-0004-7045-2181, T.H. McGrath^570000-0001-8993-3234, N.T. McHugh^540000-0002-5477-3995, A. McNab^570000-0001-5023-2086, R. McNulty^180000-0001-7144-0175, B. Meadows^600000-0002-1947-8034, G. Meier^150000-0002-4266-1726, D. Melnychuk^360000-0003-1667-7115, S. Meloni^26,m0000-0003-1836-0189, M. Merk^32,750000-0003-0818-4695, A. Merli^25,l0000-0002-0374-5310, L. Meyer Garcia^20000-0002-2622-8551, D. Miao^4,60000-0003-4232-5615, H. Miao^60000-0002-1936-5400, M. Mikhasenko^71,d0000-0002-6969-2063, D.A. Milanes^700000-0001-7450-1121, M. Milovanovic^430000-0003-1580-0898, M.-N. Minard^8,†, A. Minotti^26,m0000-0002-0091-5177, E. Minucci^630000-0002-3972-6824, T. Miralles^90000-0002-4018-1454, S.E. Mitchell^530000-0002-7956-054X, B. Mitreska^150000-0002-1697-4999, D.S. Mitzel^150000-0003-3650-2689, A. Modak^520000-0003-1198-1441, A. Mödden ^150009-0009-9185-4901, R.A. Mohammed^580000-0002-3718-4144, R.D. Moise^140000-0002-5662-8804, S. Mokhnenko^380000-0002-1849-1472, T. Mombächer^410000-0002-5612-979X, M. Monk^51,640000-0003-0484-0157, I.A. Monroy^700000-0001-8742-0531, S. Monteil^90000-0001-5015-3353, G. Morello^230000-0002-6180-3697, M.J. Morello^29,q0000-0003-4190-1078, M.P. Morgenthaler^170000-0002-7699-5724, J. Moron^340000-0002-1857-1675, A.B. Morris^430000-0002-0832-9199, A.G. Morris^100000-0001-6644-9888, R. Mountain^630000-0003-1908-4219, H. Mu^30000-0001-9720-7507, E. Muhammad^510000-0001-7413-5862, F. Muheim^530000-0002-1131-8909, M. Mulder^740000-0001-6867-8166, K. Müller^450000-0002-5105-1305, D. Murray^570000-0002-5729-8675, R. Murta^560000-0002-6915-8370, P. Muzzetto^27,h0000-0003-3109-3695, P. Naik^490000-0001-6977-2971, T. Nakada^440009-0000-6210-6861, R. Nandakumar^520000-0002-6813-6794, T. Nanut^430000-0002-5728-9867, I. Nasteva^20000-0001-7115-7214, M. Needham^530000-0002-8297-6714, N. Neri^25,l0000-0002-6106-3756, S. Neubert^710000-0002-0706-1944, N. Neufeld^430000-0003-2298-0102, P. Neustroev^38, R. Newcombe^56, J. Nicolini^15,110000-0001-9034-3637, D. Nicotra^750000-0001-7513-3033, E.M. Niel^440000-0002-6587-4695, S. Nieswand^14, N. Nikitin^380000-0003-0215-1091, N.S. Nolte^590000-0003-2536-4209, C. Normand^8,h,270000-0001-5055-7710, J. Novoa Fernandez^410000-0002-1819-1381, G.N Nowak^600000-0003-4864-7164, C. Nunez^780000-0002-2521-9346, A. Oblakowska-Mucha^340000-0003-1328-0534, V. Obraztsov^380000-0002-0994-3641, T. Oeser^140000-0001-7792-4082, S. Okamura^21,i0000-0003-1229-3093, R. Oldeman^27,h0000-0001-6902-0710, F. Oliva^530000-0001-7025-3407, C.J.G. Onderwater^740000-0002-2310-4166, R.H. O'Neil^530000-0002-9797-8464, J.M. Otalora Goicochea^20000-0002-9584-8500, T. Ovsiannikova^380000-0002-3890-9426, P. Owen^450000-0002-4161-9147, A. Oyanguren^420000-0002-8240-7300, O. Ozcelik^530000-0003-3227-9248, K.O. Padeken^710000-0001-7251-9125, B. Pagare^510000-0003-3184-1622, P.R. Pais^430009-0005-9758-742X, T. Pajero^580000-0001-9630-2000, A. Palano^190000-0002-6095-9593, M. Palutan^230000-0001-7052-1360, G. Panshin^380000-0001-9163-2051, L. Paolucci^510000-0003-0465-2893, A. Papanestis^520000-0002-5405-2901, M. Pappagallo^19,f0000-0001-7601-5602, L.L. 
Pappalardo^21,i0000-0002-0876-3163, C. Pappenheimer^600000-0003-0738-3668, W. Parker^610000-0001-9479-1285, C. Parkes^570000-0003-4174-1334, B. Passalacqua^210000-0003-3643-7469, G. Passaleva^220000-0002-8077-8378, A. Pastore^190000-0002-5024-3495, M. Patel^560000-0003-3871-5602, C. Patrignani^20,g0000-0002-5882-1747, C.J. Pawley^750000-0001-9112-3724, A. Pellegrino^320000-0002-7884-345X, M. Pepe Altarelli^430000-0002-1642-4030, S. Perazzini^200000-0002-1862-7122, D. Pereima^380000-0002-7008-8082, A. Pereiro Castro^410000-0001-9721-3325, P. Perret^90000-0002-5732-4343, K. Petridis^490000-0001-7871-5119, A. Petrolini^24,k0000-0003-0222-7594, S. Petrucci^530000-0001-8312-4268, M. Petruzzo^250000-0001-8377-149X, H. Pham^630000-0003-2995-1953, A. Philippov^380000-0002-5103-8880, R. Piandani^60000-0003-2226-8924, L. Pica^29,q0000-0001-9837-6556, M. Piccini^730000-0001-8659-4409, B. Pietrzyk^80000-0003-1836-7233, G. Pietrzyk^110000-0001-9622-820X, D. Pinci^300000-0002-7224-9708, F. Pisani^430000-0002-7763-252X, M. Pizzichemi^26,m,430000-0001-5189-230X, V. Placinta^370000-0003-4465-2441, J. Plews^480009-0009-8213-7265, M. Plo Casasus^410000-0002-2289-918X, F. Polci^13,430000-0001-8058-0436, M. Poli Lener^230000-0001-7867-1232, A. Poluektov^100000-0003-2222-9925, N. Polukhina^380000-0001-5942-1772, I. Polyakov^430000-0002-6855-7783, E. Polycarpo^20000-0002-4298-5309, S. Ponce^430000-0002-1476-7056, D. Popov^6,430000-0002-8293-2922, S. Poslavskii^380000-0003-3236-1452, K. Prasanth^350000-0001-9923-0938, L. Promberger^170000-0003-0127-6255, C. Prouve^410000-0003-2000-6306, V. Pugatch^470000-0002-5204-9821, V. Puill^110000-0003-0806-7149, G. Punzi^29,r0000-0002-8346-9052, H.R. Qi^30000-0002-9325-2308, W. Qian^60000-0003-3932-7556, N. Qin^30000-0001-8453-658X, S. Qu^30000-0002-7518-0961, R. Quagliani^440000-0002-3632-2453, N.V. Raab^180000-0002-3199-2968, B. Rachwal^340000-0002-0685-6497, J.H. Rademacker^490000-0003-2599-7209, R. Rajagopalan^63, M. Rama^290000-0003-3002-4719, M. Ramos Pernas^510000-0003-1600-9432, M.S. Rangel^20000-0002-8690-5198, F. Ratnikov^380000-0003-0762-5583, G. Raven^330000-0002-2897-5323, M. Rebollo De Miguel^420000-0002-4522-4863, F. Redi^430000-0001-9728-8984, J. Reich^490000-0002-2657-4040, F. Reiss^570000-0002-8395-7654, Z. Ren^30000-0001-9974-9350, P.K. Resmi^580000-0001-9025-2225, R. Ribatti^29,q0000-0003-1778-1213, A.M. Ricci^270000-0002-8816-3626, S. Ricciardi^520000-0002-4254-3658, K. Richardson^590000-0002-6847-2835, M. Richardson-Slipper^530000-0002-2752-001X, K. Rinnert^550000-0001-9802-1122, P. Robbe^110000-0002-0656-9033, G. Robertson^530000-0002-7026-1383, E. Rodrigues^55,430000-0003-2846-7625, E. Rodriguez Fernandez^410000-0002-3040-065X, J.A. Rodriguez Lopez^700000-0003-1895-9319, E. Rodriguez Rodriguez^410000-0002-7973-8061, D.L. Rolf^430000-0001-7908-7214, A. Rollings^580000-0002-5213-3783, P. Roloff^430000-0001-7378-4350, V. Romanovskiy^380000-0003-0939-4272, M. Romero Lamas^410000-0002-1217-8418, A. Romero Vidal^410000-0002-8830-1486, M. Rotondo^230000-0001-5704-6163, M.S. Rudolph^630000-0002-0050-575X, T. Ruf^430000-0002-8657-3576, R.A. Ruiz Fernandez^410000-0002-5727-4454, J. Ruiz Vidal^42, A. Ryzhikov^380000-0002-3543-0313, J. Ryzka^340000-0003-4235-2445, J.J. Saborido Silva^410000-0002-6270-130X, N. Sagidova^380000-0002-2640-3794, N. Sahoo^480000-0001-9539-8370, B. Saitta^27,h0000-0003-3491-0232, M. Salomoni^430009-0007-9229-653X, C. Sanchez Gras^320000-0002-7082-887X, I. Sanderswood^420000-0001-7731-6757, R. Santacesaria^300000-0003-3826-0329, C. 
Santamarina Rios^410000-0002-9810-1816, M. Santimaria^230000-0002-8776-6759, L. Santoro ^10000-0002-2146-2648, E. Santovetti^310000-0002-5605-1662, D. Saranin^380000-0002-9617-9986, G. Sarpis^530000-0003-1711-2044, M. Sarpis^710000-0002-6402-1674, A. Sarti^300000-0001-5419-7951, C. Satriano^30,s0000-0002-4976-0460, A. Satta^310000-0003-2462-913X, M. Saur^50000-0001-8752-4293, D. Savrina^380000-0001-8372-6031, H. Sazak^90000-0003-2689-1123, L.G. Scantlebury Smead^580000-0001-8702-7991, A. Scarabotto^130000-0003-2290-9672, S. Schael^140000-0003-4013-3468, S. Scherl^550000-0003-0528-2724, A. M.  Schertz^720000-0002-6805-4721, M. Schiller^540000-0001-8750-863X, H. Schindler^430000-0002-1468-0479, M. Schmelling^160000-0003-3305-0576, B. Schmidt^430000-0002-8400-1566, S. Schmitt^140000-0002-6394-1081, O. Schneider^440000-0002-6014-7552, A. Schopper^430000-0002-8581-3312, M. Schubiger^320000-0001-9330-1440, N. Schulte^150000-0003-0166-2105, S. Schulte^440009-0001-8533-0783, M.H. Schune^110000-0002-3648-0830, R. Schwemmer^430009-0005-5265-9792, G. Schwering^140000-0003-1731-7939, B. Sciascia^230000-0003-0670-006X, A. Sciuccati^430000-0002-8568-1487, S. Sellam^410000-0003-0383-1451, A. Semennikov^380000-0003-1130-2197, M. Senghi Soares^330000-0001-9676-6059, A. Sergi^24,k0000-0001-9495-6115, N. Serra^450000-0002-5033-0580, L. Sestini^280000-0002-1127-5144, A. Seuthe^150000-0002-0736-3061, Y. Shang^50000-0001-7987-7558, D.M. Shangase^780000-0002-0287-6124, M. Shapkin^380000-0002-4098-9592, I. Shchemerov^380000-0001-9193-8106, L. Shchutska^440000-0003-0700-5448, T. Shears^550000-0002-2653-1366, L. Shekhtman^380000-0003-1512-9715, Z. Shen^50000-0003-1391-5384, S. Sheng^4,60000-0002-1050-5649, V. Shevchenko^380000-0003-3171-9125, B. Shi^60000-0002-5781-8933, E.B. Shields^26,m0000-0001-5836-5211, Y. Shimizu^110000-0002-4936-1152, E. Shmanin^380000-0002-8868-1730, R. Shorkin^380000-0001-8881-3943, J.D. Shupperd^630009-0006-8218-2566, B.G. Siddi^21,i0000-0002-3004-187X, R. Silva Coutinho^630000-0002-1545-959X, G. Simi^280000-0001-6741-6199, S. Simone^19,f0000-0003-3631-8398, M. Singla^640000-0003-3204-5847, N. Skidmore^570000-0003-3410-0731, R. Skuza^170000-0001-6057-6018, T. Skwarnicki^630000-0002-9897-9506, M.W. Slater^480000-0002-2687-1950, J.C. Smallwood^580000-0003-2460-3327, J.G. Smeaton^500000-0002-8694-2853, E. Smith^590000-0002-9740-0574, K. Smith^620000-0002-1305-3377, M. Smith^560000-0002-3872-1917, A. Snoch^320000-0001-6431-6360, L. Soares Lavra^90000-0002-2652-123X, M.D. Sokoloff^600000-0001-6181-4583, F.J.P. Soler^540000-0002-4893-3729, A. Solomin^38,490000-0003-0644-3227, A. Solovev^380000-0003-4254-6012, I. Solovyev^380000-0003-4254-6012, R. Song^640000-0002-8854-8905, F.L. Souza De Almeida^20000-0001-7181-6785, B. Souza De Paula^20009-0003-3794-3408, E. Spadaro Norella^25,l0000-0002-1111-5597, E. Spedicato^200000-0002-4950-6665, J.G. Speer^150000-0002-6117-7307, E. Spiridenkov^38, P. Spradlin^540000-0002-5280-9464, V. Sriskaran^430000-0002-9867-0453, F. Stagni^430000-0002-7576-4019, M. Stahl^430000-0001-8476-8188, S. Stahl^430000-0002-8243-400X, S. Stanislaus^580000-0003-1776-0498, E.N. Stein^430000-0001-5214-8865, O. Steinkamp^450000-0001-7055-6467, O. Stenyakin^38, H. Stevens^150000-0002-9474-9332, D. Strekalina^380000-0003-3830-4889, Y.S Su^60000-0002-2739-7453, F. Suljik^580000-0001-6767-7698, J. Sun^270000-0002-6020-2304, L. Sun^690000-0002-0034-2567, Y. Sun^610000-0003-4933-5058, P.N. Swallow^480000-0003-2751-8515, K. Swientek^340000-0001-6086-4116, A. 
Szabelski^360000-0002-6604-2938, T. Szumlak^340000-0002-2562-7163, M. Szymanski^430000-0002-9121-6629, Y. Tan^30000-0003-3860-6545, S. Taneja^570000-0001-8856-2777, M.D. Tat^580000-0002-6866-7085, A. Terentev^450000-0003-2574-8560, F. Teubert^430000-0003-3277-5268, E. Thomas^430000-0003-0984-7593, D.J.D. Thompson^480000-0003-1196-5943, H. Tilquin^560000-0003-4735-2014, V. Tisserand^90000-0003-4916-0446, S. T'Jampens^80000-0003-4249-6641, M. Tobin^40000-0002-2047-7020, L. Tomassetti^21,i0000-0003-4184-1335, G. Tonani^25,l0000-0001-7477-1148, X. Tong^50000-0002-5278-1203, D. Torres Machado^10000-0001-7030-6468, L. Toscano^150009-0007-5613-6520, D.Y. Tou^30000-0002-4732-2408, C. Trippl^440000-0003-3664-1240, G. Tuci^170000-0002-0364-5758, N. Tuning^320000-0003-2611-7840, A. Ukleja^360000-0003-0480-4850, D.J. Unverzagt^170000-0002-1484-2546, A. Usachov^330000-0002-5829-6284, A. Ustyuzhanin^380000-0001-7865-2357, U. Uwer^170000-0002-8514-3777, V. Vagnoni^200000-0003-2206-311X, A. Valassi^430000-0001-9322-9565, G. Valenti^200000-0002-6119-7535, N. Valls Canudas^390000-0001-8748-8448, M. Van Dijk^440000-0003-2538-5798, H. Van Hecke^620000-0001-7961-7190, E. van Herwijnen^560000-0001-8807-8811, C.B. Van Hulse^41,v0000-0002-5397-6782, M. van Veghel^320000-0001-6178-6623, R. Vazquez Gomez^400000-0001-5319-1128, P. Vazquez Regueiro^410000-0002-0767-9736, C. Vázquez Sierra^410000-0002-5865-0677, S. Vecchi^210000-0002-4311-3166, J.J. Velthuis^490000-0002-4649-3221, M. Veltri^22,u0000-0001-7917-9661, A. Venkateswaran^440000-0001-6950-1477, M. Vesterinen^510000-0001-7717-2765, D.  Vieira^600000-0001-9511-2846, M. Vieites Diaz^440000-0002-0944-4340, X. Vilasis-Cardona^390000-0002-1915-9543, E. Vilella Figueras^550000-0002-7865-2856, A. Villa^200000-0002-9392-6157, P. Vincent^130000-0002-9283-4541, F.C. Volle^110000-0003-1828-3881, D. vom Bruch^100000-0001-9905-8031, V. Vorobyev^38, N. Voropaev^380000-0002-2100-0726, K. Vos^750000-0002-4258-4062, C. Vrahas^530000-0001-6104-1496, J. Walsh^290000-0002-7235-6976, E.J. Walton^640000-0001-6759-2504, G. Wan^50000-0003-0133-1664, C. Wang^170000-0002-5909-1379, G. Wang^70000-0001-6041-115X, J. Wang^50000-0001-7542-3073, J. Wang^40000-0002-6391-2205, J. Wang^30000-0002-3281-8136, J. Wang^690000-0001-6711-4465, M. Wang^250000-0003-4062-710X, R. Wang^490000-0002-2629-4735, X. Wang^670000-0002-2399-7646, Y. Wang^70000-0003-3979-4330, Z. Wang^450000-0002-5041-7651, Z. Wang^30000-0003-0597-4878, Z. Wang^60000-0003-4410-6889, J.A. Ward^51,640000-0003-4160-9333, N.K. Watson^480000-0002-8142-4678, D. Websdale^560000-0002-4113-1539, Y. Wei^50000-0001-6116-3944, B.D.C. Westhenry^490000-0002-4589-2626, D.J. White^570000-0002-5121-6923, M. Whitehead^540000-0002-2142-3673, A.R. Wiederhold^510000-0002-1023-1086, D. Wiedner^150000-0002-4149-4137, G. Wilkinson^580000-0001-5255-0619, M.K. Wilkinson^600000-0001-6561-2145, I. Williams^50, M. Williams^590000-0001-8285-3346, M.R.J. Williams^530000-0001-5448-4213, R. Williams^500000-0002-2675-3567, F.F. Wilson^520000-0002-5552-0842, W. Wislicki^360000-0001-5765-6308, M. Witek^350000-0002-8317-385X, L. Witola^170000-0001-9178-9921, C.P. Wong^620000-0002-9839-4065, G. Wormser^110000-0003-4077-6295, S.A. Wotton^500000-0003-4543-8121, H. Wu^630000-0002-9337-3476, J. Wu^70000-0002-4282-0977, Y. Wu^50000-0003-3192-0486, K. Wyllie^430000-0002-2699-2189, Z. Xiang^60000-0002-9700-3448, Y. Xie^70000-0001-5012-4069, A. Xu^50000-0002-8521-1688, J. Xu^60000-0001-6950-5865, L. Xu^30000-0003-2800-1438, L. Xu^30000-0002-0241-5184, M. 
Xu^510000-0001-8885-565X, Q. Xu^6, Z. Xu^90000-0002-7531-6873, Z. Xu^60000-0001-9558-1079, Z. Xu^40000-0001-9602-4901, D. Yang^30009-0002-2675-4022, S. Yang^60000-0003-2505-0365, X. Yang^50000-0002-7481-3149, Y. Yang^60000-0002-8917-2620, Z. Yang^50000-0003-2937-9782, Z. Yang^610000-0003-0572-2021, V. Yeroshenko^110000-0002-8771-0579, H. Yeung^570000-0001-9869-5290, H. Yin^70000-0001-6977-8257, J. Yu^660000-0003-1230-3300, X. Yuan^630000-0003-0468-3083, E. Zaffaroni^440000-0003-1714-9218, M. Zavertyaev^160000-0002-4655-715X, M. Zdybal^350000-0002-1701-9619, M. Zeng^30000-0001-9717-1751, C. Zhang^50000-0002-9865-8964, D. Zhang^70000-0002-8826-9113, J. Zhang^60000-0001-6010-8556, L. Zhang^30000-0003-2279-8837, S. Zhang^660000-0002-9794-4088, S. Zhang^50000-0002-2385-0767, Y. Zhang^50000-0002-0157-188X, Y. Zhang^58, Y. Zhao^170000-0002-8185-3771, A. Zharkova^380000-0003-1237-4491, A. Zhelezov^170000-0002-2344-9412, Y. Zheng^60000-0003-0322-9858, T. Zhou^50000-0002-3804-9948, X. Zhou^70009-0005-9485-9477, Y. Zhou^60000-0003-2035-3391, V. Zhovkovska^110000-0002-9812-4508, X. Zhu^30000-0002-9573-4570, X. Zhu^70000-0002-4485-1478, Z. Zhu^60000-0002-9211-3867, V. Zhukov^14,380000-0003-0159-291X, J. Zhuo^420000-0002-6227-3368, Q. Zou^4,60000-0003-0038-5038, S. Zucchelli^20,g0000-0002-2411-1085, D. Zuliani^280000-0002-1478-4593, G. Zunica^570000-0002-5972-6290. ^1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil ^2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil ^3Center for High Energy Physics, Tsinghua University, Beijing, China ^4Institute Of High Energy Physics (IHEP), Beijing, China ^5School of Physics State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China ^6University of Chinese Academy of Sciences, Beijing, China ^7Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China ^8Université Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France ^9Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France ^10Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France ^11Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France ^12Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris, Palaiseau, France ^13LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3, Paris, France ^14I. 
Physikalisches Institut, RWTH Aachen University, Aachen, Germany ^15Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany ^16Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany ^17Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany ^18School of Physics, University College Dublin, Dublin, Ireland ^19INFN Sezione di Bari, Bari, Italy ^20INFN Sezione di Bologna, Bologna, Italy ^21INFN Sezione di Ferrara, Ferrara, Italy ^22INFN Sezione di Firenze, Firenze, Italy ^23INFN Laboratori Nazionali di Frascati, Frascati, Italy ^24INFN Sezione di Genova, Genova, Italy ^25INFN Sezione di Milano, Milano, Italy ^26INFN Sezione di Milano-Bicocca, Milano, Italy ^27INFN Sezione di Cagliari, Monserrato, Italy ^28Università degli Studi di Padova, Università e INFN, Padova, Padova, Italy ^29INFN Sezione di Pisa, Pisa, Italy ^30INFN Sezione di Roma La Sapienza, Roma, Italy ^31INFN Sezione di Roma Tor Vergata, Roma, Italy ^32Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands ^33Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands ^34AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland ^35Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland ^36National Center for Nuclear Research (NCBJ), Warsaw, Poland ^37Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania ^38Affiliated with an institute covered by a cooperation agreement with CERN ^39DS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain ^40ICCUB, Universitat de Barcelona, Barcelona, Spain ^41Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain ^42Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain ^43European Organization for Nuclear Research (CERN), Geneva, Switzerland ^44Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland ^45Physik-Institut, Universität Zürich, Zürich, Switzerland ^46NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine ^47Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine ^48University of Birmingham, Birmingham, United Kingdom ^49H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom ^50Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom ^51Department of Physics, University of Warwick, Coventry, United Kingdom ^52STFC Rutherford Appleton Laboratory, Didcot, United Kingdom ^53School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom ^54School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom ^55Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom ^56Imperial College London, London, United Kingdom ^57Department of Physics and Astronomy, University of Manchester, Manchester, United Kingdom ^58Department of Physics, University of Oxford, Oxford, United Kingdom ^59Massachusetts Institute of Technology, Cambridge, MA, United States ^60University of Cincinnati, Cincinnati, OH, United States ^61University of Maryland, College Park, MD, United States ^62Los Alamos National Laboratory (LANL), Los Alamos, NM, United States ^63Syracuse University, Syracuse, NY, United States ^64School of Physics and Astronomy, Monash University, Melbourne, Australia, associated to ^51 ^65Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^2 ^66Physics and Micro Electronic College, Hunan University, Changsha City, China, associated to ^7 ^67Guangdong Provincial Key Laboratory of Nuclear Science, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to ^3 ^68Lanzhou University, Lanzhou, China, associated to ^4 ^69School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^3 ^70Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^13 ^71Universität Bonn - Helmholtz-Institut für Strahlen und Kernphysik, Bonn, Germany, associated to ^17 ^72Eotvos Lorand University, Budapest, Hungary, associated to ^43 ^73INFN Sezione di Perugia, Perugia, Italy, associated to ^21 ^74Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to ^32 ^75Universiteit Maastricht, Maastricht, Netherlands, associated to ^32 ^76Faculty of Material Engineering and Physics, Cracow, Poland, associated to ^35 ^77Department of Physics and Astronomy, Uppsala University, Uppsala, Sweden, associated to ^54 ^78University of Michigan, Ann Arbor, MI, United States, associated to ^63 ^aUniversidade de Brasília, Brasília, Brazil ^bCentral South U., Changsha, China ^cHangzhou Institute for Advanced Study, UCAS, Hangzhou, China ^dExcellence Cluster ORIGINS, Munich, Germany ^eUniversidad Nacional Autónoma de Honduras, Tegucigalpa, Honduras ^fUniversità di Bari, Bari, Italy ^gUniversità di Bologna, Bologna, Italy ^hUniversità di Cagliari, Cagliari, Italy ^iUniversità di Ferrara, Ferrara, Italy ^jUniversità di Firenze, Firenze, Italy ^kUniversità di Genova, Genova, Italy ^lUniversità degli Studi di Milano, Milano, Italy ^mUniversità di Milano Bicocca, Milano, Italy ^nUniversità di Modena e Reggio Emilia, Modena, Italy ^oUniversità di Padova, Padova, Italy ^pUniversità di Perugia, Perugia, Italy ^qScuola Normale Superiore, Pisa, Italy ^rUniversità di Pisa, Pisa, Italy ^sUniversità della Basilicata, Potenza, Italy ^tUniversità di Roma Tor Vergata, Roma, Italy ^uUniversità di Urbino, Urbino, Italy ^vUniversidad de Alcalá, Alcalá de Henares , Spain ^wUniversidade da Coruña, Coruña, Spain ^†Deceased
http://arxiv.org/abs/2306.06576v2
20230611034449
Resonant dynamics of extreme mass-ratio inspirals in a perturbed Kerr spacetime
[ "Zhen Pan", "Huan Yang", "Laura Bernard", "Béatrice Bonga" ]
gr-qc
[ "gr-qc" ]
[email protected] Lee Institute, Shanghai Jiao-Tong University, Shanghai, 520 Shengrong Road, 201210, People’s Republic of ChinaSchool of Physics & Astronomy, Shanghai Jiao-Tong University, Shanghai, 800 Dongchuan Road, 200240, People’s Republic of ChinaPerimeter Institute for Theoretical Physics, Ontario, N2L 2Y5, [email protected] Institute for Theoretical Physics, Ontario, N2L 2Y5, CanadaUniversity of Guelph, Guelph, Ontario N1G 2W1, CanadaLaboratoire Univers et Théories, Observatoire de Paris, Université PSL, Université Paris Cité, CNRS, F-92190 Meudon, FranceInstitute for Mathematics, Astrophysics and Particle Physics, Radboud University, 6525 AJ Nijmegen, The Netherlands Extreme mass-ratio inspirals (EMRI) are one of the most sensitive probes of black hole spacetimes with gravitational wave measurements. In this work, we systematically analyze the dynamics of an EMRI system near orbital resonances, assuming the background spacetime is weakly perturbed from Kerr. Using the action-angle formalism, we have derived an effective resonant Hamiltonian that describes the dynamics of the resonant degree of freedom, for the case that the EMRI motion across the resonance regime. This effective resonant Hamiltonian can also be used to derive the condition that the trajectory enters/exits a resonant island and the permanent change of action variables across the resonance with the gravitational wave radiation turned on. The orbital chaos, on the other hand, generally leads to transitions between different branches of rotational orbits with finite changes of the action variables. These findings are demonstrated with numerical orbital evolutions that are mapped into representations using action-angle variables. This study is one part of the program of understanding EMRI dynamics in a generic perturbed Kerr spacetime, which paves the way of using EMRIs to precisely measure the black hole spacetime. Resonant dynamics of extreme mass-ratio inspirals in a perturbed Kerr spacetime Béatrice Bonga July 31, 2023 =============================================================================== § INTRODUCTION An extreme mass-ratio inspiral (EMRI) system comprises a supermassive black hole and a stellar-mass compact object, i.e., a black hole or a neutron star <cit.>. Together with massive black hole binaries, EMRIs are commonly believed to be the main extragalactic transient gravitational-wave sources for spaceborne gravitational wave detectors, such as LISA, Taiji and Tianqin <cit.>. Their formation can be classified into two distinct channels: one associated with multi-body scattering in nuclear star clusters (“dry channel") <cit.> and the other associated with accretion-assisted migration (“wet channel") <cit.>. It has been shown that accretion disks dramatically boots the EMRI formation rate, such that the wet EMRIs maybe more common for space-borne gravitational-wave detection <cit.>. There are other formation mechanisms proposed, such as the destruction of a stellar-mass black hole binary in the vicinity of a supermassive black hole <cit.>, but the rate is rather uncertain. Recently, there is a proposal suggesting enhanced EMRI formation rates in near supermassive black hole binaries <cit.>, although concerns have been raised regarding the supply of stellar-mass black holes <cit.>. EMRIs have a wide range of astrophysical applications. 
First, as the mass and spin of the host massive black hole can be measured accurately (down to percent-level or better), a catalog of EMRI events may be used to infer the distribution of massive black holes within 10^5-10^7 M_⊙, which helps to understand the growth mechanisms of massive black holes. Secondly, as wet EMRIs are generally accompanied by active galactic nuclei (AGN), they are ideal candidates for multi-messenger observations, which are particularly useful for studying the accretion physics. Thirdly, the less-massive object in an EMRI system may be a mass-gap object <cit.> (similar to the one detected in GW190814) or a primordial black hole <cit.>, so that the EMRI observations may be used to probe the existence of these objects, determining their abundance and diagnosing their formation mechanisms. EMRIs also have important applications in fundamental physics, including testing strong-field predictions of General Relativity. They generally have a superior power in detecting weak environmental forces because of the large number (10^4-10^5) of orbital cycles in-band, so that weak effects may be amplified to achieve detectable gravitational wave phase shifts. These environmental forces may come from the tidal gravitational field of a third stellar-mass object <cit.>, the migration force from an accretion disk and/or the interaction between the stellar-mass black hole and a possible dark-matter cloud <cit.>. From the perspective of testing General Relativity, it is interesting to test the Kerr metric as a key prediction of General Relativity for rotating black holes, as modified gravity theories may predict different black hole spacetimes (e.g. Einstein-dilaton-Gauss-Bonnet <cit.>, Dynamical Chern-Simons Gravity <cit.> and effective field theory extensions of General Relativity <cit.>). If the central body is a black hole mimicker such as a boson star, a gravastar and/or a wormhole <cit.> (despite possible issues with stability <cit.>), the external metric may also be different from Kerr. A related, important question is: If the background spacetime is g=g_ Kerr+h (assuming |h|≪ 1), how do we use EMRIs to probe/constrain h? Notice that the presence of h not only modifies the EMRI dynamics as an additional force, but also the radiated flux with respect to the same trajectory. To understand the EMRI evolution within the background spacetime described by the metric g=g_ Kerr+h (assuming h is stationary), we can separate the EMRI orbit into non-resonant and resonant regimes. In the non-resonant regime, the metric perturbation h introduces an extra conservative force, and the EMRI orbit oscillates around the Kerr geodesics according to the Kolmogorov–Arnold–Moser (KAM) theorem, i.e., there is no orbital chaos. With the gravitational wave radiation included, we can intuitively argue that the conservative quantities are relatively shifted by 𝒪(h) and the radiated flux is relatively modified by 𝒪(h), so that the resulting overall gravitational wave phase shift is δΨ_ non-res∼1/q×𝒪(h) where q (∼ 10^-4 -10^-6) is the EMRI mass ratio and 1/q represents the number of cycles in-band. Notice that although both q,|h|≪ 1, |h| can be larger than q so that δΨ≥ 1, which will be observable by space-borne detectors. Here, we will be particularly interested in the scenario with |h|>q. 
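The |h| > q regime can be made concrete with a back-of-the-envelope evaluation of the dephasing estimate above. The short Python sketch below simply evaluates δΨ_non-res ∼ 𝒪(h)/q for a few representative mass ratios and perturbation amplitudes; the (q, |h|) values are illustrative assumptions and the 𝒪(1) prefactor is set to unity, so the numbers are order-of-magnitude only.

```python
# Back-of-the-envelope evaluation of delta_Psi_non-res ~ O(h)/q.
# The (q, h) values below are illustrative assumptions, not results from the paper,
# and the O(1) prefactor is simply set to one.
def dephasing_non_resonant(h, q):
    """Order-of-magnitude non-resonant GW phase shift, in radians."""
    return h / q

for q in (1e-4, 1e-5, 1e-6):
    for h in (1e-3, 1e-5, 1e-7):
        dpsi = dephasing_non_resonant(h, q)
        flag = "observable" if dpsi >= 1.0 else "below ~1 rad"
        print(f"q = {q:.0e}, |h| = {h:.0e}:  delta_Psi ~ {dpsi:.1e} rad ({flag})")
```

As expected from the scaling, any |h| exceeding q yields a dephasing above a radian.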
In order to describe the long-term secular evolution in the non-resonant regime, one needs to work out the modified radiation flux with the modified Teukolsky equation, and determine how to relate the radiated flux to orbital quantities in the modified spacetime. We shall present this part of the analysis in a separate work. In the resonant regime, i.e., k Ω^r+m Ω^ϕ+n Ω^θ≈ 0 for k,m,n ∈ℤ where Ω^r,θ,ϕ are geodesic orbital frequencies in r,θ,ϕ directions, the KAM theorem no longer applies and orbital chaos can occur. In fact, signatures of chaotic orbits near resonances have been observed in studies of various modified Kerr spacetimes <cit.>, e.g. in the vertical jumps of rotation numbers and volume-filling features in the phase space of the trajectory. A plateau in rotation number is sometimes observed, which should be associated with the resonant islands in the phase space. Despite this progress in understanding the phenomenology of EMRI resonant behavior in various specific spacetimes, a mathematical, universal framework for generic perturbed Kerr spacetimes is still lacking. More importantly, in order to allow gravitational wave measurements to probe the spacetime perturbation h, one needs to assess the impact of h in the resonant regime for the long-term EMRI evolution, which is an important goal of this work. Since we are working in the regime that the perturbative force (due to h) is greater than the gravitational radiation reaction, we first solve the conservative dynamics of the associated spacetime g_ Kerr+h, and view the radiation reaction as a mapping between different geodesics associated with g_ Kerr+h. Using a method similar to the treatment of relativistic mean motion resonance in <cit.>, which in turn traces back to the analysis of sustained resonance for EMRIs with self-force considered (especially the procedure of applying Near Identity Transformations) <cit.>, we derive a general effective resonant Hamiltonian of the form ℋ_ eff = α_0 Θ +β_0 Θ^2 + ϵ∑_k H_k e^ik Q with Θ, Q being the canonical variables for the resonant degree of freedom (DOF), and relevant definition of other variables explicitly given in Sec. <ref>. This Hamiltonian governs the essential dynamics of the EMRI system within the resonant islands (commonly referred to as the “libration" regime for planetary systems), and it applies for generic metric perturbations h, including all the specific examples mentioned in previous studies <cit.>. The physical essence of this effective Hamiltonian — similar to the cases of mean motion resonances widely studied in planetary systems — is that there is a single resonant DOF that is slowly varying compared to other DOFs, so that in a “slow timescale" where other DOFs are averaged out, the system is described by such Hamiltonian. When the gravitational radiation reaction is turned on, the equations of motion for the resonant DOF should be correspondingly modified. Note that away from the chaotic regime, the discussion presented in <cit.> becomes particularly useful for analyzing the system's dynamics. [The original analysis and the scaling laws presented in <cit.> are developed for describing the self-force effect, but the method can be easily adapted to this work.] There are in general two outcomes as a system passes through the resonance regime under the influence of radiation reaction: transient passing and resonant trapping (dubbed as “sustained resonance" in <cit.>). 
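Before examining these two outcomes, it is worth writing out the equations of motion generated by the effective resonant Hamiltonian quoted above; they are simply Hamilton's equations in the (Q, Θ) pair. The only assumption in the worked form below is that the Θ-dependence of the harmonics H_k is retained explicitly:

```latex
\frac{\mathrm{d}Q}{\mathrm{d}\tau}
  = \frac{\partial \mathcal{H}_{\rm eff}}{\partial \Theta}
  = \alpha_0 + 2\beta_0\,\Theta
    + \epsilon \sum_k \frac{\partial H_k}{\partial \Theta}\, e^{\mathrm{i} k Q} ,
\qquad
\frac{\mathrm{d}\Theta}{\mathrm{d}\tau}
  = -\frac{\partial \mathcal{H}_{\rm eff}}{\partial Q}
  = -\mathrm{i}\,\epsilon \sum_k k\, H_k\, e^{\mathrm{i} k Q} .
```

When a single harmonic dominates, this is a pendulum-like system, which is the structure exploited repeatedly in the following sections.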
The transient passing produces a long-term phase shift δΨ_ transient∼𝒪(h)/q^3/2 , which is larger than the non-resonant effects in the formal expansions. However, this does not necessarily mean that δΨ_ transient is numerically greater than δΨ_ non-res, because the values of the numerical coefficients play an important role in the overall amplitude, and these coefficients correspond to different terms in the harmonics expansion which could differ by orders of magnitude (see Eq. (<ref>)). In orbits near the resonant islands, the resonant angle Q is no longer bounded (the “rotation" regime), which couples with other non-resonant DOFs and gives rise to chaotic zones in the phase space. In Chapter 9.5 of <cit.>, a simple pendulum problem was discussed in the Hamiltonian language to illustrate the properties of chaos using “the Standard Map" method. It is plausible that chaos arises due to similar reasons here in the EMRI system, where the non-resonant terms effectively introduce time-dependent harmonics in the Hamiltonian. The resulting chaos may be analyzed using the (standard) algebraic map method. The width of the chaotic zones is expected to be proportional to |h|^1/2, within which the orbital motion is chaotic and volume-filling in the phase space. In traditional plots of the rotation number ν (which is defined as the ratio of average frequencies in two different directions, e.g., ⟨Ω^r⟩/⟨Ω^θ⟩), the chaotic zones correspond to a vertical discontinuity in the rotation number, as discussed in, e.g., <cit.>. The chaotic orbits in general contribute 𝒪(h^1/2) changes in the action variables J_α during transitions between different branches of the orbit (Figs. <ref> and <ref>), so that the long-term impact on the gravitational wave phase is δΨ_ chaos∝𝒪(h^1/2)/q . In principle, the chaotic orbit may also introduce transitions into the resonant islands, after which a sustained resonance is achieved. One criterion for generic resonance capture studied in planetary systems is that the capture only happens in the case of “converging" evolution, i.e., the ratio between the magnitudes of the frequencies is converging (towards one) in time <cit.>. However, the EMRI evolution is “diverging". Therefore, we believe that resonance capture into resonant islands is unlikely. This point of course requires further numerical confirmation in the future. To test the effective Hamiltonian description, we have developed numerical algorithms to perform the full evolution across resonances. In particular, these algorithms map the EMRI evolution from the physical variables (r, θ, ϕ, p_r, p_θ, p_ϕ) to action-angle variables (q_r, q_θ, q_ϕ, J_r, J_θ, J_ϕ). A direct evolution using action-angle variables is thus possible, but in realistic implementations it is much more susceptible to computational errors because of the numerical transformations. As a result, we choose to numerically compute the long-term evolution of EMRIs near resonance using the physical variables, and map them to action-angle variables from time to time in order to compare to the analysis using the effective resonant Hamiltonian. The details of the transformation between the physical and action-angle variables are explained in Appendices <ref> and <ref>. The numerical algorithms and the effective Hamiltonian description should apply to general perturbed Kerr metrics.
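Complementing the non-resonant estimate given in the Introduction, the relative sizes of the transient-resonant and chaotic contributions quoted above can be compared numerically. The sketch below evaluates the formal scalings 𝒪(h)/q, 𝒪(h)/q^{3/2} and 𝒪(h^{1/2})/q with all prefactors set to one; as emphasized above, the true prefactors come from specific harmonics of h and can differ by orders of magnitude, so this only illustrates how the hierarchy shifts with |h| (the q and |h| values are assumptions).

```python
import numpy as np

q = 3e-5                                  # example mass ratio (assumption)
h_values = np.array([1e-6, 1e-4, 1e-2])   # example perturbation amplitudes (assumptions)

for h in h_values:
    non_res   = h / q                     # delta_Psi_non-res   ~ O(h)/q
    transient = h / q**1.5                # delta_Psi_transient ~ O(h)/q^(3/2)
    chaotic   = np.sqrt(h) / q            # delta_Psi_chaos     ~ O(h^(1/2))/q
    print(f"|h| = {h:.0e}:  non-res ~ {non_res:.1e},  "
          f"transient ~ {transient:.1e},  chaotic ~ {chaotic:.1e} rad")
```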
In this work, we use the spinning black hole solution in quadratic gravity <cit.> as an example, where the perturbed Kerr spacetime is still stationary and axisymmetric, and we indeed find decent agreement between the effective Hamiltonian description and the fully numerical evolution. The structure of this article is organized as follows. In Section <ref>, we first introduce action-angle variables and their usage in Hamiltonian systems, then explain why resonances reduce the number of DOFs and why near-resonance orbits are governed by an effective Hamiltonian. In Section <ref>, we numerically evolve a number of near-resonance (2/3 resonance with Ω^r/Ω^θ≈ 2/3) orbits in physical coordinates (x, p) and illustrate various features of these orbits in terms of rotation curves and Poincaré maps. We then map (x, p) to action-angle variables (q, J), and to resonant variables (Q, Θ) via a Near Identity Transformation (NIT), and explain the features found using the effective Hamiltonian description. In Section <ref>, we calculate the impact of crossing resonances on the EMRI waveform using the effective Hamiltonian description. We conclude this paper in Section <ref>. In Appendix <ref>, we analyze the 1/2 resonance, compare it with the 2/3 resonance and illustrate the general features of crossing resonances. In Appendix <ref>, we summarize the necessary steps for mapping the physical coordinates to the action-angle variables. In Appendix <ref>, we show the details of performing NITs. In this work, we use the geometric units G=c=1, and set the supermassive black hole mass to M=1 if not specified otherwise. § EFFECTIVE RESONANT HAMILTONIAN Let us consider a point particle's motion in the spacetime with metric g=g_ Kerr+h. The total Hamiltonian is given by ℋ = 1/2 g^αβ p_α p_β , and the magnitude of the total Hamiltonian is ℋ= -μ^2/2, where μ is the particle's rest mass and the physical coordinates { x^α, p_α} are canonical variables of the system. In order to separate out the dynamical effect due to h, we shall write the Hamiltonian as ℋ = 1/2 g_ Kerr^αβ p_α p_β +ϵ/2 h^αβ p_α p_β := ℋ_ Kerr +ϵ ℋ_ int , where h^αβ = -g^αμ_ Kerr g^βν_ Kerr h_μν and ϵ is a bookkeeping index. Considering the case with h=0, i.e., the Kerr spacetime, one can generally find a canonical transformation to map the physical coordinates to the action-angle variables: 𝒥_α = 𝒥_α({ x^β, p_β}), q^α = q^α({ x^β, p_β}) . Even though this canonical transformation is obtained assuming the spacetime is Kerr, in the general case that h is nonzero it still gives rise to a set of canonical variables {𝒥_α, q^α} for the total Hamiltonian ℋ. [However, in general, {𝒥_α, q^α} are not action-angle variables for the full Hamiltonian. ] In other words, {𝒥_α, q^α} satisfy d q^α/d τ = ∂ℋ/∂𝒥_α = ∂ℋ_ Kerr/∂𝒥_α +ϵ∂ℋ_ int/∂𝒥_α , d 𝒥_α/d τ = -∂ℋ/∂ q^α =-∂ℋ_ Kerr/∂ q^α-ϵ∂ℋ_ int/∂ q^α , where τ is the proper time of the particle. In addition, with the inverse transformation of Eq. (<ref>), ℋ_ Kerr({ x^α, p_α}) may be re-written as ℋ_ Kerr({𝒥_α, q^α}). Based on the definition of action-angle variables, the above equations can then be simplified as d q^α/d τ = Ω^α +ϵ∂ℋ_ int/∂𝒥_α , d 𝒥_α/d τ =-ϵ∂ℋ_ int/∂ q^α , where Ω^α ({𝒥_β}) :=∂ℋ_ Kerr/∂𝒥_α are the “angular frequencies" for each angle variable in the Kerr spacetime. §.§ Effective Resonant Hamiltonian It is straightforward to see that h drives the evolution of 𝒥_α, according to Eq. (<ref>). As ℋ_ int is a function of {𝒥_α, q^α}, we Fourier decompose it as ℋ_ int = ∑_k,m,n H_k,m,n({𝒥_α}) e^i(k q^r+m q^ϕ+n q^θ) , where k,m,n ∈ℤ and H_k,m,n is a function of {𝒥_α}.
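In practice, the Fourier coefficients H_{k,m,n} are what one needs from a given metric perturbation. A minimal way to obtain them, assuming ℋ_int can be evaluated on a regular grid of the Kerr angle variables at fixed actions, is a three-dimensional FFT; the sketch below does this for a toy perturbation (the functional form and amplitudes are placeholders, not the quadratic-gravity metric used later in the paper).

```python
import numpy as np

def toy_H_int(qr, qth, qph):
    # Placeholder perturbing Hamiltonian with a few low-order harmonics (illustrative only).
    return 1e-3 * (np.cos(2*qth) + 0.3*np.cos(3*qr - 2*qth) + 0.1*np.cos(qr + qph))

N = 64
q = 2*np.pi*np.arange(N)/N
QR, QTH, QPH = np.meshgrid(q, q, q, indexing="ij")   # axes ordered (q_r, q_theta, q_phi)
H = toy_H_int(QR, QTH, QPH)

# H_{k,n,m} = (2*pi)^-3 * integral of H_int * exp(-i(k q_r + n q_theta + m q_phi)) d^3q,
# i.e. the normalized discrete Fourier transform of the gridded samples.
Hkmn = np.fft.fftn(H) / N**3

# The harmonic with (k_r, n_theta, m_phi) = (3, -2, 0), which becomes resonant when
# 3*Omega^r - 2*Omega^theta ~ 0, sits at array indices (3, N-2, 0):
print("H_(3,-2,0) =", Hkmn[3 % N, (-2) % N, 0 % N])  # expect ~ 0.5 * 0.3e-3 for the toy model
```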
As dq^α/dτ∼Ω^α +𝒪(ϵ), the variation of 𝒥_α is driven by oscillatory forcing terms with different frequencies. In the long term, these oscillatory forcing terms introduce no secular effects, so that the trajectory varies around Kerr geodesics and there is no chaos in the non-resonant regime (as is to be expected from the KAM theorem). This observation also justifies using 𝒥_α for the long-term evolution of EMRIs with gravitational radiation reaction turned on: in the non-resonant regime h only generates oscillations of 𝒥_α around its mean value ⟨𝒥_α⟩, whereas the radiation reaction produces a secular change of 𝒥_α. In the resonant regime, where the orbital frequencies are commensurate, N_αΩ^α≈ 0 for a certain N=(N_r, N_θ, N_ϕ), we rewrite the equations of motion for {𝒥_α, q^α} as d q^α/d τ = Ω^α +ϵ∑_k ∈ℤ ∂ H_k N_j/∂𝒥_α e^i k N_j q^j +ϵ∑_n_j ∈ R ∂ H_n_j/∂𝒥_α e^i n_j q^j , d 𝒥_α/d τ =-iϵ∑_k ∈ℤ k N_α H_k N_j e^i k N_j q^j -iϵ∑_n_j ∈ R n_α H_n_j e^i n_j q^j , where j∈{r, θ, ϕ} and R is defined as the set of all non-resonant 3-tuples. In order to single out the effect of resonance in the presence of other fast-oscillatory non-resonant terms, we apply the technique of Near Identity Transformations as discussed in <cit.>, which is a transformation of the form q̃^α = q^α +ϵ L^α ({q^β, 𝒥_β}) +𝒪(ϵ^2) , 𝒥̃_α = 𝒥_α +ϵ T_α({q^β, 𝒥_β}) +𝒪(ϵ^2) , with L^α, T_α defined as L^α = ∑_n_j ∈ R i/n_j Ω_j ∂ H_n_j/∂𝒥_α e^i n_j q^j , T_α = ∑_n_j ∈ R n_α/n_j Ω_j H_n_j e^i n_j q^j . The dynamical variables q̃^α, 𝒥̃_α follow a set of equations of motion that is free of the “contamination" of non-resonant terms: d q̃^α/d τ = Ω_α +ϵ∑_k ∈ℤ ∂ H_k N_j/∂𝒥_α e^i k N_j q^j +𝒪(ϵ^2) , d 𝒥̃_α/d τ =-iϵ N_α∑_k ∈ℤ k H_k N_j e^i k N_j q^j +𝒪(ϵ^2) . In particular, the driving terms on the right-hand side of the equations always depend on a certain combination of angles, N_j q^j, which we shall define as the resonant angle Q:= N_j q^j. The rate of change of Q is slow compared to the frequencies of the non-resonant terms: d Q/d τ = N_αΩ^α + ϵ∑_k ∈ℤ N_α ∂ H_k N_j/∂𝒥_α e^i k N_j q^j , because ΔΩ := N_αΩ^α +ϵ N_α ∂ H_(0,0,0)/∂𝒥_α≈ 0 based on the resonance assumption. On the other hand, according to the second line of Eq. (<ref>), the rates of change of the 𝒥̃_α are proportional to each other: d𝒥̃_r/dτ : d𝒥̃_θ/dτ : d𝒥̃_ϕ/dτ = N_r : N_θ : N_ϕ. This means that we can write 𝒥̃_α as 𝒥̃_α = N_αΘ +C_α , where Θ corresponds to the single dynamical “action" and C_α are constants related to the choice of initial conditions. The equation of motion for Θ is just d Θ/d τ = -iϵ∑_k ∈ℤ k H_k N_j e^i k N_j q^j . In fact, combining Eq. (<ref>) and Eq. (<ref>), we can view {Q, Θ} as a set of canonical variables for the effective Hamiltonian ℋ_ eff = ∫ΔΩ d Θ + 2 ϵ∑_k ≥ 1 Re(H_k N) cos k Q - 2 ϵ∑_k ≥ 1 Im(H_k N) sin k Q . Because Θ is the only dynamical action after removing all non-resonant DOFs, we can expand ΔΩ in powers of Θ as ΔΩ = α_0 +2β_0Θ +𝒪(Θ^2) and drop the higher-order terms. Substituting this into Eq. (<ref>), we obtain the desired form of Eq. (<ref>).
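The averaging step behind the Near Identity Transformation can be illustrated with a few lines of code: given a table of Fourier coefficients H_n and the Kerr frequencies, one accumulates the oscillatory action shift T_α while skipping the (near-)resonant tuples, which are precisely the terms kept in the effective Hamiltonian. The frequencies and coefficient values below are placeholders, chosen so that (3, -2, 0) is exactly resonant (Ω^r/Ω^θ = 2/3).

```python
import numpy as np

Omega = np.array([0.02, 0.03, 0.031])     # (Omega^r, Omega^theta, Omega^phi), assumed values
H_coeffs = {                              # {(n_r, n_theta, n_phi): H_n}, assumed values
    (2, 0, 0): 1e-4, (0, 2, 0): 5e-5, (3, -2, 0): 2e-5, (1, 0, 1): 1e-5,
}

def T_alpha(q_angles, tol=1e-4):
    """Oscillatory action shift T_alpha(q); (near-)resonant tuples are skipped."""
    T = np.zeros(3)
    for n, Hn in H_coeffs.items():
        n = np.asarray(n, dtype=float)
        n_dot_Omega = float(n @ Omega)
        if abs(n_dot_Omega) < tol:        # small divisor: resonant term, stays in H_eff instead
            continue
        phase = float(n @ q_angles)
        # add the term for n together with its conjugate partner (-n, H_n^*) so that T is real
        T += 2.0 * n / n_dot_Omega * np.real(Hn * np.exp(1j * phase))
    return T

q_now = np.array([0.3, 1.1, 2.0])         # example angle variables
print("T_alpha (to be multiplied by eps):", T_alpha(q_now))
```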
In reality, for smooth h we usually find that one of the harmonics dominates over the rest. Such a harmonic H_ res is likely much smaller than the amplitude of h itself, as h is generally dominated by the zeroth harmonic for many models of modified black hole spacetimes. For the example perturbed Kerr spacetime considered in this work, the perturbed Hamiltonian is dominated by the ℋ_n_r, n_θ=0 and ℋ_n_r, n_θ=± 2 components, where the former ones are non-resonant components. As a result, for the 2/3 resonance [with 𝐍=(-3,2)× k] studied in the main text the dominant harmonic is k_* = ± 1, and for the 1/2 resonance [with 𝐍=(-2,1)× k] discussed in Appendix <ref> the dominant harmonic is k_* = ± 2. In addition, we can always remove the sin Q term by adding/subtracting a constant to the definition of Q. In the end, the effective Hamiltonian simplifies to (H_ res = ± 2 ϵ |H_ N| depending on the sign of Re(H_k N)): ℋ_ eff = α_0 Θ +β_0 Θ^2 +H_ res cos k_* Q , which governs the dynamics of the resonant DOF. Notice that there is an additional “gauge" freedom of α as one modifies the definition of Θ by a constant: Θ→Θ + c, α_0 →α_0 +2 β_0 c . As a result, in practical implementations, we have chosen the minimal Θ of a trajectory to be zero to fix this gauge freedom. In this case, we have ℋ_ eff = H_ res(Θ=0) if the minimum is located at Q=0 (ℋ_ eff = -H_ res(Θ=0) if the minimum is located at Q=±π), and we empirically find that the dependence of H_ res on Θ is weak (see Sec. <ref>). Notice that in general we find the scaling of H_ res to be ϵ^1, the scaling of Θ, α_0 to be √(ϵ), and the scaling of β_0 to be one. If we rescale the variables using their ϵ scaling and adopt a “slow time" τ̃ =√(ϵ)τ, the resulting equations for Q, Θ will be ϵ-free. In particular, the dynamical timescale for completing a “cycle" in the Q-Θ plane is 𝒪(|ϵ|^-1/2) times the EMRI orbital timescale: t_ res∼𝒪 (1/ω√(H_ res) ) , where ω is the orbital angular frequency.
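The libration/rotation dichotomy implied by this simplified Hamiltonian is easy to reproduce numerically. The sketch below integrates Hamilton's equations for (Q, Θ) with arbitrary illustrative coefficients (α_0, β_0, H_res and k are assumptions, not the quadratic-gravity values used later) and classifies a trajectory as librating or rotating according to whether Q stays bounded; the printed island half-width follows from the separatrix of the pendulum-like Hamiltonian.

```python
import numpy as np
from scipy.integrate import solve_ivp

# H_eff = alpha0*Theta + beta0*Theta**2 + H_res*cos(k*Q); coefficient values are illustrative.
alpha0, beta0, H_res, k = 0.0, -1.0, 1e-4, 1

def rhs(tau, y):
    Q, Theta = y
    dQ = alpha0 + 2.0*beta0*Theta          # dQ/dtau     = +dH_eff/dTheta
    dTheta = k*H_res*np.sin(k*Q)           # dTheta/dtau = -dH_eff/dQ
    return [dQ, dTheta]

island_halfwidth = np.sqrt(2.0*abs(H_res/beta0))   # max |Theta| reached on the separatrix
print("resonant island half-width in Theta:", island_halfwidth)

for Theta0 in (0.5*island_halfwidth, 2.0*island_halfwidth):
    sol = solve_ivp(rhs, (0.0, 2e4), [np.pi/2, Theta0],
                    rtol=1e-9, atol=1e-12, max_step=50.0)
    spread = sol.y[0].max() - sol.y[0].min()
    label = "libration" if spread < 2*np.pi else "rotation"
    print(f"Theta0 = {Theta0:.3e}: Q spread = {spread:.2f} rad -> {label}")
```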
As a result, the equations of motion may be simplified to (hereafter we focus on thek_*=1case, where the generalization for otherk_*is straightforward) d Q/d τ =∑_α a^α𝒥̃_α +b , d 𝒥̃_α/d τ = - N_α H_ ressin Q + q G_α , where∑_α a^α N_α =β_0andbis a constant. The parametric change of the effective Hamiltonian comes from the time-dependent shift of𝒥_βwithN_β =0. For ar-θresonance (as discussed in Sec. <ref>),N_ϕ, N_tare zero so the parametric shift is induced by𝒥_t, 𝒥_ϕ(the energy and angular momentum). SinceG_t, G_ϕinduce the time-dependent modulation of the conserved quantity𝒥_t, 𝒥_ϕ, the value ofα_0, β_0, H_ reswill in turn shift in time as they are all functions of𝒥_t, 𝒥_ϕ. Such shifts can be viewed as a parametric change of the effective Hamiltonian. If the parametric shift timescale through the resonance regime (𝒪(1)change of resonant angleQdue to radiation reaction)t_ rr∝ 1/√(ΔΩ̇)∝ω^-1/√(q)is longer thant_ res, the change is adiabatic such that the action of the effective Hamiltonian is invariant, i.e., ℐ =∫Θ d Q is constant. According to the phase-space analysis in <cit.>, the adiabatic capture into a resonance is only possible for “converging" evolution, i.e., the ratio of two frequencies evolves towards one or equivalentlyα(assumingβis negative) evolves from positive values to negative values. In ther-θresonance considered in Sec. <ref>,αgenerally evolves from negative values to positive values with the corresponding frequency ratio differing more in time. Using the notation of the rotation numberν, under the influence of radiation reaction, we find thatνin general decreases across resonances, e.g., evolving from>2/3to<2/3across the2/3resonance. As a result, adiabatic parametric resonance capture is unlikely. This argument should apply for generic choice ofh. The second transition mechanism comes from the evolution of𝒥_r, 𝒥_θin Eq. (<ref>). The equation of motion is equivalent to one-dimensional motion with total effective energy 1/2 ( d Q/d τ )^2+β_0 H_ rescos Q +q ∑_α a_α G_α Q =K . As shown in <cit.>, this type of evolution will not lead to the capture into a resonant island unless higher order radiation reaction is considered, in which case the capture condition is still extremely fine-tuned. In summary, the dissipation in𝒥_r, 𝒥_θalso does not lead to resonance trapping in the adiabatic regime. Once the adiabatic approximation breaks down, it is possible to have trajectories across the resonant islands. The island crossing time can be estimated as t_ cross∼α_c/α̇_̇0̇∝√(H_ res)/q ω whereωis the orbital frequency,α̇_̇0̇∝ qis a parametric shift ofα_0driven by the radiation reaction. Therefore, the adiabatic approximation breaks down ift_ cross < t_ res, orH_ res < q(apart from numerical coefficients). Notice that we writeH_ resinstead ofhhere because the harmonics amplitudeH_ rescan be much smaller than the perturbation amplitudeh. This point is also seen in the numerical example studied in Sec. <ref>. To summarize: in the adiabatic limit, radiation dissipation does not drive a near-resonance orbit onto resonance due to the existence of the adiabatic invariantℐ. It seems no obvious pathway for a near-resonance orbit to cross the resonance in this limit. But as we will see later, the effective (1-DOF) Hamiltonian description we have been using breaks down for chaotic transitional orbits, which are possible to jump from one branch to the other branch with opposite sign ofdQ/dτeven without radiation dissipation. 
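As an aside, the breakdown of adiabaticity during a crossing can be illustrated numerically. The sketch below (parameter values and the effective drift g, which stands in for the projection of q G_α onto Θ, are assumed for illustration) drives the simplified resonant equations through the resonance and compares the resulting kick in Θ with the stationary-phase estimate quoted above, and t_cross with t_res.

```python
# Sketch of a non-adiabatic resonance crossing (all numbers assumed for
# illustration): a slow drift g pushes the orbit through the resonance and the
# net kick in Theta is compared with the stationary-phase envelope
# ~ sqrt(2)*Hres*sqrt(pi/|Qddot|), up to the phase factor (cos Q0 + sin Q0).
import numpy as np
from scipy.integrate import solve_ivp

b0, Hres, g = -0.04, 3.0e-12, 1.0e-9       # assumed illustrative values

def rhs(tau, y):
    Q, Theta = y
    return [2.0 * b0 * Theta,               # dQ/dtau (resonance sits at Theta = 0)
            -Hres * np.sin(Q) + g]          # dTheta/dtau with slow drift g

Theta0, T = -1.0e-3, 2.0e6                  # drift carries Theta from -1e-3 to +1e-3
sol = solve_ivp(rhs, (0.0, T), [0.3, Theta0], method="DOP853",
                rtol=1e-11, atol=1e-14, max_step=2.0e3)

jump = sol.y[1, -1] - (Theta0 + g * T)                   # deviation from pure drift
envelope = np.sqrt(2.0) * Hres * np.sqrt(np.pi / (2.0 * abs(b0) * g))
t_res = 1.0 / np.sqrt(2.0 * abs(b0) * Hres)              # resonant (libration) timescale
t_cross = np.sqrt(2.0 * np.pi / (2.0 * abs(b0) * g))     # stationary-phase crossing width
print(f"kick in Theta = {jump:.2e}, analytic envelope = {envelope:.2e}")
print(f"t_cross/t_res = {t_cross / t_res:.2f}  (< 1: non-adiabatic crossing)")
```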
Therefore, in the adiabatic limit, chaotic transitional orbits are the pathway for crossing resonances without crossing the resonant islands, while crossing the resonant islands is the more general pathway when the adiabatic approximation breaks down. §.§ The Emergence of Chaos It is interesting to discuss the emergence of chaos from the Hamiltonian point of view, especially outside the resonant islands. According to the derivation in Sec. <ref>, a Near Identity Transformation can be applied to remove all the fast-varying degrees of freedom, so that the resonant effective Hamiltonian has only one degree of freedom left. If this description were complete, no chaos should appear for the resonant degree of freedom, which clearly contradicts the numerical observations. One likely explanation is that when we perform the Near Identity Transformation in Eqs. <ref>, <ref> and <ref>, we have neglected terms of higher order than 𝒪(ϵ). These terms do not necessarily come from nonlinear h. For example, consider a non-resonant angle Q+q_θ that may appear in some of the terms in Eq. <ref>. In the Near Identity Transformation we have included the time dependence of q_θ as Ω_θ τ, but have neglected the time dependence of Q because it evolves on the slow time ∝ √(ϵ). This assumption is valid for trajectories within the resonant islands, where Q is bounded within a range smaller than 2π. However, outside the resonant islands, when Q starts to rotate, this part of the phase produces secular phase errors for evolutions longer than 1/√(ϵ) times the orbital timescale. In Chapter 9.5 of <cit.>, it is shown that if additional time-dependent terms are added to the Hamiltonian of a simple pendulum (similar to the effective Hamiltonian here): ℋ_eff → ℋ_eff + H_non-res ∑_k cos(Q + k ω τ), where H_non-res and ω are constants, chaos appears. The emergence of chaos is studied with the "Standard Map" method in <cit.>. Notice that these additional terms resemble the non-resonant terms we remove with the Near Identity Transformation. It is therefore plausible that the interplay between the non-resonant and resonant terms in Eq. <ref> gives rise to the chaotic region outside the resonant islands. In other words, it is beneficial to keep both resonant and non-resonant terms when modelling trajectories in the chaotic regime, i.e., to work with q^α, 𝒥^α. On the other hand, although the single-DOF effective Hamiltonian cannot fully describe the transitional chaotic orbits, it is still useful in this regime, because the non-chaotic trajectories it predicts can still be viewed as temporary orbits in the chaotic zone, especially in regions away from the bifurcation points. The transitional chaotic orbits may then be viewed as collections of non-chaotic orbits in the Q-Θ plane with occasional transitions from one to the other. § NUMERICAL EVOLUTION In this Section, we use the spinning BH solution in quadratic gravity <cit.> as an example of a perturbed Kerr spacetime to clarify the resonance crossing process. This spacetime has been studied numerically before <cit.>, which makes it convenient to validate and compare part of our results in physical coordinates. While obtained from a specific quadratic gravity theory, the results presented in this Section should apply to perturbed Kerr metrics in general. §.§ Quadratic gravity For the purpose of numerical evolution, we focus on a particular quadratic gravity theory, namely Einstein-scalar-Gauss-Bonnet (EsGB) gravity.
Quadratic gravity usually arises as an effective field theory for the low energy limit of some quantum gravity theories <cit.>. In addition, EsGB is the only theory quadratic in the curvature that leads to second-order field equations for any coupling, ensuring that the theory is ghost-free <cit.>. It introduces an additional scalar fieldϑcoupled to the metricg_μνthrough the action S = ∫d^4x √(-g)[κ R + α_GB ϑ 𝒢 - 1/2(∇_μϑ ∇^μϑ + 2V(ϑ)) + ℒ_ mat], whereκ=(16π G)^-1,gis the determinant of the metric,Ris the Ricci scalar,𝒢≡ R^2 - 4 R_μνR^μν + R_μνρσR^μνρσis the Gauss-Bonnet invariant,V(ϑ)is a potential that will be set to zero in the following andℒ_ matis the Lagrangian density describing the matter fields. The coupling constantα_GBhas the dimension of[length]^2, so we also define the following dimensionless constant ζ≡α_GB^2/κ M^4, whereMis the mass of the black hole. The set of field equations for the theory are G_μν + α_GB/κ𝒟_μν^(ϑ) = 1/2κ(T_μν^(mat) + T_μν^(ϑ)) , □ϑ = α_GB 𝒢 , whereT_μν^(mat)andT_μν^(ϑ)are respectively the matter and scalar field stress-energy tensors with T_μν^(ϑ) = 1/2g_μν(∇_ρϑ ∇^ρϑ - 2V(ϑ)) - ∇_μϑ ∇_νϑ , and 𝒟_μν^(ϑ) = -2 R∇_μ∇_νϑ + 2(g_μνR-2R_μν)∇_ρϑ ∇^ρϑ +8R_ρ(μ∇_ν)∇^ρϑ - 4g_μνR^ρσ∇_ρ∇_σϑ +4 R_μνρσ∇^ρ∇^σϑ . This theory has been extensively studied in the past and solutions have been obtained both numerically <cit.> and analytically <cit.>. Spherically symmetric solutions are known to lead to integrable orbits, so here we will only focus on stationary and axisymmetric solutions. As no exact closed-form solutions are known for rotating BHs, analytical solutions are obtained as an expansion in the small coupling and for small rotation. This is the path we will follow in the following. These metrics were obtained following two independent approximation schemes, one for small couplingζ≪ 1and the other one for small spinχ≡a/M≪ 1, withathe dimensional spin parameter. Then, the solution was resummed in order to interpret the solution as a perturbation of the Kerr metric <cit.> g_μν = g_μν^(Kerr) + ∑_l,mζ'^l α'^m δ g_μν^(l,m) , whereα'andζ'are bookkeeping parameters andg_μν^(Kerr)is the usual Kerr metric in Boyer-Lindquist coordinates(t, r, θ, ϕ)g_tt^(Kerr) = -(1-2Mr/Σ), g_tϕ^(Kerr) = -2Marsin^2θ/Σ, g_rr^(Kerr) = Σ/Δ, g_θθ^(Kerr) = Σ, g_ϕϕ^(Kerr) = (r^2+a^2+2Ma^2rsin^2θ/Σ)sin^2θ , whereΣ≡ r^2+a^2cos^2θandΔ≡ r^2-2Mr+a^2. As we treat quadratic gravity as an effective field theory, we will only need the linear order in the couplingl=1and consider the set of perturbationsδ g_μν^(1,m). As an example, we give the explicit expressions of the perturbation up to linear order in spin: δ g^(1,0)_tt = -ζ M^3/3r^2[1+26M/r+66M^2/5r^2+96M^3/5r^3+80M^4/r^4], δ g^(1,0)_rr = -ζ M^2/f^2r^2[ 1+m/r+52m^2/3r^2+2M^3/r^3+16m^4/5r^4-368m^5/3r^5], δ g^(1,1)_tϕ = 3/5ζ Mχ M^3sin^2θ/r^3[ 1+140M/9r+10M^2/r^2. . +16M^3/r^3-400M^4/9r^4] , withf≡ 1-2M/rand all other components vanish. In the following, we will use the solution up to quintic order in spin but as the expressions are quite lengthy, we refer to Ref. <cit.> for the full expressions. §.§ Evolution in physical coordinates (x^μ, p_μ) We consider the spinning BH solution of the quadratic gravity as an example of perturbed Kerr spacetime, where the perturbed spacetime is still stationary and axisymmetric. For a test particle moving in the perturbed Kerr spacetime along a geodesic, the total HamiltonianH, the energyEand the angular momentumLare still conserved, but the Carter constantCis not. 
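For reference, the Kerr part of the resummed metric quoted above can be coded directly, with the corrections δg^(l,m) supplied as an optional user-defined term; the sketch below is illustrative (the function names, defaults, and dictionary layout are our own choices, not taken from any existing package).

```python
# Sketch: Boyer-Lindquist Kerr components as quoted above, plus a hook where the
# quadratic-gravity corrections delta_g^(l,m) can be added (only Kerr filled in).
import numpy as np

def kerr_metric(r, theta, M=1.0, a=0.2):
    """Nonzero covariant Kerr components g_tt, g_tphi, g_rr, g_thth, g_phiphi."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    Delta = r**2 - 2.0 * M * r + a**2
    s2 = np.sin(theta)**2
    return {
        "tt":     -(1.0 - 2.0 * M * r / Sigma),
        "tphi":   -2.0 * M * a * r * s2 / Sigma,
        "rr":     Sigma / Delta,
        "thth":   Sigma,
        "phiphi": (r**2 + a**2 + 2.0 * M * a**2 * r * s2 / Sigma) * s2,
    }

def perturbed_metric(r, theta, M=1.0, a=0.2, zeta=0.002, dg=None):
    """Kerr plus a user-supplied correction dg(r, theta, M, a, zeta) -> dict,
    e.g. the delta_g^(1,m) terms of the resummed EsGB solution quoted above."""
    g = kerr_metric(r, theta, M, a)
    if dg is not None:
        for key, val in dg(r, theta, M, a, zeta).items():
            g[key] = g.get(key, 0.0) + val
    return g
```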
In this work, we choose a perturbed Kerr spacetime with BH spina=0.2and the quadratic gravity coefficiencyζ=0.002as a fiducial example. To get an intuition for the chaotic behavior of near-resonance orbits in a perturbed Kerr spacetime, we consider a series of geodesics with total HamiltonianH=-1/2 M^2(or equivalently the particle rest massμ=1), energyE=0.96 M, angular momentumL=3.5 M^2, and initial condition(r, θ, p_r)_ ini=(r_ ini, π/2, 0)—(p_θ)_ iniis then automatically determined and solved for numerically. Given the initial conditions, we evolve the system according to the Hamiltonian equations ẋ^A = ∂ H(x^μ, p_μ)/∂ p_A , ṗ_A = -∂ H(x^μ, p_μ)/∂ x^A fromτ_ ini=0toτ_ end=6×10^7 M, with the indexArunning over(r,θ). For each orbit, we calculate the rotation number ν : = ⟨Ω^r|⟩/⟨Ω^θ|⟩ , which is the ratio of average frequency in therdirection and that in theθdirection. Numerically it can be computed asν≈ N_ṙ=0/N_θ=π/2, whereN_ṙ=0is the number of times the orbit passes the pericenter and apocenter, andN_θ=π/2is the number of times the orbit passes the equator in the same time interval. In the top left panel of Fig. <ref>, we show the rotation numbers of the orbits (labelled byr_ ini) that are close to the2/3resonance. The near-resonance orbits can be classified into three categories: orbits in the rotation regime (blue horizontal line), chaotic transitional orbits (orange line) and orbits in the libration regime (green line). Similar features in the rotation curve have been found in previous studies on various non-Kerr metrics (see e.g., <cit.>). These features (including both the plateau and jumps in the rotation curve) were believed to be the signatures of chaos. However, as we will show later, only the steep part of the rotation curve (orange line) correspond to chaotic orbits, while the remaining parts correspond to non-chaotic orbits: either on-resonance non-chaotic orbits (the plateau part, blue line) or the near-resonance non-chaotic orbits (the mildly changing part, green line). To better understand the orbits in each category, we also plot the Poincaré map(r, p_r)_θ=π/2of three representative orbitsO_1/2/3withr_ ini=(7.2156, 7.21566875, 7.215675) Min Fig. <ref>. In the upper right/ middle left/ middle right panels, we show the Poincaré maps of these three orbitsO_1,2,3, whereO_1andO_2are similar simple closed curves, whileO_3consists of 3 disconnected pieces. In the lower left panel, we show a zoomed-in version of theO_2map aroundr=8 M. An interesting feature is that the curve turns back aroundr=8.15 Matτ≈4×10^7 M, i.e., the map(r, p_r)moves in the anti-clockwise direction before this time and in the clockwise direction afterwards. This turning back behavior is a key signature of transitional chaotic orbits, which allows orbits to switch from one branch (i.e., the “anti-clockwise" branch) to the other (i.e., the “clockwise" branch) without crossing the resonant islands. This transition is further discussed using the language of action-angle variables in Sec. <ref> (see Fig. <ref>). In the lower right panel, we show a similarly zoomed-in version of theO_3map aroundr=r_ ini, where the map is in fact an “island" instead of a simple curve. Though we have seen rich phenomena of near-resonance EMRI dynamics in the physical coordinates, no simple picture or unified description of these phenomena is available. In order to obtain a systematic and quantitative description of the underlying physics, working with action-angle variables is generally useful. 
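The rotation number defined above can be estimated from any sampled trajectory by the counting procedure just described; a small sketch (the array names are assumed) is given below.

```python
# Sketch: nu = <Omega^r>/<Omega^theta> estimated by counting radial turning
# points (dr/dtau = 0) and equator crossings (theta = pi/2), as described above.
# The arrays r and theta are assumed to be densely sampled along one trajectory.
import numpy as np

def rotation_number(r, theta):
    dr = np.diff(r)
    # radial turning points: sign changes of dr (pericentre + apocentre passages)
    n_turn = np.count_nonzero(np.sign(dr[1:]) * np.sign(dr[:-1]) < 0)
    # equator crossings: sign changes of cos(theta)
    c = np.cos(theta)
    n_eq = np.count_nonzero(np.sign(c[1:]) * np.sign(c[:-1]) < 0)
    # both counters register two events per full cycle, so their ratio is nu
    return n_turn / n_eq

# example (assumed trajectory arrays from any integrator):
# nu = rotation_number(sol.y[0], sol.y[1]); print(nu)   # ~2/3 near the resonance
```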
§.§ Evolution in action-angle variables (𝒥_α, q^α) and in resonant variables (Θ, Q) In Sec. <ref>, we have presented an (effective) Hamiltonian description of the resonant DOF in the resonance regime. In going from the full Hamiltonian to the effective Hamiltonian, all the non-resonant degrees of freedom have been removed by applying a NIT. This is valid because the non-resonant degrees of freedom act as fluctuations to the resonant dynamics, which evolves on the “slow-time":τ̃ =√(ϵ)τ. When the resonant motion is switched to the rotational regime, the interplay between resonant and non-resonant degrees of freedom gives rise to the transitional chaotic orbits that cannot be fully characterized by the effective Hamiltonian. Nevertheless, the Hamiltonian description offers a general mathematical framework to analyze the resonance phenomena. We shall transform the orbit information obtained in the previous section in the physical coordinates to the action-angle coordinates and numerically verify the formalism discussed in Sec. <ref>. The technical tools for making such transformations are explicitly given in Appendix <ref> and <ref>. To the best of our knowledge, this is also the first time that EMRI evolutions (in the perturbed Kerr spacetime without radiation reaction included) are shown with action-angle variables. We start by plotting the resonant angle and actions for the sequence of orbits shown in Fig. <ref> evolved using physical coordinates. In the left panel of Fig. <ref>, we show the evolution of the resonant angleQ(τ)and the conjugate momentumΘ(τ)of orbitO_1, whereQ(τ)monotonically increases, which is consistent withν < 2/3as Q̇ = -3 q̇̃̇^r+2 q̇̃̇^θ = -3⟨Ω^r|+⟩2⟨Ω^θ|=⟩2⟨Ω^θ|⟩(1-3/2ν) . The plot shows the resonant momentumΘobtained from𝒥̃_θand𝒥̃_r, the results are consistent with each other, as a direct verification for Eq. (<ref>). By interpolatingQ(τ), we numerically obtain the derivativeQ̇(τ)and showQ̇(Θ)in the right panel, which is almost a perfect straight line. As a consistency check, we can also infer the resonant angular velocityΔΩ(Θ)via the effective Hamiltonian. Writing the effective Hamiltonian as the kinetic energy and the potential energy H_ eff := H_ kin + H_ pot = ∫_0^ΘΔΩ(Θ) dΘ + ∑_k H_k𝐍 e^ikQ , it is straightforward to seeH_ eff=H_ pot(Θ=0), therefore we haveH_ kin=H_ pot(Θ=0)-H_ pot(Θ). We numerically fitH_ kin(Θ)withc_0+α_0Θ+β_0Θ^2, and obtain the linear relationΔΩ(Θ)=α_0+2β_0Θ, which perfectly matches the numerical results ofQ̇(Θ)as shown in the right panel of Fig. <ref>. Let us now look at the phase diagram for the resonant DOF. In the first panel of Fig. <ref>, we show the diagramJ_θ-Qof orbitO_1, which is quite noisy becauseJ_θcontains both resonant and non-resonant degrees of freedom. After the NIT, the non-resonant degrees of freedom are removed and we obtain the plots forJ̃_θ-Q, which clearly shows that the near-resonance orbitO_1is effectively a system of single DOF. In addition, we see that the orbitO_1is in the rotation regime because the resonant angleQcovers the full range of[0, 2π]. In the third panel, we show the kinetic partH_ kinof the effective Hamiltonian and a quadratic fitting to it. In the fourth panel, we show the numerical potential energyH_ pot, the fitted kinetic energyH_ kinand the total energyH_ eff=H_ kin+H_ pot, which is a constant as expected. Fig. <ref> is similar to Fig. <ref> except we consider the actionJ_rinstead ofJ_θ. 
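Before turning to the J_r case, the fitting step just described can be summarized in a few lines; the arrays Theta, H_kin, and Qdot below are assumed to have been extracted from the numerical evolution.

```python
# Sketch: fit H_kin(Theta) = c0 + alpha0*Theta + beta0*Theta**2 and check that
# dQ/dtau follows the linear relation alpha0 + 2*beta0*Theta, as described above.
import numpy as np

def fit_resonant_frequency(Theta, H_kin):
    beta0, alpha0, c0 = np.polyfit(Theta, H_kin, 2)   # highest power first
    return alpha0, beta0, c0

def check_linear_relation(Theta, Qdot, alpha0, beta0):
    model = alpha0 + 2.0 * beta0 * Theta
    return np.max(np.abs(Qdot - model))               # small if the relation holds

# usage (assumed arrays):
# alpha0, beta0, _ = fit_resonant_frequency(Theta, H_kin)
# print(alpha0, beta0, check_linear_relation(Theta, Qdot, alpha0, beta0))
```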
In the left panel, we barely see any correlation between the actionJ_rand the resonant angleQbecause the non-resonant degrees of freedom dominates the variations ofJ_r. The curveΘ(J̃_r)is recovered via the NIT, which reduces the variations by more than one order of magnitude (see Appendix <ref> and <ref> for the details regarding the implementation of the NIT), though dispersion remains due to the presence of relatively large numerical errors in the NIT procedure. As a result, we find a similar dispersion in the kinetic energyH_ kin(Θ)and consequently the conservation of the effective HamiltonianH_ eff(τ)is not as good as in theJ_θcase. In the following analyses, we will use the higher-precisionΘ(J̃_θ). To illustrate chaotic orbits, we perform a similar analysis for orbitO_2and show the results in Figs. <ref> and <ref>. In the left panel of Fig. <ref>, we see that this orbit consists of two branches, aQincreasing branch (the “counter-clockwise" branch in Fig. <ref>) and aQdecreasing branch (the “clockwise" branch in Fig. <ref>), and a transition occurs aroundτ=4×10^7 M. This transition is also marked by a jump in momentumΘ, which in turn induces an extra phase shift in the waveform as we will discuss in Sec. <ref>. As shown in the right panel,ΔΩ(Θ)on the two different branches follows the same linear relation, which indicates the existence of a symmetry between the two branches. This symmetry is revealed more explicitly in the second panel of Fig. <ref>, where theΘ-Qdiagram consists of two symmetric branches that are connected by a sharp transition aroundQ=2π(the bifurcation point indicated by the arrow). The behavior of orbits in the libration regime (orbits that are on the plateau of the rotation curve in Fig. <ref>) is similar to a simple harmonic oscillator, with the resonant angleQlimited to a finite range|Δ Q| < 2π(see Figs. <ref> and <ref> for orbitO_3as an example). The orbitO_3lies on the edge of the plateau of the rotation curve in Fig. <ref>, whoseΘ-Qphase diagram is quite similar to that of orbitO_2, with|Δ Q|smaller than but close to2π. As moving towards the center of the plateau from the left edge, we find the orbital phase diagram becomes more confined in bothΘandQdirections. For the orbit exactly lying in the center, bothΘandQturn out to be constants, and this point is the so-called stable point. The on-plateau orbits on the right-hand side (of the stable point) are symmetric to their counterparts on the left-hand side, in the sense that their orbital phase diagramsQ-Θare the same. Orbits belonging to different categories and the symmetry explained above could be summarized with the effective Hamiltonianℋ_ eff = α_0 Θ +β_0 Θ^2+H_ rescos Q. For a given set of parameters(α_0, β_0, H_ res), the orbits in theQ-Θspace are simply contours of constant effective energy and the orbital behavior (libration, transition, or rotation) is largely determined by the effective energy. As an example, we take(α_0, β_0, H_ res)= (1.00×10^-6, -0.04, 3.03×10^-12)(the orbitO_3parameters as shown in Figs. <ref> and <ref>), the critical effective energy turns out to beℋ_ crit = -α_0^2/(4β_0)-H_ res=2.94× 10^-12(c.f. Eq. <ref>). We show different contour levels in Fig. <ref>. The low-energy contourℋ_ eff = 0.5× 10^-12 < ℋ_ critis in the rotation regime (e.g., orbitO_1). In fact, this orbit consists of two symmetric and disconnected branches, which correspond to orbitO_1(Q̇ > 0) and its symmetric counterpart (Q̇ < 0), respectively. 
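The same classification can be performed programmatically from the effective energy alone; the sketch below uses the rounded parameter values quoted above, so it reproduces the quoted ℋ_crit only approximately.

```python
# Sketch: classify a (Q, Theta) contour of the effective Hamiltonian as
# libration or rotation from its effective energy, using the critical energy
# H_crit = -alpha0**2/(4*beta0) - H_res. The printed parameters are rounded in
# the text, so the numerical H_crit obtained here is only approximate.
import numpy as np

def classify_contour(E_eff, alpha0, beta0, Hres):
    E_crit = -alpha0**2 / (4.0 * beta0) - Hres
    return ("libration" if E_eff > E_crit else "rotation"), E_crit

alpha0, beta0, Hres = 1.00e-6, -0.04, 3.03e-12      # rounded values from the text
for E in (0.5e-12, 4.0e-12, 8.5e-12):               # contour levels of this example
    print(E, *classify_contour(E, alpha0, beta0, Hres))
```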
The high-energy contourℋ_ eff = 4× 10^-12 > ℋ_ critis in the libration regime, and this contour corresponds to a pair of orbits that are of the same effective energy while opposite rotation directions (e.g., orbitO_3and its symmetric counterpart). The contour of even higher energyℋ_ eff = 8.5× 10^-12corresponds to orbits that are closer to the stable point. The contour with critical energy partially captures the orbital properties in the transition regime (e.g.,O_2). The limitation is that there is no chaos in this one-degree -of-freedom description, because the number of degrees of freedom is equal to the number of conserved quantity (the effective energy). In reality, we find the layers of chaos (as previous observed in various spacetimes <cit.>) lying between the rotational and libration orbits, thanks to the contribution from non-resonant terms. § IMPACT FOR EMRI EVOLUTION The long-term impact of resonance relies on the jump of action variablesΔ𝒥_α, as discussed in <cit.>. First, let us neglect the effect of chaotic transitional orbits, so that Eq. (<ref>) can be used to compute the shift of the action variables across a resonance. IntroducingH_β=β_0 H_ resandG = ∑_α a_α G_α, we can rewrite Eq. (<ref>) in a more convenient form 1/2 ( d Q/d τ )^2+ H_βcos Q +q G Q =K , or d Q/d τ= ±√(2)√(K-H_βcos Q -q G Q) where the sign depends on the branch the orbit belongs to (e.g., see Fig. <ref>). This equation determines theτdependence of the resonant angleQ. The additional kicks on𝒥̃_α, according to Eq. (<ref>), are given by Δ𝒥̃_α = -N_α H_ res∫^∞_-∞sin Q(τ) dτ whereτ=-∞is assumed to be sufficiently before the resonance andτ=∞is assumed to be sufficiently after the resonance. The above equation immediately suggests that the discontinuities of𝒥̃_αacross the resonance are proportional toN_α, a phenomenon also found in <cit.>. This point is important for incorporating resonance effects into EMRI waveforms as it demonstrates nicely that only one free parameter is needed to search for these resonances instead of multiple free parameters. In the non-adiabatic limit whereH_β≪ q G, the solution ofQ(τ)can be approximated asQ≈ Q_0+q G τ^2/2, for whichτ=0is set at the point thatdQ/dτ =0. As a result, we have Δ𝒥̃_α = -N_α H_ res√(π/q G)( cos Q_0 + sin Q_0 ) which is consistent with <cit.>. Notice that in this limit the trajectory may temporarily cross the resonant islands in the phase space because the resonance capture condition derived for the parametric change of the effective Hamiltonian (see <cit.>) requires adiabatic evolution. In general, we can rewrite Eq. (<ref>) as Δ𝒥̃_α = -N_α H_ res∫^∞_-∞sin Q(τ) dQ/dQ/dτ = -√(2) N_α H_ res∫^Q^*_-∞ d Qsin Q /√(K-H_βcos Q -q G Q) = -√(2) N_αH_ res/√(q G) f(K/qG, H_β/qG) , where atQ=Q^*we assume thatdQ/dτ=0, so thatK=H_βcos Q^* +q G Q^*. This function is shown in Fig. <ref> withH_β/qGfixed to three different values, and in Fig. <ref>K/qGfixed to three different values. In the adiabatic limitH_β≫ q G, we can approximatef ∼𝒪(1)(H_β/q G)^-1/2, so that the overallΔ𝒥̃_αis proportional to(H_β)^1/2. With the chaotic transitional orbits considered, the orbit may jump from one branch to the other branch before reaching the turning point (d Q/dτ =0), as shown in Fig. <ref>. In this case, the trajectory makes the transition between branches along with a jump inΘ:ΔΘ_ chaos = Δ𝒥̃_α, chaos/N_α. The corresponding turningQ^*in Eq. 
(<ref>) should be replaced by K = H_β cos Q^c + q G Q^c + (β_0 ΔΘ_chaos)^2/8, so that the accumulated shift in 𝒥̃_α before and after the transition is Δ𝒥̃_α = -√(2) N_α H_res ∫^{Q^c}_{-∞} dQ sin Q/√(K - H_β cos Q - q G Q) + Δ𝒥̃_{α,chaos} = Δ𝒥̃^c_α + Δ𝒥̃_{α,chaos}. Because the non-resonant terms and the resonant terms are generated by the same h, we expect the width of the chaotic zone to be a fraction of the size of the resonant island, which is proportional to √(H_res). As a result, we expect that Δ𝒥̃_{α,chaos} ∝ √(H_res). The numerical value of ΔΘ_chaos in Fig. <ref> is approximately 10% of the size of the critical resonant island in the Θ direction (also approximately true for the jump in the 1/2-resonance case shown in Fig. <ref>), which is given by |α_0/β_0|. This number may not be universal across resonances, because we find that, even for the same system, transitional orbits evolved for longer tend to give rise to larger jumps. Nevertheless, it can serve as a rough estimate of the impact of chaos. The size of ΔΘ_chaos determines the upper limit Q^c in the evaluation of Δ𝒥^c_α. For numerical illustration purposes, in Fig. <ref> we have increased ΔΘ_chaos such that it is √(0.1) times |α_0/β_0|. We find that Δ𝒥^c_α follows a similar scaling law to Δ𝒥̃_α in Eq. (<ref>). As we decrease the magnitude of ΔΘ_chaos, we expect the coefficient of the scaling law to approach the ones shown in Fig. <ref>. Overall, the long-term phase shift due to resonance should be ΔΨ ∼ (Δ𝒥/𝒥)(1/q), where the 1/q factor comes from the 1/q scaling of the radiation-reaction timescale. Accordingly, we conclude that in the non-adiabatic regime the transient resonance crossing gives rise to ΔΨ ∼ 𝒪(H_res)/q^{3/2}. In the adiabatic regime, the chaotic transitional orbits generally give rise to a phase shift ΔΨ_chaos ∼ 𝒪(0.1) H_res^{1/2}/q, and the total induced phase shift, including the additional change of 𝒥_α before and after reaching the transitional orbits, should approximately scale as ΔΨ ∼ 𝒪(H_res^{1/2})/q. § DISCUSSION AND CONCLUSION An EMRI system may be influenced by environmental gravitational perturbations, which can come from nearby astrophysical objects, dense dark matter distributions, and/or exotic compact objects. Because of the large number of orbital cycles in band (10^4-10^5) for space-borne gravitational wave detectors, EMRIs will provide unprecedented opportunities to probe these environmental forces. To enable waveform building for upcoming gravitational wave experiments, it is necessary to understand the EMRI evolution and waveform on perturbed Kerr backgrounds. The work presented here represents one part of this program, in which resonance effects on the EMRI dynamics and their impact on the waveform are analyzed by introducing a resonance effective Hamiltonian. With this formalism, one can make a connection with the extensive literature on mean-motion resonances in planetary systems. Future studies along this direction will be interesting, especially for understanding the transitional chaotic orbits from the Hamiltonian point of view. On the other hand, for realistic EMRI evolution the system likely falls into the non-adiabatic regime for resonance crossing (except for relatively large h), as discussed in Sec. <ref>. Since the numerical investigation in Sec. <ref> has not included gravitational radiation reaction, it will also be interesting to perform numerical studies with radiation reaction included to verify the corresponding statements in Sec. <ref> and Sec. <ref>. The other part of the program, i.e.
understanding the EMRI evolution in the non-resonance regime of a perturbed Kerr background, will be discussed in a separate work. By combining the results in these studies, one should be able to build an EMRI waveform model (on top of the EMRI waveform model on a Kerr background) for a generic Kerr perturbationh. This method is not only important for future space-borne gravitational wave detection, but likely also useful for constructing waveforms to test modified gravity theories using comparable mass-ratio binaries in the band of ground-based gravitational wave detectors. Indeed, for waveforms consistent with General Relativity, there is literature <cit.> pointing out that the EMRI-inspired waveform, with an appropriate rescaling using the mass-ratio parameter, agrees surprisingly well with the waveform obtained from numerical relativity for comparable mass-ratio systems. As the EMRI method does not require the post-Newtonian or post-Minkowskian expansion, it will serve as a promising route for generating high-precision waveforms for comparable mass-ratio binaries. It is also natural to expect that a modified gravity waveform produced with the EMRI approach may also inherit similar advantages. This application may be potentially important as presently we have limited waveform models going beyond the post-Newtonian approximation, partially because of the well-posedness issue in many modified theories of gravity. In this work, we have assumed axisymmetric for the metric perturbationh. Ifhis non-axisymmetric, the analysis in Sec. <ref> and Sec. <ref> should still apply except that we should include more general resonances with nonzeroN_ϕ, similar to the tidal resonance considered in <cit.>. Because of the expanded space of relevant resonances, it becomes more likely that during the evolution of an EMRI system, multiple resonances will be important. If the EMRI crosses the resonances in a non-adiabatic manner, from Eq. (<ref>) we see that there is a phaseQ_0that is related to the initial phase of the orbit. It will be interesting to investigate whetherQ_0of later resonances can still be determined accurately, with the earlier resonance jumps influencing the orbit on the radiation reaction timescale. We thank Mohammed Khalil for helpful discussions. Z. P. and H. Y. are supported by the Natural Sciences and Engineering Research Council of Canada and in part by Perimeter Institute for Theoretical Physics. L. B. is grateful for the hospitality of Perimeter Institute where part of this work was carried out. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This research was also supported in part by the Simons Foundation through the Simons Foundation Emmy Noether Fellows Program at Perimeter Institute. L. B. also acknowledges financial support by the ANR PRoGRAM project, grant ANR-21-CE31-0003-001, and from the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement no. 101007855. § 1/2 RESONANCE In the main text, we have been focusing on orbits that are close to the2/3resonance, i.e.,ν≈ 2/3. In this appendix, we examine the orbits that are close to the1/2resonance, which share some common features with those2/3-resonance orbits, but also yield some differences. 
Similar to the 2/3-resonance case, we consider the perturbed Kerr spacetime with parameters (a, ζ) = (0.2, 0.002) and orbits with Hamiltonian, energy, and angular momentum (H, E, L) = (-0.5, 0.96, 3.5) and initial conditions (θ, p_r)_ini = (π/2, 0), r_ini ≈ 4.72 M. In the top left panel of Fig. <ref>, we show the rotation numbers of the orbits (labelled by r_ini) that are close to the 1/2 resonance. Different from the 2/3-resonance case, there is no plateau in the rotation curve, i.e., there is no orbit in the libration regime. The remaining near-resonance orbits can be classified into two categories: orbits in the rotation regime (blue line) and chaotic transitional orbits (orange line). In the remaining three panels, we show the Poincaré map (r, p_r)_{θ=π/2} of orbit O_2 and two zoomed-in versions. In the lower two panels, we see that the Poincaré map consists of two branches, a Q-increasing branch (the "counter-clockwise" branch in Fig. <ref>) and a Q-decreasing branch (the "clockwise" branch in Fig. <ref>), with the transition occurring around τ = 2×10^7 M. Similar to the 2/3-resonance case, this transition is also marked by a jump in the momentum Θ (see Figs. <ref> and <ref>). The major difference between the 2/3-resonance orbits considered in the main text and the 1/2-resonance orbits considered here is most easily understood from their Poincaré maps. The 2/3-resonance orbits cross one of the islands and the island center, which is a stable point, while the 1/2-resonance orbits avoid the islands by crossing the junction point between two islands, which is an unstable point. As a result, the 1/2-resonance orbits pass the resonance without being trapped on resonance, i.e., no orbit has ν = 1/2. If we consider more general initial conditions for distinct sets of orbits, such as (p_r)_ini ≠ 0, we generally observe that the orbits cross one of the islands, but without crossing the island center (the stable point) or the junction point between two islands (the unstable point). § 2/3 RESONANCE WITH STRONGER PERTURBATION In order to verify the √(ϵ) dependence of the size of the resonant regime and of the chaotic regime in the phase space, as suggested by the effective Hamiltonian formalism, we increase ζ from 0.002 in Fig. <ref> to ζ = 0.02 in Fig. <ref>. It is evident that the width of the plateau (proportional to the size of the resonant island) and the height of the segment near the plateau (proportional to the size of the chaotic regime) are both increased by approximately √(10). § MAPPING TO ACTION-ANGLE VARIABLES (Q, J) The necessary steps for mapping to action-angle variables (q, J) in the Kerr spacetime have been investigated in detail in Refs. <cit.>, and we closely follow their discussions in this work. With this mapping in hand, one can in principle evolve the equations of motion in terms of action-angle variables [Eq. (<ref>)]. But as we will see later, the mapping is mostly done numerically, which is slow and not sufficiently accurate in the long-term evolution for pinning down the small near-resonance features. Therefore, we choose to perform the evolution in physical coordinates (x, p), and then map them to action-angle variables (q, J). In the Kerr spacetime, the energy and the z-component of the angular momentum are E = -p_t and L = p_ϕ. The Carter constant is <cit.> C = p_θ^2 + a^2 cos^2θ (μ^2 - p_t^2) + cot^2θ p_ϕ^2, and the Hamiltonian is H = (1/2) g^μν p_μ p_ν = Δ/(2Σ) p_r^2 + 1/(2Σ) p_θ^2 + (p_ϕ + a sin^2θ p_t)^2/(2Σ sin^2θ) - [(r^2+a^2) p_t + a p_ϕ]^2/(2ΣΔ).
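For concreteness, the four constants of motion listed above can be evaluated at any phase-space point with a few lines of code (a sketch in geometric units; the function name and default values are illustrative).

```python
# Sketch: evaluate (H, E, L, C) at a Kerr phase-space point from the formulas
# quoted above; mu is the particle rest mass and sin(theta) != 0 is assumed.
import numpy as np

def kerr_constants(r, theta, p_t, p_r, p_theta, p_phi, M=1.0, a=0.2):
    Sigma = r**2 + a**2 * np.cos(theta)**2
    Delta = r**2 - 2.0 * M * r + a**2
    E, L = -p_t, p_phi
    H = (Delta / (2.0 * Sigma) * p_r**2
         + p_theta**2 / (2.0 * Sigma)
         + (p_phi + a * np.sin(theta)**2 * p_t)**2 / (2.0 * Sigma * np.sin(theta)**2)
         - ((r**2 + a**2) * p_t + a * p_phi)**2 / (2.0 * Sigma * Delta))
    mu2 = -2.0 * H                      # on shell, H = -mu^2/2
    C = (p_theta**2
         + a**2 * np.cos(theta)**2 * (mu2 - p_t**2)
         + (np.cos(theta) / np.sin(theta))**2 * p_phi**2)
    return H, E, L, C
```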
For mapping the physical coordinates to the action-angle variables, we need to invert the four equations above and write the momentump_μin terms of the integrals of motion as p_r = ±√(V_r(r))/Δ , p_θ = ±√(V_θ(θ)) , p_t = -E , p_ϕ = L , where the two potentials are V_r(r) = [(r^2+a^2)E-aL]^2-Δ[μ^2r^2+(L-aE)^2+C] , V_θ(θ) = C-[(μ^2-E^2)a^2+L^2/sin^2θ]cos^2θ . Given the integrals of motionP_α=(H, E, L, C), one can obtain the pericenter/apocenter separationsr_-/+as the two largest roots to equationV_r(r)=0, and obtain the minimum/maximum polar anglesθ_-/+as the roots to equation equationV_θ(θ)=0satisfying the constraintcos^2θ∈ [0, 1]. In terms of commonly used orbital parameters, eccentricityeand semi-latus rectump,r_±=p/(1∓ e). The actions are defined as J_r = 1/2π∮ p_r dr , J_θ =1/2π∮ p_θ dθ , J_t = 1/2π∫_0^2π p_t dt = -E , J_ϕ =1/2π∮ p_ϕ dϕ = L , and the numerical integration above is straightforward with the momenta defined in Eq. (<ref>). To obtain the angle variables, we need the canonical transformation from the physical coordinates(x^μ, p_ν)to(q^α, J_β)associated with a general solution to the Hamilton-Jacobi equation H(x^μ, ∂𝒮/∂ x^μ) + ∂𝒮/∂τ = 0 . Since the Hamiltonian does not involve timeτexplicitly, the Hamilton's principle function𝒮can be written in the form 𝒮 = γτ + 𝒲(x^μ, P_α) , whereγ=μ^2/2and𝒲(x^μ, P_α)is called Hamilton's characteristic function. As shown by Carter <cit.>, 𝒲(x^μ, P_α) = - Et +Lϕ±𝒲_r(r) ±𝒲_θ(θ) , where 𝒲_r(r) = ∫^r dr √(V_r)/Δ , 𝒲_θ(θ) = ∫^θ dθ√(V_θ) . Taking𝒲(x^μ, J_β)as a generating function that generates a canonical transform from the physical coordinates(x^μ, p_ν)to(q^α, J_β)(Chapter 9 of <cit.>), we have p_ν =∂𝒲/∂ x^ν(x^μ, J_β) , q^α = ∂𝒲/∂ J_α(x^μ, J_β) = ∂𝒲/∂ P_β∂ P_β/∂ J_α . There is some sign uncertainty in the definition of𝒲(x^μ, P_α)[Eq. (<ref>)], which is fixed by the first line in Eq. (<ref>), i.e., we fix the sign uncertainty with 𝒲 = - Et +Lϕ + sgn(p_r) 𝒲_r(r) + sgn (p_θ) 𝒲_θ(θ) , in calculatingq^αfrom the second line in Eq. (<ref>). Then the calculation ofq^αconsists of three major components:∂ P_β/∂ J_αand∂𝒲_r,θ/∂ P_α. §.§ ∂ J_β/∂ P_α and ∂ P_β/∂ J_α The analytic expressions of the partial derivatives∂ J_β/∂ P_αhad been detailed in <cit.>, and we repeat the non-trivial components for convenience here: ∂ J_r/∂ H = Y(r_-,r_+)/π , ∂ J_r/∂ E = W(r_-,r_+)/π , ∂ J_r/∂ L = -Z(r_-,r_+)/π , ∂ J_r/∂ C = -X(r_-,r_+)/2π , and ∂ J_θ/∂ H = -2√(z_+)a^2/πβ[K(k)-E(k)] , ∂ J_θ/∂ E = -2√(z_+)Ea^2/πβ[K(k)-E(k)] , ∂ J_θ/∂ L = -2L/πβ√(z_+)[K(k)-Π(z_-,k)] , ∂ J_θ/∂ C = -1/πβ√(z_+)K(k) , whereW, X, Y, Zare defined as <cit.> W(r_-,r_+) = ∫_r_-^r_+r^2E(r^2+a^2)-2Mra(L-aE)/Δ√(V_r) dr , X(r_-,r_+) = ∫_r_-^r_+1/√(V_r) dr , Y(r_-,r_+) = ∫_r_-^r_+r^2/√(V_r) dr , Z(r_-,r_+) = ∫_r_-^r_+r[Lr-2M(L-aE)]/Δ√(V_r) dr . HereK(k)is the complete elliptic integral of the first kind,E(k)is the complete elliptic integral of the second kind , andΠ(n,k)is the Legendre elliptic integral of the third kind: K(k) = ∫_0^π/2dψ/√(1-k^2sin^2ψ) , E(k) = ∫_0^π/2 dψ√(1-k^2sin^2ψ) , Π(n,k) = ∫_0^π/2dψ/(1-nsin^2ψ)√(1-k^2sin^2ψ) dψ , whereβ=a√(μ^2-E^2),k=√(z_-/z_+), withz=cos^2θandz_±are the two roots toV_θ(z)=0with0< z_-< 1 <z_+. 
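The actions J_r and J_θ defined above can also be evaluated by direct numerical quadrature of the quoted potentials, without the elliptic-integral machinery; the following sketch (not optimized, μ = 1 by default) recovers the turning points r_∓ and θ_- along the way.

```python
# Sketch: turning points and actions from V_r, V_theta by numerical quadrature.
import numpy as np
from scipy.integrate import quad

def turning_points_and_actions(E, L, C, M=1.0, a=0.2, mu=1.0):
    # V_r(r) as an exact polynomial: [(r^2+a^2)E - aL]^2 - Delta*[mu^2 r^2 + (L-aE)^2 + C]
    P = np.poly1d([E, 0.0, a**2 * E - a * L])
    Delta = np.poly1d([1.0, -2.0 * M, a**2])
    Vr = P * P - Delta * np.poly1d([mu**2, 0.0, (L - a * E)**2 + C])
    rts = Vr.roots
    rts = np.sort(rts[np.abs(rts.imag) < 1e-8].real)
    r_min, r_max = rts[-2], rts[-1]                     # pericentre r_-, apocentre r_+

    # z_-+ are the roots of beta2*z^2 - (C + L^2 + beta2)*z + C = 0, with z = cos^2(theta)
    beta2 = a**2 * (mu**2 - E**2)
    z_minus = np.min(np.roots([beta2, -(C + L**2 + beta2), C]))
    theta_minus = np.arccos(np.sqrt(z_minus))

    def p_r(r):                                         # sqrt(V_r)/Delta
        return np.sqrt(max(Vr(r), 0.0)) / (r**2 - 2.0 * M * r + a**2)

    def p_theta(th):                                    # sqrt(V_theta)
        Vth = C - ((mu**2 - E**2) * a**2 + L**2 / np.sin(th)**2) * np.cos(th)**2
        return np.sqrt(max(Vth, 0.0))

    J_r = quad(p_r, r_min, r_max, limit=200)[0] / np.pi
    J_theta = 2.0 * quad(p_theta, theta_minus, np.pi / 2.0, limit=200)[0] / np.pi
    return (r_min, r_max, theta_minus), (J_r, J_theta)

# example with the orbital constants used in this work (a = 0.2, mu = 1):
# print(turning_points_and_actions(0.96, 3.5, 1.552))
```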
With∂ J_β/∂ P_α, its inverse matrix∂ P_β/∂ J_αcan be obtained either numerically or analytically, where the analytic formula of the angular frequencies Ω^α := ∂ H/∂ J_α have been derived in <cit.> and are Ω^t = K(k)W(r_-,r_+)+a^2z_+E[K(k)-E(k)]X(r_-,r_+)/K(k)Y(r_-,r_+)+a^2z_+[K(k)-E(k)]X(r_-,r_+) , Ω^r = π K(k)/K(k)Y(r_-,r_+)+a^2z_+[K(k)-E(k)]X(r_-,r_+) , Ω^θ = πβ√(z_+)X(r_-,r_+)/2/K(k)Y(r_-,r_+)+a^2z_+[K(k)-E(k)]X(r_-,r_+) , Ω^ϕ = K(k)Z(r_-,r_+)+L[Π(z_-,k)-K(k)]X(r_-,r_+)/K(k)Y(r_-,r_+)+a^2z_+[K(k)-E(k)]X(r_-,r_+) . §.§ ∂𝒲_r/∂ P_α and ∂𝒲_θ/∂ P_α Comparing the definition ofJ_rwith that of𝒲_r, it is evident that∂𝒲_r/∂ P_αis similar to∂ J_r/∂ P_α: ∂𝒲_r/∂ H = Y(r_-,r)/π , ∂𝒲_r/∂ E = W(r_-,r)/π , ∂𝒲_r/∂ L = -Z(r_-,r)/π , ∂𝒲_r/∂ C = -X(r_-,r)/2π . In a similar way, comparing the definition ofJ_θwith that of𝒲_θ, we have ∂𝒲_θ/∂ H = -√(z_+)a^2/πβ[K(ϕ, k)-E(ϕ, k)] , ∂𝒲_θ/∂ E = -√(z_+)Ea^2/πβ[K(ϕ, k)-E(ϕ, k)] , ∂𝒲_θ/∂ L = -L/πβ√(z_+)[K(ϕ, k)-Π(ϕ,z_-,k)] , ∂𝒲_θ/∂ C = -1/2πβ√(z_+)K(ϕ, k) , where K(ϕ, k) = ∫_ϕ^π/2dψ/√(1-k^2sin^2ψ) , E(ϕ, k) = ∫_ϕ^π/2 dψ√(1-k^2sin^2ψ) , Π(ϕ, n,k) = ∫_ϕ^π/2dψ/(1-nsin^2ψ)√(1-k^2sin^2ψ) , andϕ = ϕ(θ)= arcsin(cos(θ)/√(z_-)), i.e.,ϕ(θ=π/2)=0andϕ(θ=θ_-)=π/2. Note that the lower integration limits in𝒲_r,θ[Eq. (<ref>)] are undefined, as a result, the mappingq^α(x^μ, p_ν)can only be determined up to some constant for a givenP_βor equivalentlyJ_α. In this work, we have fixed the lower integration limits tor_-andθ_-, respectively, i.e., we have fixed the mapping freedom by choosingq^r(r=r_-)=0andq^θ(θ=θ_-)=0. With all the equations summarized above, we can numerically obtain the mappingq^α(x^μ, p_ν)andJ_α(x^μ, p_ν). § NEAR IDENTITY TRANSFORMATION (Q, J)→ (Q̃, J̃) In our 2-DOF case, the Near Identity Transformation [Eq. (<ref>)] is formulated as q̃^α = q^α+ ϵ∑_n_r, n_θ∈ Ri/n_r Ω^r+n_θΩ^θ∂ H_n_r, n_θ/∂J_α e^i (n_r q^r+n_θ q^θ) , J̃_α = J_α +ϵ∑_n_r, n_θ∈ Rn_α/n_r Ω^r+n_θΩ^θ H_n_r, n_θe^i (n_r q^r+n_θ q^θ) , where{n_r, n_θ}is defined as the set of all non-resonant integers. The Near Identity Transformation involves operations on different Fourier componentsH_n_r, n_θof the perturbed HamiltonianH_ int(q^α, J_β)and its derivatives∂ H_n_r, n_θ/∂ J_α. As a first step, we need to write the perturbed Hamiltonian in terms of action-angle variables H_ int(x^μ, p_ν)=1/2 h^μν(x^A) p_μ p_ν→ H_ int(q^α, J_β) , i.e., we need to determine the inverse mappingx^A(q^α; J_β)andp_A(q^α; J_β), whereA∈{r, θ}. Making use of the symplectic relations between two sets of canonical variables and the chain rule, we obtain ∂ x^μ/∂ q^α = ∂ J_α/∂ p_μ = ∂ J_α/∂ P_β∂ P_β/∂ p_μ , ∂ p_μ/∂ q^α = -∂ J_α/∂ x^μ =-∂ J_α/∂ P_β∂ P_β/∂ x^μ . The above equations show thatr, θandp_r,θexplicitly depend onq^r,θonly, e.g., ∂ r/∂ q^r = ∂ J_r/∂ P_β∂ P_β/∂ p_r , ∂ r/∂ q^θ = ∂ J_θ/∂ P_β∂ P_β/∂ p_r and similar forθandp_r,θ. Note that the derivative∂ P_β/∂ p_rcalculation is somewhat subtle, e.g.,∂ C/∂ p_rwould be zero if we treat the factorμ^2in Eq. (<ref>) as a constant. In fact, we should replaceμ^2with-2H(x,p), i.e.,∂ C/∂ p_r =a^2cos^2θ× (-2∂ H(x,p)/∂ p_r) . Given initial conditions(r, θ, p_r, p_θ)|_(q^r, q^θ)=(0,0), one can evolve the above equations and numerically obtain(r, θ, p_r, p_θ)for any(q^r, q^θ)∈ [0,2π]×[0, 2π]. As a check of the accuracy of the numerical algorithm used in solving Eqs. (<ref>), one can use the fact that(r, θ, p_r, p_θ)are periodic functions of(q^r, q^θ)with period2π. 
Consistent with the mapping from(x, p)to(q, J)explained in the previous section, where we setq^r(r=r_-)=0andq^θ(θ=θ_-)=0, we choose the following initial conditions (r, θ, p_r, p_θ)|_(q^r, q^θ)=(0,0) = (r_-, θ_-, 0, 0) . As an example, we consider a Kerr BH with spina=0.2and an orbit withP_α = (H, E, L, C) = (-0.5, 0.96, 3.5, 1.552), i.e., eccentricitye=0.364, semi-latus rectump=9.821, andcos^2(θ_-)=0.112. The inverse mapping is shown in Fig. <ref>. With the inverse mapping, it is straightforward to numerically obtain the perturbed HamiltonianH_ int(q^r, q^θ;J_α), which can be decomposed as H_ int(q^r, q^θ;J_α) = ∑_n_r, n_θ H_n_r, n_θ(J_α) e^i(n_r q^r+ n_θ q^θ) . Its derivatives can be numerically calculated as ∂ H_n_r, n_θ/∂ J_α= ∂ H_n_r, n_θ/∂ P_β∂ P_β/∂ J_α , when no analytic expression of∂ H_n_r, n_θ/∂ P_βis available.
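In practice, once H_int has been tabulated on a uniform (q^r, q^θ) grid via this inverse mapping, the harmonic amplitudes H_{n_r, n_θ} follow from a two-dimensional FFT; a short sketch is given below (the grid array H_grid is assumed to be given).

```python
# Sketch: extract the Fourier components H_{n_r, n_theta} of the perturbed
# Hamiltonian from samples on a uniform (q^r, q^theta) grid in [0, 2*pi)^2.
import numpy as np

def harmonic_amplitudes(H_grid):
    """H_grid[j, k] = H_int(q^r_j, q^theta_k) on uniform grids of size (Nr, Nth)."""
    Nr, Nth = H_grid.shape
    coeffs = np.fft.fft2(H_grid) / (Nr * Nth)        # H_{n_r, n_theta}
    n_r = np.fft.fftfreq(Nr, d=1.0 / Nr).astype(int)
    n_th = np.fft.fftfreq(Nth, d=1.0 / Nth).astype(int)
    return n_r, n_th, coeffs

# e.g. the resonant (n_r, n_theta) = (-3, 2) amplitude that drives the 2/3 resonance:
# n_r, n_th, c = harmonic_amplitudes(H_grid)
# H_res = c[np.where(n_r == -3)[0][0], np.where(n_th == 2)[0][0]]
```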
http://arxiv.org/abs/2306.07845v1
20230613152344
Adversarial Capsule Networks for Romanian Satire Detection and Sentiment Analysis
[ "Sebastian-Vasile Echim", "Răzvan-Alexandru Smădu", "Andrei-Marius Avram", "Dumitru-Clementin Cercel", "Florin Pop" ]
cs.CL
[ "cs.CL" ]
Faculty of Automatic Control and Computers, University Politehnica of Bucharest National Institute for Research and Development in Informatics - ICI Bucharest, Romania [email protected] {razvan.smadu,andrei_marius.avram}@stud.acs.upb.ro {dumitru.cercel,florin.pop}@upb.ro Adversarial Capsule Networks for Romanian Satire Detection and Sentiment Analysis Sebastian-Vasile Echim1 Răzvan-Alexandru Smădu1 Andrei-Marius Avram1 Dumitru-Clementin Cercel1 Florin Pop1,2 July 31, 2023 ================================================================================================================= Satire detection and sentiment analysis are intensively explored natural language processing (NLP) tasks that address the identification of a satirical tone in texts and the extraction of sentiments with respect to their targets. In languages with fewer research resources, an alternative is to produce artificial examples based on character-level adversarial processes to overcome dataset size limitations. Such samples are proven to act as a regularization method, thus improving the robustness of models. In this work, we improve the well-known NLP models (i.e., Convolutional Neural Networks, Long Short-Term Memory (LSTM), Bidirectional LSTM, Gated Recurrent Units (GRUs), and Bidirectional GRUs) with adversarial training and capsule networks. The fine-tuned models are used for satire detection and sentiment analysis tasks in the Romanian language. The proposed framework outperforms the existing methods for the two tasks, achieving up to 99.08% accuracy, thus confirming the improvements added by the capsule layers and the adversarial training to NLP approaches. § INTRODUCTION Satirical news is a type of entertainment that employs satire to criticize and ridicule, in a humorous way, key figures in society, socio-political issues, or notable events <cit.>. Although it does not aim to misinform, it mimics the style of regular news. Therefore, it has a sizeable deceptive potential, driven by the current increase in social media consumption and the higher rates of distrust in official news streams <cit.>. Furthermore, sentiment analysis is regarded as a successful task for determining the opinions and feelings of people, especially in online shops, where customer feedback analysis can lead to better customer service <cit.>. Limited resources in languages such as Romanian make it challenging to develop large-scale machine learning systems, since the largest available datasets contain at most tens of thousands of examples <cit.>. Therefore, various techniques should be proposed and investigated to address these challenges on such datasets. Adversarial training is an effective defense strategy that intrinsically increases the robustness and generalization of models. Introduced by Szegedy et al. <cit.> and analyzed by Goodfellow et al. <cit.>, adversarial examples are augmented data points generated by applying a small perturbation to the input samples. Adversarial training was initially employed in computer vision, where input images were altered with a small perturbation <cit.>. More recently, it gained popularity in NLP; since the text input is a discrete signal, the perturbation is applied to the word embeddings in a continuous space <cit.>. The application of adversarial training in our experiments is motivated by the potential to improve the robustness and generalization of models with limited training resources.
This paper aims to introduce robust high-performing networks employing adversarial training and capsule layers <cit.> for satire detection in a Romanian corpus of news articles <cit.> and sentiment analysis for a Romanian dataset <cit.>. Our experiments include training models suitable for NLP tasks as follows: Convolutional Neural Networks (CNNs) <cit.>, Gated Recurrent Units (GRUs) <cit.>, Bidirectional GRUs (BiGRUs), CNN-BiGRU, Long Short-Term Memory (LSTM) <cit.>, Bidirectional LSTM (BiLSTM), and CNN-BiLSTM. Starting from Zhao et al. <cit.>, we compare the networks against their adversarial capsule flavors. Next, the best-performing network is subjected to an in-depth analysis concerning the impact on the performance of the capsule model and the training with adversarial examples. Thus, we test the effect of capsule hyperparameters varying the number of primary and condensed capsules <cit.>. Also, we assess the performance of our model employing Romanian GPT-2 (RoGPT-2) <cit.> for data augmentation up to 10,000 text continuation examples. Finally, we discuss several misclassified test inputs for the sentiment analysis task. The main contributions in this work are as follows: (i) we thoroughly experiment with various configurations to assess the performances of the investigated approaches, namely adversarial augmentations and capsule layers; (ii) we show that the best-performing model uses BiGRU with capsule networks, while the most improvements were seen when incorporating RoGPT-2-based augmentations; (iii) we investigate the effects of analyzed components through t-SNE plots <cit.> and ablation studies; and (iv) we achieve state-of-the-art results on the two Romanian datasets. § RELATED WORK §.§ Capsule Networks in NLP Firstly presented by Sabour et al. <cit.>, the capsule neural networks are machine learning systems that model hierarchical relationships regarding object properties (such as pose, size, or texture) in an attempt to resemble the biological structure of neurons. Among other limitations, capsule networks are addressing the max pooling problem of the CNNs, which allows for translation invariance, making them vulnerable to adversarial attacks <cit.>. While it has been demonstrated that capsule networks are successful in image classification <cit.>, there is also a general preference for exploring their potential in NLP tasks, especially in text classification. Several works <cit.> took the lead in this topic, showing that using different approaches, such as static and dynamic routing, the capsule models provided competitive results on popular benchmarks. Several studies were performed in topic classification and sentiment analysis using capsule networks. Srivastava et al. <cit.> addressed the identification of aggression and other activities, such as hate speech and trolling, using a model based on the dynamic routing algorithm <cit.> involving LSTM as a feature extractor, two capsule layers (namely, a primary capsule layer and a convolutional capsule layer), and finally, the focal loss <cit.> to handle the class imbalance. The resulting model outperformed several robust baseline algorithms in terms of accuracy; however, a more complex data preprocessing was expected to improve the results further. For the sentiment analysis task, Zhang et al. <cit.> proposed CapsuleDAR, a capsule model successfully combined with the domain adaptation technique via correlation alignment <cit.> and semantic rules. The model architecture consisted of a base and a rule network. 
The base network employed a capsule network for sentiment prediction, consisting of several layers: embedding, convolutional, capsule, and classification. The rule network involved a rule capsule layer before the classification layer. Extensive experiments were conducted on review datasets from four product domains, which showed that the model achieved state-of-the-art results. Additionally, their ablation study showed that the accuracy decreased sharply when the capsule layers were removed. Su et al. <cit.> tackled limitations of Bidirectional Encoder Representations from Transformers (BERT) <cit.> and XLNet <cit.>, such as local context awareness constraints, by incorporating capsule networks. Their model considered an XLNet layer with 12 Transformer-XL blocks on top of which the capsule layer extracted space- and hierarchy-related features from the text sequence. Experiments illustrated that capsule layers provided improved results compared with XLNet, BERT, and other classical feature-based approaches. Moreover, Saha et al. <cit.> introduced a speech act classifier for microblog text posts based on capsule layers on top of BERT. The model took advantage of the joint optimization features of the BERT embeddings and the capsule layers to learn cumulative features related to speech acts. The proposed model outperformed the baseline models and showed the ability to understand subtle differences among tweets. §.§ Romanian NLP Tasks In recent years, several datasets have emerged aiming to improve the performance of the learning algorithms on Romanian NLP tasks. Apart from the two datasets used in this work, researchers have also introduced the Romanian Named Entity Corpus (RONEC) <cit.> for named entity recognition[A new version of RONEC is available at <https://github.com/dumitrescustefan/ronec>], the Moldavian and Romanian Dialectal Corpus (MOROCO) <cit.> for dialect and topic classification, the Legal Named Entity Recognition corpus (LegalNERo) <cit.> for legal named entity recognition, and the Romanian Semantic Textual Similarity dataset (RoSTS)[<https://github.com/dumitrescustefan/RO-STS>] for finding the semantic similarity between two sentences. Lately, the language model space for Romanian was also improved with the introduction of Romanian BERT (BERT-ro) <cit.>, RoGPT-2, ALR-BERT <cit.>, and DistilMulti-BERT <cit.>. In addition, all the results for these systems have been centralized in the Romanian Language Leaderboard (LiRo) <cit.>, a leaderboard similar to the General Language Understanding Evaluation (GLUE) benchmark <cit.> that tracks over ten Romanian NLP tasks. § DATASETS In this work, we rely on two of the most recent Romanian language text datasets: a corpus of news articles, henceforth called SaRoCo <cit.>, and one composed of positive and negative reviews crawled from a Romanian website, henceforth called LaRoSeDa <cit.>. §.§ Satirical News SaRoCo is one of the most comprehensive public corpora for satirical news detection, eclipsed only by an English corpus <cit.> with 185,029 news articles and a German one <cit.> with 329,862 news articles. SaRoCo includes 55,608 samples, of which 27,628 are satirical and 27,980 are non-satirical (or regular). Each sample consists of a title, a body, and a label. On average, an entire news article has 515.24 tokens for the body and 24.97 tokens for the title. The average number of sentences and words per sentence are 17 and 305, respectively. 
The labeling process is automated, as the news source only publishes satirical or regular content. §.§ Product Reviews LaRoSeDa is one of the largest corpora for sentiment analysis in the Romanian language. It was created based on the observation that the freely available Romanian language datasets were significantly reduced in size. This dataset totals 15,000 online store product reviews, either positive or negative, for which the ratings were also collected for labeling purposes. Thus, assuming that the ratings might reflect the polarity of the text, each review rated with one or two stars was considered negative. In contrast, the four or five-star labels were considered positive. The labeling process resulted in 7,500 positive reviews (235,474 words) and 7,500 negative reviews (304,813 words). The average number of sentences and words per review is 4 and 36, respectively. § METHODOLOGY The generic adversarial capsule network we employ is presented in Figure <ref>. It consists of a sub-module that can represent any widely-used NLP model, followed by capsule layers. Concretely, we use primary capsules and capsule flattening layers to facilitate the projection into condensed capsules passed as input for a routing mechanism to obtain the class probabilities. To increase robustness, we feed regular and adversarial samples into the model. In what follows, we detail the employed components. Word Embeddings. Each word is associated with a fixed-length numerical vector, allowing us to express semantic and syntactic relations, such as context, synonymy, and antonymy. Depending on the model, the embedding representation has various sizes. To use a continuous representation of the input data, we employ two different types of embeddings: BERT- and non-BERT-based. On the RoBERT model <cit.>, we rely on embeddings delivered by the model with a dimension E_d=768, whereas, for the non-BERT models, we abide by Onose et al. <cit.> in terms of distributed word representations and choose Contemporary Romanian Language (CoRoLa) <cit.> with an embedding dimension E_d=300, Nordic Language Processing Laboratory (NLPL) <cit.>, having the size E_d=100, and Common Crawl (CC) <cit.> with E_d=300. Adversarial Examples. To increase the robustness of our networks, we create adversarial examples by replacing characters in words. Using the letters of the Romanian alphabet, we randomly substitute one character per word, depending on the sentence size: one replacement for less than five words per sentence, two replacements for 5 to 20 words per sentence, and three replacements for more than 20 words per sentence. Primary Capsule Layer. This layer transforms the feature maps obtained by passing the input through the sub-module into groups of neurons to represent each element in the current layer, enabling the ability to preserve more information. By using 1 × 1 filters, we determine the capsule p_i from the projection p_ij of the feature maps <cit.>: p_i = squash(p_i1⊕ p_i2⊕···⊕ p_id) ∈ℝ^d where d is the primary capsule dimension, ⊕ is the concatenation operator, and squash(·) adds non-linearity in the model: squash(x) = x^2 /1 + x^2 x/x Compression Layer. Because it requires extensive computational resources in the routing process (i.e., the fully connected part of the capsule framework), we need to reduce the number of primary capsules. We follow the approach proposed by Zhao et al. <cit.>, which uses capsule compression to determine the input of the routing layer u_j. 
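As an illustration of the two equations above, a minimal PyTorch sketch of the squash non-linearity and the primary-capsule construction is given below; the layer sizes, tensor layout, and class name are our own assumptions and do not reproduce the exact implementation used in the experiments.

```python
# Minimal PyTorch sketch (illustrative shapes and names, not the released code):
# squash non-linearity and primary capsules built from 1x1 projections.
import torch
import torch.nn as nn

def squash(x, dim=-1, eps=1e-8):
    # squash(x) = (|x|^2 / (1 + |x|^2)) * (x / |x|)
    norm2 = (x ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * x / torch.sqrt(norm2 + eps)

class PrimaryCapsules(nn.Module):
    """Projects sub-module feature maps into n_pc capsule groups of dimension d
    with 1x1 convolutions, then applies squash along the capsule dimension."""
    def __init__(self, in_channels, n_pc=8, d=16):
        super().__init__()
        self.n_pc, self.d = n_pc, d
        self.proj = nn.Conv1d(in_channels, n_pc * d, kernel_size=1)

    def forward(self, feats):            # feats: (batch, channels, length)
        p = self.proj(feats)             # (batch, n_pc * d, length)
        p = p.view(p.size(0), self.n_pc, self.d, -1).permute(0, 3, 1, 2)
        return squash(p)                 # capsule vectors of dimension d

# the condensed capsules u_hat_j of the compression step described next can then
# be formed as learned weighted sums over these primary capsules before routing.
```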
Each condensed capsule û_j represents a weighted sum over all the primary capsules: û_j = ∑_i b_i p_i ∈ℝ^d Routing Layer. It conveys the transition layer between the condensed capsules to the representation layer. It is denoted by a routing method to overcome the loss of information determined by a usual pooling method. In our capsule framework, we choose Dynamic Routing with three iterations <cit.>. Representation Layer. In the binary classification tasks, the last slice of our generic architecture is represented by the probability of a text input being satirical or regular for SaRoCo and positive or negative sentiment for LaRoSeDa. § EXPERIMENTAL SETUP §.§ Model Parameters Firstly, we use CoRoLa, CC featuring 300-dimensional, and NLPL with 100-dimensional state space vectors for reconstruction at the embeddings level. We choose n-gram kernels with three sizes (i.e., 3, 4, and 5) and 300 filters each for the CNN sub-module. Also, for the Capsule layers, we use N_pc=8 primary capsules and N_cc=128 condensed capsules, which we fully connect through Dynamic Routing and obtain N_t lists with N_cls elements. For each element in the list, the argument of the maximum value represents the predicted label, where “1" is a satirical text or a positive review, whereas “0" is a non-satirical text or a negative review. Secondly, for the GRU and LSTM sub-modules, we employ one layer and a hidden state dimension of 300 for both unidirectional and bidirectional versions. Finally, for the RoBERT model, we choose the base version of the Transformer with vector dimensions of 768, followed by a fully connected layer with the size of 64, tanh activation function, and a fully connected layer with N_cls output neurons. §.§ Training Parameters The number of texts chosen from SaRoCo is N_t=30,000 (15,000 satirical and 15,000 non-satirical) with a maximum N_s=5 sentences per document and N_w=60 words per sentence. For LaRoSeDa, we use 6,810 positive and 6,810 negative reviews for training, with N_s=3 sentences per document and N_w=60 words per sentence. The optimizer is Adam <cit.>, and the loss function is binary cross-entropy. We set the learning rate to 5e-5 with linear decay and train for 20 epochs. The batch size is 32, and the train/validation/test split is 70%/20%/10%. § RESULTS This section presents the performance analysis of our models from quantitative and qualitative perspectives, as well as a comparison with previous works for the chosen datasets. Initial Results. Table <ref> shows our results on the SaRoCo and LaRoSeDa datasets. The experiments with varying embeddings other than RoBERT (i.e., CC, CoRoLa, and NLPL) show that NLPL determines better performance overall. This was unexpected because CoRoLa covers over one billion Romanian tokens, while CC and NLPL contain considerably fewer tokens. For the SaRoCo dataset, the best model on the CC embeddings uses the BiGRU sub-module, achieving a 95.80% test accuracy. For the CoRoLa corpus, the GRU and BiGRU sub-modules perform equally, resulting in a 95.77% test accuracy. Also, the best NLPL embedding model considers the BiGRU sub-module, scoring a 96.15% test accuracy. On the LaRoSeDa dataset, we find the best model obtaining a 96.06% test accuracy based on GRU with NLPL embeddings. Moreover, training on the RoBERT embeddings brings the highest performance when combined with the BiGRU sub-module, achieving a test accuracy of 98.32% on SaRoCo and 98.60% on LaRoSeDa. The score differences between our results on the two datasets are less than 0.5%. 
Therefore, a performance difference is expected due to the more considerable proportion of data for SaRoCo. Thus, there is no concrete insight into whether the satire detection task is more complex than the sentiment analysis one, especially in the binary classification setup. Still, since the training set size for LaRoSeDa is considerably smaller than that of the SaRoCo one, the slight performance difference shows polarization support on sentiment analysis. We further assess the feature representation quality for each sub-module using the two-dimensional t-SNE visualisations upon the best-performing training results. Figure <ref> shows different clustering representations in most cases. For the SaRoCo dataset, the best delimitation is observed on the BiGRU sub-module, which is validated by the best performance achieved for the NLPL embeddings as shown in Table <ref>. A similar effect applies to the BiGRU sub-module trained and evaluated on LaRoSeDa. Considering these results, the next set of experiments is performed based on the higher performance achieved with and without BERT embeddings, namely, the BiGRU sub-module with RoBERT and NLPL embeddings, respectively. Comparison to Existing Methods. The results of Rogoz et al. <cit.> on the SaRoCo dataset show a more than 25% gain for our models compared to the BERT-ro approach, while our models outperform the character-level CNN by more than 29%. Human performance is a notable figure in deciding whether a selection of 200 news articles extracted from the dataset is satirical. Rogoz et al. <cit.> explored the idea, involving ten human annotators and indicated that the human performance is at 87.35% accuracy. Our approach surpasses this result by more than 11%. In addition, the results shown by Tache et al. <cit.> on the LaRoSeDa dataset prove the competitive performance of our proposed approach. Thus, our results are 7-8% higher than their best model, HISK+BOWE-BERT+SOMs, which comprises histogram intersection string kernels, bag-of-words with BERT embeddings, and self-organizing maps. Capsule Hyperparameter Variation. Figure <ref> depicts the hyperparameters of the capsule layers of our generic network, represented by N_pc (i.e., the number of primary capsules) and N_cc (i.e., the number of condensed capsules). We test the impact of these hyperparameters on the BiGRU sub-module with NLPL embeddings. We present the average for three runs per experiment. The chosen values for the hyperparameters are N_pc={2, 8, 32} and N_cc={32, 128, 256} (see Table <ref>). During experiments, we observed that large values for N_pc considerably impact the training time. This is mainly due to the operations over high-dimensional matrices in the squash(·) function from the iterative Dynamic Routing algorithm (see Equation <ref>). Results from Table <ref> support the intuition that a larger N_pc would bring better results. The model trained on SaRoCo with N_pc=32 achieves the highest accuracy of 96.17%; nevertheless, the difference between choosing 8 and 32 is minimal. For SaRoCo and LaRoSeDa, the best overall performance is achieved in a setting with N_cc=128, attaining accuracy scores of 96.02% and 95.46%, respectively. Based on both sets of results, we note that, for better performance, a hyperparameter search should be extended to the capsule hyperparameters. Ablation Study. 
Motivated by the noteworthy closeness in performance between the BiGRU-based models with NLPL and RoBERT embeddings, respectively, we perform an ablation study, slicing the generic model into four categories: baselines (i.e., NLPL-BiGRU and RoBERT-BiGRU), adversarial (Adv), Capsule, and Adv+Capsule. The best results on the test datasets are brought by the most complex models in terms of training and architecture, with a 96.02% test accuracy for SaRoCo and a 95.82% test accuracy for LaRoSeDa using the NLPL embeddings, as well as a 98.30% test accuracy for SaRoCo and a 98.61% test accuracy for LaRoSeDa using the RoBERT embeddings (see Table <ref>). Regarding model complexity, we determine that except for the adversarial training on a baseline BiGRU model, the performance improves when capsule layers are added on top of it, irrespective of including the perturbed data in training. The increase in performance on the SaRoCo dataset with our model is by 0.45% for the NLPL embeddings and by 0.10% for the RoBERT embeddings. We observe a decrease of 2.73% when the most undersized model (i.e., NLPL-BiGRU) is compared with the most complex one (i.e., RoBERT-BiGRU+Adv+Capsule). For the LaRoSeDa dataset, we gain 1.18% using the NLPL embeddings and 0.45% with the RoBERT embeddings, respectively. Also, the test accuracy difference between the most complex and the most undersized models is 3.97%, determining that the network conveys more value for the sentiment analysis task. The two-dimensional t-SNE embeddings depicted in Figure <ref> show the contrast between the capsule- and non-capsule-based models. The embeddings obtained with the BiGRU alone feature a specific chained distribution, with clusters defined by halving the sequence. The RoBERT embeddings convey a similar partition. In contrast, the capsule networks will mostly feature well-separated embedding clusters. No significant embedding change occurs when adversarial training is included. Data Augmentation. Next, we incorporate the RoGPT-2 text continuation examples on a set of samples using two strategies for the decoder (i.e., greedy and beam-search-2). We perform experiments with the RoBERT-BiGRU model and show that the generative effort increases the overall performance for both tasks (see Table <ref>). In most cases, the RoBERT embeddings bring increased performance on the LaRoSeDa dataset as a consequence of the polarized effect of the product reviews, being strongly positive or negative. This polarization impact also applies to the models trained on augmented data. Data augmentation using the greedy decoder method achieves the best performance on SaRoCo, with a 99.08% test accuracy, employing 10,000 expanded texts, compared with the best accuracy of 98.68% obtained with beam-search-2. Furthermore, on LaRoSeDa, we determine similar performance on the greedy search algorithm with the best accuracy of 98.94% for 10,000 augmented texts. However, for the second dataset, more generated data will not necessarily determine the best performance as in the beam-search-2 scenario, using 10,000 augmented texts slightly underperforms in contrast with 5,000 examples. Discussions. RoBERT-BiGRU, augmented with RoGPT-2 samples, correctly classifies 1,344 out of 1,362 examples from the LaRoSeDa test dataset. Due to spatial constraints, Table <ref> depicts only the shortest eight misclassified texts out of 18, for which ground truth, predicted, and human annotated labels are shown. 
Two human annotators concluded from these examples that three indecisions and five classifications contradict the expected ones. The uncertain results and the negative misclassifications are expected to have been 3-out-of-5 stars ratings, which were assumed negative when the dataset was created. Furthermore, we observe strongly positive texts such as “I like it. A feminine bracelet that does its job well", “I was very satisfied with it", “happy about the product", “I recommend it", and “pleased! it is a very good clear sound!" have negative ground truth in the dataset. However, these are positive examples for the model and human annotators. Thus, we determine noise in the LaRoSeDa dataset, which is expected for datasets gathered from online sources, as the origin of the noise can be introduced by the page user or by automated data extractors. § CONCLUSIONS Satire detection and sentiment analysis are important NLP tasks for which literature provides an ample palette of models and applications. Despite the more polarization expected on the product review task in contrast with the increased passivity of satirical texts, our models properly encapsulate the meaning represented by relevant features. In the syntactic and semantic context of our tasks, there is a slight difference in performance for the CC, CoRoLa, and NLPL embeddings, whereas fine-tuning the pre-trained RoBERT model brings up to 3% performance improvement. We showed in many experiments that our parameterized capsule framework can be adapted to specific problems. Moreover, we can improve the capsule network by employing data augmentation using generative models such as RoGPT-2, achieving a maximum gain of 0.6%. Based on our results, the potential of such an architecture is of increased significance, thus enabling further work in this direction. § ACKNOWLEDGEMENTS The research has been funded by the University Politehnica of Bucharest through the PubArt program. splncs04
http://arxiv.org/abs/2306.04830v1
20230607231908
Extended Neighboring Extremal Optimal Control with State and Preview Perturbations
[ "Amin Vahidi-Moghaddam", "Kaixiang Zhang", "Zhaojian Li", "Xunyuan Yin", "Ziyou Song", "Yan Wang" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Extended Neighboring Extremal Optimal Control with State and Preview Perturbations Amin Vahidi-Moghaddam, Kaixiang Zhang, Zhaojian Li^*, Xunyuan Yin, Ziyou Song, and Yan Wang ^*Zhaojian Li is the corresponding author. Amin Vahidi-Moghaddam, Kaixiang Zhang, and Zhaojian Li are with the Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48824 USA (e-mail: [email protected], [email protected], [email protected]). Xunyuan Yin is with the School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, 637459 Singapore (e-mail: [email protected]). Ziyou Song is with the Department of Mechanical Engineering, National University of Singapore, 117575 Singapore (email: [email protected]). Yan Wang is with the Research and Advanced Engineering, Ford Motor Company, Dearborn, MI 48121 USA (e-mail: [email protected]). July 31, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Optimal control schemes have achieved remarkable performance in numerous engineering applications. However, they typically require high computational cost, which has limited their use in real-world engineering systems with fast dynamics and/or limited computation power. To address this challenge, Neighboring Extremal (NE) has been developed as an efficient optimal adaption strategy to adapt a pre-computed nominal control solution to perturbations from the nominal trajectory. The resulting control law is a time-varying feedback gain that can be pre-computed along with the original optimal control problem, and it takes negligible online computation. However, existing NE frameworks only deal with state perturbations while in modern applications, optimal controllers (e.g., predictive controllers) frequently incorporate preview information. Therefore, a new NE framework is needed to adapt to such preview perturbations. In this work, an extended NE (ENE) framework is developed to systematically adapt the nominal control to both state and preview perturbations. We show that the derived ENE law is two time-varying feedback gains on the state perturbation and the preview perturbation. We also develop schemes to handle nominal non-optimal solutions and large perturbations to retain optimal performance and constraint satisfaction. Case study on nonlinear model predictive control is presented due to its popularity but it can be easily extended to other optimal control schemes. Promising simulation results on the cart inverted pendulum problem demonstrate the efficacy of the ENE algorithm. Note to Practitioners. Due to the vast success in predictive control and advancement in sensing, modern control applications have frequently been incorporating preview information in the control design. 
For example, the road profile preview obtained from vehicle crowdsourcing is exploited for simultaneous suspension control and energy harvesting, demonstrating a significant performance enhancement using the preview information despite noises in the preview <cit.>. Another example is thermal management for cabin and battery of hybrid electric vehicles, where traffic preview is employed in hierarchical model predictive control to improve energy efficiency <cit.>. In <cit.>, light detection and ranging systems are used to provide wind disturbance preview to enhance the controls of turbine blades. In <cit.>, virtual water preview is employed using integrated water resources management modelling to optimize agricultural patterns and control level of water in lakes. In this work, we develop an extended neighboring extremal framework that can adapt a nominal control law to state and preview perturbations simultaneously. This setup is widely applicable as in many applications, a nominal preview is available while the preview signal can also be measured or estimated online. § INTRODUCTION Optimal control approaches, such as model predictive control (MPC), can explicitly handle system constraints while achieving optimal closed-loop performance <cit.>. However, such controllers typically involve solving optimization problems at each step and are thus computationally expensive, especially for nonlinear systems with non-convex constraints. This has hindered their wider adoption in applications with fast dynamics and/or limited computation resources <cit.>. The main motivation of this work is to address the high computational cost of the optimal control approaches such that it tackles the limitations of existing frameworks for modern applications which incorporate preview information. As such, several frameworks have been developed to improve the computational efficiency of the optimal controllers. One approach is to simplify the system dynamics with model-reduction techniques <cit.>. However, these techniques require a trade-off between system performance and computational complexity, and it is often still computationally expensive after the model reduction. Another sound approach is to use function approximators, where functions such as neural networks <cit.>, Gaussian process regression <cit.>, and spatial temporal filters <cit.> are exploited to learn the control policy, after which the learned policy is employed online to achieve efficient onboard computations. However, extensive data collection is required to ensure a comprehensive coverage of operating conditions. Furthermore, the learned policy lacks interpretability, and it is generally challenging to retain guaranteed system constraint satisfaction. Neighboring extremal (NE) <cit.> is another promising paradigm to attain efficient computations by proposing a time-varying feedback gain on the state deviations. Specifically, given a pre-computed nominal solution based on a nominal initial state, the NE yields an optimal adaptation law (to the first order) that adapts the control to deviations from the nominal state, incurring negligible online computation while achieving (sub-)optimal performance when perturbations occur. The nominal solution can be computed offline and stored online, can be performed on a remote powerful controller (e.g., cloud), or computed ahead of time by utilizing the idling time of the processor. 
The NE framework has been employed in several engineering systems, including ship maneuvering control <cit.>, power management <cit.>, full bridge DC/DC converters <cit.>, and spacecraft relative motion maneuvers <cit.>. In <cit.>, disturbance perturbations have been considered for the NE in nonlinear optimal control problems; however, the derived formulation is limited to a constant disturbance. Moreover, using parameter estimation for systems with unknown parameters, parameter perturbations have been considered for the NE, where the estimated parameters are assumed constant over the predictions of the optimal control problem <cit.>. In this work, we aim to develop an extended NE (ENE) framework for nonlinear optimal control problems to efficiently adapt a pre-computed nominal solution to both state and preview perturbations. The contributions of this work are as follows. First, we formulate the ENE framework so that it surpasses existing NE frameworks by explicitly handling both state and preview perturbations. This is a generalization of existing work <cit.>, where only state perturbation is considered. Moreover, we treat the ENE problem when a nominal non-optimal solution and large perturbations are present, and a multi-segment strategy is employed to guarantee constraint satisfaction in the presence of large perturbations. Furthermore, promising results are demonstrated by applying the developed control strategy to the cart inverted pendulum problem. Compared to our conference paper <cit.>, we extend the NE framework so that we do not need to rerun the NMPC at several time steps to handle the time-varying preview perturbations, which significantly reduces the computational cost. These contributions are important extensions because, in modern applications, optimal controllers frequently incorporate preview information (e.g., from a preview prediction model <cit.>), and it is critical to adapt to the preview deviations to retain good system performance. The paper is organized as follows: Section II describes the problem formulation and the preliminaries of the nonlinear optimal control problems. The proposed ENE framework is presented in Section III. Simulation on the cart-inverted pendulum is presented in Section IV. Finally, Section V presents conclusions and future work. § PROBLEM FORMULATION In this section, preliminaries on nonlinear optimal control problems are reviewed, and the perturbation analysis of the optimal solution is presented for nonlinear systems with state and preview perturbations. Specifically, the following discrete-time nonlinear system, which incorporates a system preview, is considered: x(k+1) = f(x(k),u(k),w(k)), where k ∈ℕ^+ represents the time step, x ∈ℝ^n denotes the measurable/observable states, and u ∈ℝ^m is the control input. Here, w ∈ℝ^n represents the preview information, which can be a road profile preview in suspension control <cit.>, a wind preview for turbine control <cit.>, or a traffic preview in vehicle power management <cit.>. Furthermore, f:ℝ^n×ℝ^m×ℝ^n→ℝ^n represents the system dynamics with f(0,0,0)=0. Moreover, we consider the following general nominal preview model: w(k+1) = g(x(k),w(k)), where g:ℝ^n×ℝ^n →ℝ^n represents the nominal preview dynamics. We consider the following safety constraints for the system: C(x(k),u(k),w(k)) ≤ 0, where C:ℝ^n×ℝ^m×ℝ^n →ℝ^l. Consider the nonlinear system (<ref>) and the control objective of regulating the state x.
Starting from the initial conditions x_0 and w_0, the closed-loop system performance over N steps is characterized by the following cost function: J_N(𝐱,𝐮,𝐰)= ∑^N-1_k=0ϕ(x(k),u(k),w(k)) + ψ(x(N),w(N)), where 𝐱 = [ x(0), x(1), ⋯, x(N) ], 𝐮 = [ u(0), u(1), ⋯, u(N-1) ], 𝐰 = [ w(0), w(1), ⋯, w(N) ], and ϕ(x,u,w) and ψ(x,w) denote the stage and terminal costs, respectively. [Twice Differentiable Functions] The functions f, g, C, ϕ, and ψ are twice continuously differentiable. With the defined closed-loop performance metric, the control goal is to minimize the cost function (<ref>) while adhering to the constraints in (<ref>)-(<ref>). The optimal control aims at optimizing the system performance over N future steps for the system (<ref>) using the nominal preview model (<ref>), which is expressed as the following constrained optimization problem: (𝐱^o,𝐮^o) = 𝐱,𝐮min J_N(𝐱,𝐮,𝐰) s.t. x(k+1) = f(x(k),u(k),w(k)), w(k+1) = g(x(k),w(k)), C(x(k),u(k),w(k)) ≤ 0, x(0)=x_0, w(0)=w_0. Consider a nominal trajectory 𝐱^o, 𝐮^o, and 𝐰^o obtained by solving (<ref>) with 𝐰^o being the nominal preview. This computation can be performed on a remote powerful controller (e.g., cloud computing or edge computing) or can be computed ahead of time based on an approximated initial state. During implementation, the actual state x(k) and the preview information w(k) will likely deviate from the nominal trajectory. Let δ x(k) = x(k) - x^o(k) and δ w(k) = w(k) - w^o(k) denote the state perturbation and the preview perturbation, respectively. Now, to solve the nonlinear optimal control problem (<ref>) for the actual values at each time step k, we seek a (sub-)optimal control update law, u^*(k) = u^o(k) + δ u(k), to efficiently adapt to the perturbations of the nominal trajectory. As such, using the nominal trajectory and the perturbation analysis, we develop an ENE framework to account for both state and preview perturbations through two time-varying feedback gains. Moreover, to handle large perturbations, we modify the ENE algorithm to preserve constraint satisfaction and retain optimal control performance. The details of each algorithm and their benefits for nonlinear model predictive control will be presented in the next part. § MAIN RESULT In this section, we present an ENE framework for the optimal control problem (<ref>) subject to state and preview perturbations. As shown in Fig. 1, a nominal state and control trajectory is first computed based on system specifications (e.g., nominal model, nominal preview model, constraints, and cost function) along with a nominal initial state and preview. Then, the ENE approach exploits time-varying feedback gains to adapt to state and preview perturbations to retain optimal control performance. In the following subsections, we first analyze the nominal optimal solution to (<ref>) and perform the perturbation analysis to obtain an efficient optimal feedback law for small perturbations. We then develop schemes to handle large perturbations and maintain well control performance and constraint satisfaction. §.§ Nominal Optimal Solution In this subsection, we analyze the nominal optimal solutions using the Karush-Kuhn-Tucker (KKT) conditions. Specifically, define 𝕂^a and 𝕂^i as the sets of time steps at which the constrains are active (i.e., C(x(k),u(k),w(k)) = 0 in (<ref>)) and inactive (i.e., C(x(k),u(k),w(k)) < 0), respectively. 
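Before turning to the KKT analysis of the nominal solution, a minimal sketch of how the cost (<ref>) and constraints (<ref>) of problem (<ref>) can be evaluated for a candidate input sequence is given below. A nominal solver (e.g., an SQP or interior-point routine, not prescribed here) would repeatedly call such a rollout; the function signatures of f, g, phi, psi, and C are assumptions that mirror the notation above.

import numpy as np

def rollout_cost(u_seq, x0, w0, f, g, phi, psi, C=None):
    """Roll out the dynamics and nominal preview model over N steps and
    evaluate the cost J_N; f, g, phi, psi, C follow the notation of (1)-(5)."""
    N = len(u_seq)
    x, w, J = np.asarray(x0, float), np.asarray(w0, float), 0.0
    cons = []                                   # constraint values C(x(k),u(k),w(k))
    for k in range(N):
        u = np.asarray(u_seq[k], float)
        J += phi(x, u, w)                       # stage cost
        if C is not None:
            cons.append(C(x, u, w))             # should satisfy C <= 0 elementwise
        x, w = f(x, u, w), g(x, w)              # system dynamics and nominal preview model
    J += psi(x, w)                              # terminal cost at step N
    return J, cons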
From (<ref>), the Hamiltonian function and the augmented cost function are defined as: H(k) = ϕ(x(k),u(k),w(k)) + λ^T(k+1) f(x(k),u(k),w(k)) + λ̅^T(k+1) g(x(k),w(k)) + μ^T(k) C^a(x(k),u(k),w(k)), J̅_N(k) = ∑^N-1_k=0 (H(k) - λ^T(k+1) x(k+1) - λ̅^T(k+1) w(k+1)) + ψ(x(N),w(N)), where C^a(x(k),u(k),w(k)) represents the active constraints at the time step k. It is worth noting that C^a(x(k),u(k),w(k)) is an empty vector for inactive constraints, and C^a(x(k),u(k),w(k)) ∈ℝ^l^a if we have l^a (out of l) active constraints. Furthermore, μ(k) ∈ℝ^l^a is the Lagrange multiplier for the active constraints, and λ(k+1) ∈ℝ^n and λ̅(k+1) ∈ℝ^n represent the Lagrange multipliers for the system dynamics (<ref>) and the nominal preview model (<ref>), respectively. It is worth noting that the Lagrange multipliers μ(k), λ(k+1), and λ̅(k+1) are also referred as the co-states. [Active Constraints] At each time step k, the number of active constraints is not greater than m, i.e., C^a_u(k) is full row rank. Since x^o(k), u^o(k), and w^o(k) (k∈ [0,N]) represent the nominal optimal solution for the nonlinear optimal control problem (<ref>), they satisfy the following KKT conditions for the augmented cost function (<ref>): H_u(k) = 0, k = 0, 1, ..., N-1, λ(k) = H_x(k), k = 0, 1, ..., N-1, λ(N) = ψ_x(x(N),w(N)), λ̅(k) = H_w(k), k = 0, 1, ..., N-1, λ̅(N) = ψ_w(x(N),w(N)), μ(k) ≥ 0, k = 0, 1, ..., N-1, where the subscripts u, x, and w represent the partial derivatives of a function. Now, using the KKT conditions and the nominal solution x^o(k), u^o(k), and w^o(k), one can calculate the Lagrange multipliers μ(k), λ(k+1), and λ̅(k+1) online, that is, from (<ref>), it follows that 0 = ϕ_u(x^o,u^o,w^o) + λ^T(k+1) f_u(x^o,u^o,w^o) + μ^T(k) C^a_u(x^o,u^o,w^o), λ(k) = ϕ_x(x^o,u^o,w^o) + λ^T(k+1) f_x(x^o,u^o,w^o) + λ̅^T(k+1) g_x(x^o,w^o) + μ^T(k) C^a_x(x^o,u^o,w^o), λ(N) = ψ_x(x^o(N),w^o(N)), λ̅(k) = ϕ_w(x^o,u^o,w^o) + λ^T(k+1) f_w(x^o,u^o,w^o) + λ̅^T(k+1) g_w(x^o,w^o) + μ^T(k) C^a_w(x^o,u^o,w^o), λ̅(N) = ψ_w(x^o(N),w^o(N)). Using the above equations, the Lagrange multipliers can be obtained as: μ(k) = -(C^a_u(k) C^a_u^T(k))^-1 C^a_u(k) ϕ^T_u(k) -(C^a_u(k) C^a_u^T(k))^-1 C^a_u(k) f^T_u(k) λ(k+1), λ(k) = ϕ_x(k) + λ^T(k+1) f_x(k) + λ̅^T(k+1) g_x(k) + μ^T(k) C^a_x(k), λ̅(k) = ϕ_w(k) + λ^T(k+1) f_w(k) + λ̅^T(k+1) g_w(k) + μ^T(k) C^a_w(k). Note that Assumption <ref> guarantees that C^a_u(k) C^a_u^T(k) is invertible, and δJ̅_N(x^o,u^o,w^o,μ^o, λ^o, λ̅^o) = 0. §.§ Extended Neighboring Extremal For this part, we assume that the state and preview perturbations are small enough such that they do not change the activity status of the constraint. To adapt to state and preview perturbations from the nominal values, the ENE seeks to minimize the second-order variation of (<ref>) subject to linearized models and constraints. 
More specifically, the ENE algorithm solves the following optimization problem with the initial conditions δ x(0) and δ w(0) as: δ 𝐮^* = δ 𝐮minJ^ne_N(k) s.t.δ x(k+1) = f_x(k) δ x(k) + f_u(k) δ u(k)+f_w(k) δ w(k), δ w(k+1) = g_x(k) δ x(k) + g_w(k) δ w(k), C^a_x(k) δ x(k) + C^a_u(k) δ u(k) + C^a_w(k) δ w(k) = 0, δ x(0) = δ x_0, δ w(0) = δ w_0, where J^ne_N(k)=δ^2J̅_N(k) = 1/2∑^N-1_k=0[ δ x(k); δ u(k); δ w(k) ]^T[ H_xx(k) H_xu(k) H_xw(k); H_ux(k) H_uu(k) H_uw(k); H_wx(k) H_wu(k) H_ww(k) ][ δ x(k); δ u(k); δ w(k) ] + 1/2δ x^T (N) ψ_xx(N) δ x (N) + 1/2δ w^T (N) ψ_ww(N) δ w (N) For (<ref>) and (<ref>), the Hamiltonian function and the augmented cost function are obtained as H^ne(k) = 1/2[ δ x(k); δ u(k); δ w(k) ]^T[ H_xx(k) H_xu(k) H_xw(k); H_ux(k) H_uu(k) H_uw(k); H_wx(k) H_wu(k) H_ww(k) ][ δ x(k); δ u(k); δ w(k) ] + δλ^T(k+1) (f_x(k) δ x(k) + f_u(k) δ u(k) + f_w(k) δ w(k)) + δλ̅^T(k+1) (g_x(k) δ x(k) + g_w(k) δ w(k)) + δμ^T(k) (C^a_x(k) δ x(k) + C^a_u(k) δ u(k) + C^a_w(k) δ w(k)), J̅^ne_N(k) = ∑^N-1_k=0 (H^ne(k) - δλ^T(k+1) δ x(k+1) - δλ̅^T(k+1) δ w(k+1)) + 1/2δx^T (N) ψ_xx(N) δ x (N) + 1/2δw^T (N) ψ_ww(N) δ w (N), where δμ(k), δλ(k), and δλ̅(k) are the Lagrange multipliers. By applying the KKT conditions to (<ref>), one has H^ne_δ u(k) = 0, k = 0, 1, ..., N-1, δλ(k) = H^ne_δ x(k), k = 0, 1, ..., N-1, δλ(N) = ψ_xx(N) δ x(N), δλ̅(k) = H^ne_δ w(k), k = 0, 1, ..., N-1, δλ̅(N) = ψ_ww(N) δ w(N), δμ(k) ≥ 0, k = 0, 1, ..., N-1. To facilitate the development of the ENE algorithm, several auxiliary variables are introduced for the following theorem. Specifically, for k=1,2, ⋯, N-1, S(k), W(k), S̅(k), and W̅(k) are defined as S(k) = Z_xx(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ], W(k) = Z_x w(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ], S̅(k) = Z_wx(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ], W̅(k) = Z_ww(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ], where K^o(k) =[ Z_uu(k) C^a_u^T(k); C^a_u(k) 0 ]^-1 if k ∈𝕂^a, [ Z^-1_uu(k) 0; 0 0 ] if k ∈𝕂^i, and Z_ux(k) = H_ux(k) + f^T_u(k) S(k+1) f_x(k) + f^T_u(k) W(k+1) g_x(k), Z_uu(k) = H_uu(k) + f^T_u(k) S(k+1) f_u(k), Z_u w(k) = H_u w(k) + f^T_u(k) S(k+1) f_w(k) + f^T_u(k) W(k+1) g_w(k), Z_xx(k) = H_xx(k) + f^T_x(k) S(k+1) f_x(k) +f^T_x(k) W(k+1) g_x(k) + g^T_x(k) S̅(k+1) f_x(k) + g^T_x(k) W̅(k+1) g_x(k), Z_xu(k) = H_xu(k) + f^T_x(k) S(k+1) f_u(k) + g^T_x(k) S̅(k+1) f_u(k), Z_x w(k) = H_x w(k) + f^T_x(k) S(k+1) f_w(k) + f^T_x(k) W(k+1) g_w(k)+ g^T_x(k) S̅(k+1) f_w(k) + g^T_x(k) W̅(k+1) g_w(k), Z_wx(k) = H_wx(k) + f^T_w(k) S(k+1) f_x(k) + f^T_w(k) W(k+1) g_x(k)+ g^T_w(k) S̅(k+1) f_x(k) + g^T_w(k) W̅(k+1) g_x(k), Z_wu(k) = H_wu(k) + f^T_w(k) S(k+1) f_u(k) + g^T_w(k) S̅(k+1) f_u(k), Z_ww(k) = H_ww(k) + f^T_w(k) S(k+1) f_w(k) + f^T_w(k) W(k+1) g_w(k) + g^T_w(k) S̅(k+1) f_w(k) + g^T_w(k) W̅(k+1) g_w(k). The terminal conditions for S(k), W(k), S̅(k), and W̅(k) are given by S(N) = ψ_xx(N), W(N) = 0, S̅(N) = 0, W̅(N) = ψ_ww(N). Consider the optimization problem (<ref>), the Hamiltonian function (<ref>), the KKT conditions (<ref>), and the defined auxiliary variables (<ref>) and (<ref>). If Z_uu(k)>0 for k∈[ 0, N-1 ], then the ENE policy δ u(k) = K^*_1(k) δ x(k) + K^*_2(k) δ w(k), K^*_1(k) = - [ I 0 ] K^o(k) [ Z_ux(k); C^a_x(k) ], K^*_2(k) = - [ I 0 ] K^o(k) [ Z_u w(k); C^a_w(k) ], approximates the perturbed solution for the nonlinear optimal control problem (<ref>) in the presence of state perturbation δ x(k) and preview perturbation δ w(k). 
Using (<ref>), (<ref>), and the KKT conditions (<ref>), one has H_ux(k) δ x(k) + H_uu(k) δ u(k) + H_u w(k) δ w(k) + f^T_u(k) δλ(k+1) + C^a_u^T(k) δμ(k) = 0, δλ(k) = H_xx(k) δ x(k) + H_xu(k) δ u(k) + H_x w(k) δ w(k) + f^T_x(k) δλ(k+1) + g^T_x(k) δλ̅(k+1) + C^a_x^T(k) δμ(k), δλ̅(k) = H_wx(k) δ x(k) + H_wu(k) δ u(k) + H_ww(k) δ w(k) + f^T_w(k) δλ(k+1) + g^T_w(k) δλ̅(k+1) + C^a_w^T(k) δμ(k), where δλ(N) = ψ_xx(N) δ x(N) and δλ̅(N) = ψ_ww(N) δ w(N). Now, define the following general relation: δλ(k) = S(k) δ x(k) + W(k) δ w(k) +T(k), δλ̅(k) = S̅(k) δ x(k) + W̅(k) δ w(k) +T̅(k). Using (<ref>), (<ref>), and (<ref>), one has T(N) = 0 and T̅(N) = 0. Substituting the linearized model (<ref>) and (<ref>) into (<ref>) yields Z_ux(k) δ x(k) + Z_uu(k) δ u(k) + Z_u w(k) δ w(k) + C^a_u^T(k) δμ(k) + f^T_u(k) T(k+1) = 0. Using the linearized safety constraints (<ref>) and (<ref>), one has [ δ u(k); δμ(k) ] = -K^o(k) [ Z_ux(k); C^a_x(k) ]δ x(k) -K^o(k) [ Z_u w(k); C^a_w(k) ]δ w(k) -K^o(k) [ f^T_u(k) T(k+1); 0 ]. Now, substituting the model (<ref>), (<ref>) and (<ref>) into (<ref>) yields δλ(k) = Z_xx(k) δ x(k) + Z_xu(k) δ u(k) + Z_x w(k) δ w(k) + C^a_x^T(k) δμ(k) + f^T_x(k) T(k+1) + g^T_x(k) T̅(k+1). Furthermore, substituting (<ref>) into (<ref>) yields δλ(k) = ( Z_xx(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ]) δ x(k) + ( Z_x w(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ]) δ w(k) + f^T_x(k) T(k+1) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ] + g^T_x(k) T̅(k+1). From (<ref>), (<ref>) and (<ref>), it can be concluded that T(k) = g^T_x(k) T̅(k+1) + f^T_x(k) T(k+1) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ]. Now, substituting the model (<ref>), (<ref>) and (<ref>) into (<ref>) yields δλ̅(k) = Z_wx(k) δ x(k) + Z_wu(k) δ u(k) + Z_ww(k) δ w(k) + C^a_w^T(k) δμ(k) + f^T_w(k) T(k+1) + g^T_w(k) T̅(k+1). Furthermore, plugging (<ref>) into (<ref>) yields δλ̅(k) = ( Z_wx(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ]) δ x(k) + ( Z_w w(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ]) δ w(k) + f^T_w(k) T(k+1) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ] + g^T_w(k) T̅(k+1). Using (<ref>), (<ref>) and (<ref>), one has T̅(k) = g^T_w(k) T̅(k+1) + f^T_w(k) T(k+1) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ]. Based on (<ref>), (<ref>), and the fact that T(N)=0, T̅(N)=0, one can conclude that for k∈[ 1, N-1 ], T(k)=0, T̅(k)=0. Thus, by using (<ref>), the ENE policy (<ref>) can be obtained. This completes the proof. It is worth noting that the assumption of Z_uu being positive definite (i.e., Z_uu(k)>0, k ∈[ 0, N-1 ]) is essential for the ENE. Z_uu(k)>0 is performed to calculate the ENE such that it guarantees the convexity of (<ref>). Considering Z_uu(k)>0 and Assumption <ref>, it is clear that K^o(k) in (<ref>) is well defined. However, when the constraints involve only state and preview (i.e., C^a_u(k)=0), or when l^a is greater than m (i.e., C^a_u(k) is not full row rank), the matrix K^o is singular, leading to the failure of the proposed algorithm. This issue can be solved using the constraint back-propagation algorithm presented in <cit.>. If we do not have any idea about the nominal preview model (<ref>) for the existing preview information in the real system, we can simply use w(k+1)=w(k) as the nominal preview model for the nonlinear optimal control problem (<ref>) and the ENE algorithm. However, it is clear that we achieve the best performance using the ENE when the nominal preview model describes the preview information perfectly. 
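As an illustration of Theorem <ref>, the backward recursion and the resulting feedback gains can be sketched as follows for the case in which no constraints are active (k ∈𝕂^i), so that K^o(k) reduces to a block-diagonal matrix containing Z^-1_uu(k). The sketch assumes that the Jacobians and the Hamiltonian second derivatives along the nominal trajectory are supplied per time step; this data layout is an assumption, and the active-constraint branch of K^o(k) is omitted for brevity.

import numpy as np

def ene_gains(jac, hess, psi_xx, psi_ww):
    """Backward recursion of Theorem 1 for the constraint-inactive case.
    jac[k] = (fx, fu, fw, gx, gw); hess[k] = dict with keys 'xx','xu','xw',
    'ux','uu','uw','wx','wu','ww' holding the Hamiltonian second derivatives."""
    N = len(jac)
    S, W = psi_xx.copy(), np.zeros_like(psi_xx)       # S(N), W(N)
    Sb, Wb = np.zeros_like(psi_ww), psi_ww.copy()     # S_bar(N), W_bar(N)
    K1, K2 = [None] * N, [None] * N
    for k in reversed(range(N)):
        fx, fu, fw, gx, gw = jac[k]
        H = hess[k]
        Zux = H['ux'] + fu.T @ S @ fx + fu.T @ W @ gx
        Zuu = H['uu'] + fu.T @ S @ fu
        Zuw = H['uw'] + fu.T @ S @ fw + fu.T @ W @ gw
        Zxx = H['xx'] + fx.T @ S @ fx + fx.T @ W @ gx + gx.T @ Sb @ fx + gx.T @ Wb @ gx
        Zxu = H['xu'] + fx.T @ S @ fu + gx.T @ Sb @ fu
        Zxw = H['xw'] + fx.T @ S @ fw + fx.T @ W @ gw + gx.T @ Sb @ fw + gx.T @ Wb @ gw
        Zwx = H['wx'] + fw.T @ S @ fx + fw.T @ W @ gx + gw.T @ Sb @ fx + gw.T @ Wb @ gx
        Zwu = H['wu'] + fw.T @ S @ fu + gw.T @ Sb @ fu
        Zww = H['ww'] + fw.T @ S @ fw + fw.T @ W @ gw + gw.T @ Sb @ fw + gw.T @ Wb @ gw
        Zuu_inv = np.linalg.inv(Zuu)                  # requires Z_uu(k) > 0
        K1[k] = -Zuu_inv @ Zux                        # gain on the state perturbation
        K2[k] = -Zuu_inv @ Zuw                        # gain on the preview perturbation
        S  = Zxx - Zxu @ Zuu_inv @ Zux                # S(k)
        W  = Zxw - Zxu @ Zuu_inv @ Zuw                # W(k)
        Sb = Zwx - Zwu @ Zuu_inv @ Zux                # S_bar(k)
        Wb = Zww - Zwu @ Zuu_inv @ Zuw                # W_bar(k)
    return K1, K2

# online adaptation at step k:  u(k) = u_nom(k) + K1[k] @ dx + K2[k] @ dw

The gains can be pre-computed along with the nominal solution; online, only the two matrix-vector products in the last line are required, which is what keeps the online computational cost negligible.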
Algorithm 1 summarizes the ENE procedure for adaptation the pre-computed nominal control solution u^o(k) to the small state perturbation δ x(k) and the small preview perturbation δ w(k) such that it achieves the optimal control as u^*(k) = u^o(k) + δ u(k) using Theorem <ref>. §.§ Nominal Non-Optimal Solution and Large Perturbations The ENE is derived under the assumption that a nominal optimal solution is available, and the state and preview perturbations are small such that they do not change the activity status of the constraints. In this subsection, we modify the ENE policy for a nominal non-optimal solution and accordingly improve the algorithm to handle large state and preview perturbations which may change the sets of inactive and active constraints. For the nominal non-optimal sequences x^o(k), u^o(k), w^o(k), μ^o(k), λ^o(k), and λ̅^o(k), we assume that they satisfy the constraints described in (<ref>) and (<ref>) but may not satisfy the optimality condition H_u(x^o,u^o,w^o,μ^o, λ^o, λ̅^o) = 0. Under this circumstance, the cost function (<ref>) is modified as J^ne_N(k)=δ^2J̅_N(k) + ∑^N-1_k=0H^T_u(k) δ u(k)= 1/2∑^N-1_k=0[ δ x(k); δ u(k); δ w(k) ]^T[ H_xx(k) H_xu(k) H_xw(k); H_ux(k) H_uu(k) H_uw(k); H_wx(k) H_wu(k) H_ww(k) ][ δ x(k); δ u(k); δ w(k) ] + 1/2δ x^T (N) ψ_xx(N) δ x (N) + 1/2δ w^T (N) ψ_ww(N) δ w (N) + ∑^N-1_k=0H^T_u(k) δ u(k). Considering the optimal control problem (<ref>) and the cost function (<ref>), the Hamiltonian function is modified as H^ne(k) = 1/2[ δ x(k); δ u(k); δ w(k) ]^T[ H_xx(k) H_xu(k) H_xw(k); H_ux(k) H_uu(k) H_uw(k); H_wx(k) H_wu(k) H_ww(k) ][ δ x(k); δ u(k); δ w(k) ] + δλ^T(k+1) (f_x(k) δ x(k) + f_u(k) δ u(k) + f_w(k) δ w(k)) + δλ̅^T(k+1) (g_x(k) δ x(k) + g_w(k) δ w(k)) + δμ^T(k) (C^a_x(k) δ x(k) + C^a_u(k) δ u(k) + C^a_w(k) δ w(k)) +H^T_u(k) δ u(k). Now, the following theorem is presented to modify the ENE policy for the nominal non-optimal solutions to the nonlinear optimal control problem (<ref>). Consider the optimization problem (<ref>), the KKT conditions (<ref>), and the Hamiltonian function (<ref>). If Z_uu(k)>0 for k∈[ 0, N-1 ], then the ENE policy for a nominal non-optimal solution is modified as δ u(k) = K^*_1(k) δ x(k) + K^*_2(k) δ w(k) + K^*_3(k) [ f^T_u(k) T(k+1) + H_u(k); 0 ], K^*_3(k) = - [ I 0 ] K^o(k), where the gain matrices K^*_1, K^*_2, and K^o(k) are defined in (<ref>) and (<ref>), and T(k) is a non-zero variable defined in (<ref>). Using (<ref>) and (<ref>), (23) is modified as H_ux(k) δ x(k) + H_uu(k) δ u(k) + H_u w(k) δ w(k) + f^T_u(k) δλ(k+1) + C^a_u^T(k) δμ(k) + H_u(k) = 0. Substituting the linearized model (<ref>) and (<ref>) into (<ref>) yields Z_ux(k) δ x(k) + Z_uu(k) δ u(k) + Z_u w(k) δ w(k) + C^a_u^T(k) δμ(k) + f^T_u(k) T(k+1) + H_u(k) = 0. Using the linearized safety constraints (<ref>) and (<ref>), one can obtain [ δ u(k); δμ(k) ] = -K^o(k) [ Z_ux(k); C^a_x(k) ]δ x(k) -K^o(k) [ Z_u w(k); C^a_w(k) ]δ w(k) -K^o(k) [ f^T_u(k) T(k+1); 0 ] -K^o(k) [ H_u(k); 0 ]. Substituting (<ref>) into (<ref>) yields δλ(k) = ( Z_xx(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ]) δ x(k) + ( Z_x w(k) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ]) δ w(k) + f^T_x(k) T(k+1) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ] + g^T_x(k) T̅(k+1) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ H_u(k); 0 ]. From (<ref>), (<ref>), and (<ref>), it follows that T(k) = g^T_x(k) T̅(k+1) + f^T_x(k) T(k+1) - [ Z_xu(k) C^a_x^T(k) ] K^o(k) [ f^T_u(k) T(k+1) + H_u(k); 0 ]. 
Now, plugging (<ref>) into (<ref>) yields δλ̅(k) = ( Z_wx(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_ux(k); C^a_x(k) ]) δ x(k) + ( Z_w w(k) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ Z_u w(k); C^a_w(k) ]) δ w(k) + f^T_w(k) T(k+1) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ f^T_u(k) T(k+1); 0 ] + g^T_w(k) T̅(k+1) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ H_u(k); 0 ]. Using (<ref>), (<ref>), and (<ref>), one has T̅(k) = g^T_w(k) T̅(k+1) + f^T_w(k) T(k+1) - [ Z_wu(k) C^a_w^T(k) ] K^o(k) [ f^T_u(k) T(k+1) + H_u(k); 0 ]. Based on (<ref>), (<ref>) and (<ref>), the modified ENE policy (<ref>) is obtained. This completes the proof. Now, to deal with large perturbations that may change the sets of inactive and active constraints, the perturbed values of C(x(k),u(k),w(k)) and μ (k) are analyzed to determine the inactive and active constraints under the perturbations. Using (<ref>), the relation between the state and preview perturbations and the Lagrange multiplier perturbation is expressed as δμ(k) = K^*_4(k) δ x(k) + K^*_5(k) δ w(k), K^*_4(k) = - [ 0 I ] K^o(k) [ Z_ux(k); C^a_x(k) ], K^*_5(k) = - [ 0 I ] K^o(k) [ Z_u w(k); C^a_w(k) ]. Moreover, using (<ref>), the constraint perturbation is represented as δ C(k) = C_x(k) δ x(k) + C_u(k) δ u(k) + C_w(k) δ w(k) = (C_x(k)+C_u(k)K^*_1(k)) δ x(k) + (C_w(k)+C_u(k)K^*_2(k)) δ w(k) + C_u(k)K^*_3(k) (f^T_u(k) T(k+1) + H_u(k)). The perturbed Lagrange multiplier and the perturbed constraint are given by μ(k) = μ^o(k) + δμ(k), C(k) = C^o(k) + δ C(k). Different activity statuses of the constraints may occur due to large perturbations. To address this issue, we consider a line that connects the nominal variables x^o(0) and w^o(0) to the perturbed variables x(0) and w(0). For the connecting line, we identify several intermediate points such that the status of the constraint remains the same between two consecutive points. Since μ(k) = 0 and C(k) = 0 for the inactive and active constraints, respectively, we use (<ref>) for the active constraints to find the intermediate points which make the constraints inactive. Specifically, for the active constraints, an α(k) (0≤α (k)≤1) is computed to have μ^o(k) + α (k) δμ(k) = 0. Moreover, we employ (<ref>) for the inactive constraints to find the intermediate points which make the constraints active. For the inactive constraints, the α (k) is computed to have C^o(k) + α (k) δ C(k) = 0. Thus, for k∈ [0, N-1], the intermediate points are achieved using the following equation: α(k) = -μ^o(k)/δμ(k) if k ∈𝕂^a , -C^o(k)/δ C(k) if k ∈𝕂^i. The smallest α(k) is found such that the obtained perturbation changes the activity statuses of the constraints at least at one time step k. Algorithm 2 summarizes the modified ENE procedure for adaptation the pre-computed nominal non-optimal control solution to the large state and preview perturbations such that it achieves the optimal control as u^*(k) = u^o(k) + δ u(k) using Theorem <ref>. The algorithm identifies the intermediate points and determines the modified ENE adaptation policy. Considering suitable nominal models, the main design parameters of the proposed approach come from the original optimization problem (<ref>), which are the prediction number N and the designing weights in the stage cost ϕ(x,u,w) and the terminal cost ψ(x,w). The prediction number N must be high enough so that the obtained optimal controller stabilizes the system; however, higher N causes higher computational cost to solve the optimization problem. 
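Returning to the multi-segment strategy above, a minimal sketch of the intermediate-point computation is given below: the smallest α ∈ (0,1] at which some constraint changes its activity status along the line connecting the nominal and perturbed variables. The storage convention (multipliers padded to the full constraint dimension, with zeros for inactive constraints, and a boolean activity mask per step) is an assumption.

import numpy as np

def smallest_switch_alpha(mu_nom, dmu, C_nom, dC, active, tol=1e-12):
    """Smallest alpha in (0, 1] at which a constraint changes activity status:
    active constraints deactivate when mu_nom + alpha*dmu = 0, inactive ones
    activate when C_nom + alpha*dC = 0.  All arguments are lists over the horizon."""
    alphas = []
    for k in range(len(C_nom)):
        for j in range(len(C_nom[k])):
            if active[k][j]:
                num, den = -mu_nom[k][j], dmu[k][j]
            else:
                num, den = -C_nom[k][j], dC[k][j]
            if abs(den) > tol:
                a = num / den
                if 0.0 < a <= 1.0:
                    alphas.append(a)
    return min(alphas) if alphas else 1.0   # 1.0: no status change along the segment

Each segment then applies the (modified) ENE update for the scaled perturbation, the nominal solution and active set are refreshed, and the procedure repeats until the full perturbation is covered.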
Moreover, the designing weights in the costs must be selected such that both minimum tracking error and minimum control input are achieved. The proposed ENE framework is easy to implement and light in computation. Specifically, given a nominal initial state x^o(0), a nominal preview w^o(0:N), a control objective function to minimize, system and control constraints, a nominal optimal state and control trajectory x^o,u^o will be computed using an optimal control strategy. Note that this nominal solution can be computed offline and stored online, can be performed on a remote powerful controller (e..g, cloud), or computed ahead of time by utilizing the idling time of the processor. In the same time, the ENE adaptation gains K^*_1(k), K^*_2(k),k=0,1,⋯,N-1 in (<ref>) can also be computed along with the nominal control law. During the online implementation, the actual initial state x(0) and the actual preview w are likely different from the nominal values used for the optimal control computations. Instead of recomputing the optimal control sequence, the control correction (<ref>) is computed, where δ x(k) = x(k) - x^o(k) and δ w(k) = w(k) - w^o(k) denote the state perturbation and the preview perturbation, respectively. Then the final control is used as u^*(k) = u^o(k) + δ u(k). This implementation is easily extended for the modified ENE. As seen from the steps discussed above, the proposed approach is easy to implement and involves negligible online computational cost. One can employ the nonlinear optimal control problem (<ref>) as the open-loop nonlinear model predictive control (NMPC) or the closed-loop NMPC. For the open-loop version, providing the N-length nominal trajectory from the NMPC, the ENE algorithm approximates the NMPC policy such that it calculates two time-varying N-length feedback gains on the state and preview perturbations. Although the feedback gains are pre-computed, the ENE is able to take feedback from the real system for the N predictions in contrast to the open-loop NMPC. On the other hand, for the colsed-loop NMPC, we save the ENE solution but we only apply the first control input to the plant at each time step. Taking the feedback from the real system, the ENE solution from the previous step is considered as the nominal non-optimal solution, and the ENE algorithm is applied again to adapt the recent solution for the current time step. In comparison with the existing NE frameworks <cit.>, we extend the regular NE approaches that only consider state deviations to a general setting that both state and preview deviations are considered. This is a significant extension as many modern control applications are employing preview information due to the increased availability of connectivity <cit.>. The necessity of adapting to preview perturbations is also demonstrated in our simulation studies, where we show that the proposed ENE can significantly outperform the regular NE when the preview information has certain variations. § SIMULATION RESULTS In this part, we demonstrate the performance of the proposed ENE framework for both small and large perturbations via a simulation example. The simulation example is adopted from the cart-inverted pendulum (see Fig. <ref>) whose system dynamics is described by: z̈=F-K_dż-m(Lθ̇^2sin (θ )-gsin (θ ) cos (θ ))-2w_z/M+m sin^2(θ ), θ̈=z̈cos (θ )+gsin (θ )/L-w_θ/mL^2, where z and θ denote the position of the cart and the pendulum angle. 
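A forward-Euler discretization of the cart-inverted pendulum dynamics, using the parameter values listed in the following paragraph, might look as follows; the integration scheme itself is an assumption, since only the sampling time T is specified.

import numpy as np

# parameter values from the simulation section: m, M, L, g, K_d, T
m, M, L, g, Kd, T = 1.0, 5.0, 2.0, 9.81, 10.0, 0.1

def cart_pendulum_step(x, F, w):
    """One forward-Euler step with state x = [z, zdot, theta, thetadot],
    control force F and friction preview w = [0, w_z, 0, w_theta]."""
    z, zd, th, thd = x
    wz, wth = w[1], w[3]
    zdd = (F - Kd * zd - m * (L * thd**2 * np.sin(th) - g * np.sin(th) * np.cos(th))
           - 2.0 * wz) / (M + m * np.sin(th)**2)
    thdd = (zdd * np.cos(th) + g * np.sin(th)) / L - wth / (m * L**2)
    return np.array([z + T * zd, zd + T * zdd, th + T * thd, thd + T * thdd])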
m=1kg, M=5kg, and L=2m represent the mass of the pendulum, the mass of the cart, and the length of the pendulum, respectively. g=9.81m/s^2 and K_d=10Ns/m are respectively the gravity acceleration and the damping parameter. The variable force F controls the system under a friction force w_z and a friction torque w_θ. T=0.1s is considered as the sampling time for discretization of the model (<ref>), and we assume that we have certain preview of w_z and w_θ. The states, the outputs, the preview information, and the control input constraint are respectively expressed as x=[x_1,x_2,x_3,x_4]^T=[z,ż,θ,θ̇]^T, y=[x_1,x_3]^T=[z,θ]^T, w=[w_1,w_2,w_3,w_4]^T=[0,w_z,0,w_θ]^T, -300≤ F≤ 300. The following values are used for the simulation: N = 35, x^o(0) = [0,0,-π,0]^T, w^o(0) = [0,0.1,0,0.1]^T. Moreover, the nominal preview model is represented as w^o(k+1) = -0.008 x^o(k) -0.1 w^o(k). For the small perturbation setting, the initial state perturbation and the actual friction profile are set as δ x (0) = [0.01,0.01,0.01,0.01]^T and w(k)=0.004 sin(k)+0.004 rand(k)+0.002, respectively. For the large perturbation setting, the initial state perturbation and the actual friction profile are chosen as δ x (0) = [0.2,0.2,0.2,0.2]^T and w(k)=0.015sin(k)+0.015rand(k)+0.01, respectively. Figs. <ref>-<ref> show the control performance of the open-loop NMPC, the standard NE, the ENE, and the closed-loop NMPC subject to the small perturbations. For the open-loop NMPC, under the nominal initial state x^o(0) and preview w^o(0), we obtain the N-length open-loop trajectory (x^o,u^o,w^o) and apply the open-loop control u^o to the system as shown in Fig. <ref>. It is worth noting that the state and preview information are updated during the optimization problem based on the considered nominal model (<ref>) and the nominal preview model w^o(k+1) = -0.008 x^o(k) -0.1 w^o(k), respectively. However, since it is the open-loop version of the NMPC, the controller does not take the feedback from the real states and preview, makes the least control force in Fig. <ref>, and leads to degraded performance due to the state and preview deviations as shown in Fig. <ref>. The NE is capable of taking the state feedback from the real system and adjusting the nominal optimal control, the open-loop control trajectory obtained by the NMPC, for the state perturbations. From Fig. <ref>, one can see that the NE does show an improved performance as compared to the open-loop NMPC but it falls short against the ENE since it only handles the state perturbations without adapting to the preview perturbations. In comparison with the open-loop NMPC and the NE, the proposed ENE takes the state and preview feedback from the real system and achieves better performance, where it promptly stabilizes the system with the minimum cost in the presence of state and preview perturbations as shown in Fig. <ref>. Although we employ the ENE for the open-loop NMPC, due to the feedback from the real system, the ENE shows a similar control performance as the closed-loop NMPC for this case as shown in Figs. <ref> and <ref>. However, the closed-loop NMPC has high computational cost since it solves the optimization problem (<ref>) at each step. Figs. <ref> and <ref> illustrate the control performance of the open-loop NMPC, the NE, the ENE, the modified NE, the modified ENE, and the closed-loop NMPC subject to large perturbations. As shown in Fig. 
<ref>, one can see that the considered large perturbations change the activity status of the input constraint, and it causes that the NE and the ENE violates the constraint due to the absence of the intermediate points between the nominal initial state and preview and the perturbed ones. However, the modified NE and the modified ENE satisfies the constraint, and the modified ENE indicates a similar performance as the closed-loop NMPC as shown in Fig. <ref>. Moreover, to see the role of the nominal preview model on the proposed control scheme, Figs. <ref> and <ref> compare the results of the ENE and the modified ENE for two nominal preview models w^o(k+1) = w^o(k) and w^o(k+1) = -0.008 x^o(k) -0.1 w^o(k) with the actual friction profile w(k)=0.008sin(k)+0.008rand(k)+0.004. One can see that the activity status of the constraint is changed under the considered perturbation; however, it is not high enough to cause the constraint violation for the ENE. Furthermore, it can be seen that both the modified ENE and the ENE accomplish better control performance when the preview model w^o(k+1) = w^o(k) is applied. Providing a suitable nominal preview model leads to well control performance by the proposed ENE and modified ENE. Table I compares the performances (i.e. ‖ y - r ‖) and the computational times of the proposed controllers for the small perturbation. Based on the formulations, it is obvious that the ENE and the modified ENE (MENE) show the same performance and computational time for the small perturbations. We also have same result for the NE and the modified NE (MNE) for the small perturbations. Table II compares the performances and the computational times of the proposed controllers for the large perturbations. In Tables I and II, the closed-loop NMPC (CLNMPC) and the open-loop NMPC (OLNMPC) show the best and the worst performance, respectively; however, considering both performance and computational time, the modified ENE presents the best results. The simulation setup is widely applicable as in many modern applications, a nominal preview model is available while the actual corresponding signal can also be measured or estimated online. For example, a wind energy forecast model is obtained using a deep federated learning approach <cit.>, which can be served as a nominal preview model, and the wind disturbance can also be measured using light detection and ranging systems in real time <cit.>. For the considered cart-inverted pendulum simulations, the nominal preview information is obtained using a nominal model, i.e. w^o(k+1) = -0.008 x^o(k) -0.1 w^o(k); however, for each time step k, we generate the real preview information as w(k) = 0.004 sin(k) + 0.004 rand(k) + 0.002, which leads to a perturbation from the nominal one. Providing a nominal solution based on the nominal state and preview, the proposed ENE framework adapts the nominal control to the perturbations generated by the measured/estimated real state and preview information. Furthermore, to simulate the large perturbation case, we follow the same process but change the real preview information as w(k) = 0.015 sin(k) + 0.015 rand(k) + 0.01 for Figs. 6 and 7 and w(k) = 0.008 sin(k) + 0.008 rand(k) + 0.004 for Figs. 8 and 9. § CONCLUSION In this work, an ENE algorithm was developed to approximate the nonlinear optimal control policy for the modern applications which incorporate the preview information. 
The developed ENE was based on the second-order variation of the original optimization problem, which led to a set of Riccati-like backward recursive equations. The ENE adapted a nominal trajectory to the state and preview perturbations, and a multi-segment strategy was employed to guarantee well closed-loop performance and constraint satisfaction for the large perturbations. Simulations of the cart inverted pendulum system demonstrated the ENE's technological advances over the NE and the NMPC. Additionally, it was demonstrated that the nominal preview model is crucial to the effectiveness of the ENE. The proposed ENE framework is applicable to general optimal control problem setting as there is no assumption on the under/over-actuation of the system. If a regular optimal control implementation can yield good performance, the ENE is expected to yield comparable performance with less computation complexity. The computational load of the ENE grows linearly for the optimization horizon, which alleviates the online computational burden and extends the applicability of the optimal controllers. The main contribution of this paper is mainly on the proposed new framework with control law derivations and analysis. The main purpose of the simulation is to demonstrate the effectiveness of the proposed framework by showing that the ENE is able to achieve improved performance (as compared to the open-loop NMPC and the standard NE) with negligible online computation (as compared to the closed-loop NMPC). The considered cart-inverted pendulum is a classical system frequently used for the nonlinear control benchmarks <cit.>. In our future work, we will evaluate the developed ENE framework on real-world physical systems such as robots and autonomous vehicles. Furthermore, we will consider the reference perturbation for tracking control problems and also develop a data-enabled ENE to remove the requirement of having an explicit system model. ieeetr
http://arxiv.org/abs/2306.11965v1
20230621013842
Symmetry making and symmetry breaking in cortex: A collective portrait of ensemble excitation and inhibition
[ "Nima Dehghani" ]
q-bio.NC
[ "q-bio.NC", "cond-mat.dis-nn", "nlin.AO", "physics.bio-ph" ]
[email protected] McGovern Institute for Brain Research, MIT, Cambridge, MA Department of Physics, MIT, Cambridge, MA Creating a quantitative theory for the cortex poses several challenges and raises numerous questions. For instance, what are the significant scales of the system? Are they micro, meso or macroscopic? What are the relevant interactions? Are they pairwise, higher order or mean-field? And what are the control parameters? Are they noisy, dissipative or emergent? To tackle these issues, we suggest using an approach similar to the one that has transformed our understanding of the state of matter. This includes identifying invariances in the ensemble dynamics of various neuron functional classes, searching for order parameters that connect important degrees of freedom and distinguish macroscopic system states, and identifying broken symmetries in the order parameter space to comprehend the emerging laws when many neurons interact and coordinate their activation. By utilizing multielectrode and multiscale neural recordings, we measure the scale-invariant balance between excitatory and inhibitory neurons. We also investigate a set of parameters that can assist us in differentiating between various functional system states (such as the wake/sleep cycle) and pinpointing broken symmetries that serve different information processing and memory functions. Furthermore, we identify broken symmetries that result in pathological states like seizures. Symmetry making and symmetry breaking in cortex A collective portrait of ensemble excitation and inhibition Nima Dehghani July 31, 2023 ============================================================================================================= § INTRODUCTION Over the past two decades, advances in microelectronic fabrication techniques have made it possible to record the electrical activity of large populations of neurons both in vitro and in vivo. With the increasing size of these populations, researchers have shifted their focus from studying the responses of individual neurons to stimuli (such as in the seminal works of <cit.>) to examining population dynamics and correlations between neurons <cit.>, pairwise and/or higher-order correlations <cit.>. Despite the general agreement that information is represented and processed through correlated activity patterns in neuron populations, descriptive statistical models fail to provide significant theoretical insights into the collective behavior of these populations. As a result, attempts to find principles, while acknowledging high variability in spiking patterns and spontaneous activity as minimum theoretical constraints, adopt a top-down compartmentalization approach <cit.>. This approach follows Marr’s tri-level hypothesis <cit.> and separates the problem into a computational (behavioral) level, an algorithmic level, and a biological implementation level. Alternatively, some suggest a quantitative computational framework that starts from the tri-level approach and adds further requirements such as hierarchical representation and learning <cit.>. However, these approaches have their limitations. Their assumption of separation of scales is at odds with the nature of complex systems <cit.>. To describe hierarchical representation that matches the nature of the cortex as an adaptive physical system, a successful theory must account for the interaction between sub-assemblies while disregarding insignificant details in transferring dynamics between observation scales <cit.>. 
Given the challenges in modeling the activity of large neural populations, it is crucial to search for signatures that can guide us in formulating an effective theory that describes the macroscopic characteristics of the system, despite the high degrees of freedom at the microscopic scale of individual neurons. As the number of measured neuronal activity increases linearly, the number of interactions among neurons grows exponentially, making it difficult to obtain a representative sample of the full distribution of activity patterns through correlation measures or dimensionality reduction of population dynamics. This is where physics can help if we look for collections of patterns and symmetries to find order parameters and broken symmetries that point to underlying laws <cit.>. Instead of solely focusing on cognition or animal behavior as metrics for describing the system, we should identify invariances in ensemble dynamics, search for order parameters that link important degrees of freedom that differentiate macroscopic states of the system, and identify broken symmetries in the order parameter space to understand the laws that emerge when many neurons interact and orchestrate their activation. Recently, we have shown invariance of E:I balance across scales <cit.>. However, in searching for the right order parameters, we have also found that a maximum entropy model of weak pairwise interactions (Ising model) provides a good fit for inhibitory neurons but fails to describe the spiking patterns of excitatory neurons <cit.>. The importance of diversity in computing node classes becomes evident when E:I balance breakdown leads to seizure, a distinctive pathological state where information processing capacity is near zero while macroscopic neuronal activity shows high levels of synchronicity <cit.>. Additionally, thermodynamic-based measures of population activity (e.g. heat capacity) fail to differentiate macroscopic states of the system (i.e. awake, light sleep, deep sleep) <cit.>. Here, we extend these studies to further investigate multiscale properties, invariance of E:I balance, search for order parameters that differentiate states of the system, and examine the link between microscopic and macroscopic features of pathological symmetry breaking during seizure. § RESULTS Understanding cortical neural activity presents the challenge of dealing with the multiple spatial and temporal scales inherent in neural dynamics. This multiscale neural activity exhibits behaviors that resemble state transitions in other physical systems with dynamics that transcend many different length scales <cit.>. The renormalization group provides a powerful framework for studying symmetries and changes in a physical system as viewed at different scales <cit.>. In the context of phase transitions, renormalization group theory is based on the idea that only slow modes are responsible for the phase transition, and phase transitions are closely tied to long-range behavior <cit.>. The defining property of criticality, according to renormalization group theory, is that the characteristic length scale of the structure of the physical system, also known as the correlation length, becomes infinite <cit.>. Similarly, large ensembles of neural events provide data for describing long-range coupling in the temporal domain, comparable to spatial coupling in phase transitions. Information in the system is conveyed by the temporal order of neural packets and different collective states emerge as the functional network changes. 
Thus, our study relies on multiscale dynamical features of ensemble E:I balance across different states of the wake/sleep cycle to better understand collective neural activity (see Fig.<ref> and Fig.<ref>).
§.§ Multiscale Signatures of Balance Invariance
§.§.§ Excitation-Inhibition Balance Across Temporal Scales
Recently, we have shown that across all different states of the wake-sleep cycle, excitatory and inhibitory ensembles are well balanced and co-fluctuate with slight instantaneous deviations from perfect balance, mostly in slow-wave sleep. Remarkably, these correlated fluctuations are seen across many different temporal scales. The similarity of these computational features with a network model of self-generated balanced states suggests that such balanced activity is essentially generated by recurrent activity in the local network and is not due to external inputs <cit.>. To further examine invariance, we tested balance across multiple temporal scales ranging from 1 millisecond to 10 seconds (see Fig.<ref> inset). When the excitatory and inhibitory ensemble activities are normalized, they appear as scaled mirror images of each other. This invariance was observed in both the awake state and the different sleep states (i.e. slow-wave sleep, light sleep, and rapid-eye-movement sleep). As an example, a multi-step zoom across scales (Fig.<ref>) shows that when the firing of a given functional category is normalized to the total related ensemble firing power, Excitation and Inhibition mirror each other's fluctuations at every level of magnification. It is important to note that with any current technology for ensemble recordings, we are dramatically subsampling the possible space of neurons and can only extract estimates of balance rather than its exact magnitude. The true scaling of balance could only be calculated if the activity of all excitatory and inhibitory neurons were measured, a challenge not met with current technology but perhaps achievable in the far future.
§.§.§ Scale-Invariance in Correlated Fluctuations of Excitation and Inhibition in the Cerebral Cortex
A common pattern we observe is that the ensemble E-I distribution's standard deviation (and variance) shrinks around the fixed mean as we go from finer to coarser scales (see Fig.<ref> panels and Fig.<ref>A). If we look at the spread relative to the mean (i.e., the coefficient of variation) for the entire recordings, we observe a symmetric decrease of the coefficient of variation across scales (Table <ref>). This symmetry is lost when we randomize and disrupt the temporal relation between excitatory and inhibitory ensembles (Fig.<ref>B,C). This symmetric change of excitatory-inhibitory fluctuations at different scales is linked to a unidirectional symmetry of excitatory-inhibitory ensemble pairs that align with the ideal line of balance (the diagonal in Fig.<ref>A represents the ideal line of balance). The dispersion from the diagonal line of balance shows a balanced instantaneous dominance of the excitatory or inhibitory systems. This constitutes evidence for scale-invariance in the correlated fluctuations of excitation and inhibition in the cerebral cortex. Across the sleep-wake cycle, a clear symmetry is observed across multiple scales.
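As a concrete illustration of the coarse-graining used here, the sketch below bins pooled excitatory and inhibitory spike times at logarithmically spaced temporal scales, normalizes each ensemble by its neuron count and total firing power, and computes the coefficient of variation per scale. This is a minimal sketch, assuming spike times are available as one array per neuron (in seconds); the variable names and the number of scales are illustrative and are not taken from the original analysis code.

```python
import numpy as np

def ensemble_fraction(spike_times, n_neurons, t_start, t_stop, bin_width):
    """Binned ensemble firing, normalized by neuron count (spikes/neuron/bin)."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts, _ = np.histogram(np.concatenate(spike_times), bins=edges)
    return counts / n_neurons

def multiscale_balance(exc_spikes, inh_spikes, t_start, t_stop, n_scales=20):
    """Normalized E and I ensembles and their coefficient of variation
    for log-spaced bin widths from 1 ms to 10 s."""
    scales = np.logspace(np.log10(0.001), np.log10(10.0), n_scales)  # seconds
    results = {}
    for dt in scales:
        e = ensemble_fraction(exc_spikes, len(exc_spikes), t_start, t_stop, dt)
        i = ensemble_fraction(inh_spikes, len(inh_spikes), t_start, t_stop, dt)
        # rescale each ensemble to its own total firing power so E and I are comparable
        e_n = e / e.sum() if e.sum() > 0 else e
        i_n = i / i.sum() if i.sum() > 0 else i
        cv_e = e.std() / e.mean() if e.mean() > 0 else np.nan
        cv_i = i.std() / i.mean() if i.mean() > 0 else np.nan
        results[dt] = dict(E=e_n, I=i_n, cv_E=cv_e, cv_I=cv_i)
    return results
```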
The mean absolute deviation (MAD), (1/n)∑_i=1^n |x_i - X̄| (a statistical measure of dispersion, where X̄ denotes the mean), had a similar structure and similar statistical dispersion for the awake, slow-wave sleep (SWS), light sleep (LSL), and rapid-eye-movement (REM) states (see Table <ref>). We measured the dispersion, which is the distance of each point from the diagonal line of equal excitatory and inhibitory activity, in a scatter plot of excitatory versus inhibitory activity for each excitatory-inhibitory ensemble pair. The dispersion indicates how much one functional system (Excitation or Inhibition) prevails over the other. We found that the dispersion decreased across all states (awake, light sleep and slow-wave sleep) as we increased the timescale of our analysis, meaning that the excitatory and inhibitory activity became more balanced. However, the shape of the scatter plot remained similar even at very coarse timescales, meaning that the excitatory and inhibitory activity kept its features at coarser temporal scales. In both slow-wave sleep and wakefulness, the two systems have a symmetric distribution of magnitude and frequency of the ensemble fraction of excitation and inhibition. The cross-subject average skewness (a measure of how asymmetric a distribution is) of excitatory/inhibitory dominance for different states across the wake-sleep cycle was close to zero (indicating perfect symmetry). For details, see Table <ref>. This symmetry indicates that the balance is maintained across the scales. As expected, as we move to coarser temporal scales, the distribution narrows while still keeping the E:I balance. The balance of excitatory and inhibitory neurons seems to be a key feature of cortical computation, but there is a wide parameter space where input can be computationally processed while preserving balance <cit.>. In fact, it has been shown that spike timing is finely tuned by dynamic interactions between excitatory and inhibitory conductances <cit.>. Some simulations have even suggested that the spatial (proximal vs distal) distribution of inhibitory synaptic activity on pyramidal cells keeps the system more under the control of inhibition but in an excitable mode waiting for the next burst of excitation from the thalamus <cit.>.
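A minimal sketch of how such dispersion and dominance-symmetry measures can be computed from the binned ensembles is given below. The use of the perpendicular distance to the 45-degree diagonal and of scipy's sample skewness are illustrative choices made here, not necessarily the exact estimators used in the original analysis.

```python
import numpy as np
from scipy.stats import skew

def balance_dispersion(ens_e, ens_i):
    """Dispersion of (E, I) ensemble pairs around the diagonal line of balance.

    ens_e, ens_i : arrays of normalized ensemble excitation and inhibition,
    one value per time bin at a given temporal scale.
    Returns the signed distance of each pair from the E = I diagonal,
    its mean absolute deviation, and the skewness of E/I dominance.
    """
    # signed perpendicular distance from the 45-degree line of balance
    signed_dist = (ens_e - ens_i) / np.sqrt(2.0)
    mad = np.mean(np.abs(signed_dist - signed_dist.mean()))
    dominance_skew = skew(signed_dist)  # ~0 indicates symmetric E/I dominance
    return signed_dist, mad, dominance_skew
```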
In contrast, nonlinear whitening transforms (implemented through neural networks, kernel whitening or ICA) can handle non-Gaussian distributions and use a nonlinear transformation to remove higher-order dependencies. Each of these randomizations was designed to achieve a specific decorrelation objective. In “random permutation”, we first calculated the inter-spike interval (ISI) for the pooled ensemble series of each (excitatory or inhibitory) functional category, and then randomly shuffled the ISI for that category. We then reconstructed a new temporal order of ensemble spikes by cumulatively summing the shuffled ISI. This process preserves the number and distribution of spikes, but changes their arrangement within the series. In “circular shift of spike ensemble”, we first calculated the inter-spike interval (ISI) of each unit's spike series, and then shifted the spikes of each unit by a random value between 1 and the maximum ISI of that unit (note that by “unit” we refer to a single neuron). The randomized units were then aggregated to create the randomized ensemble series of each (excitatory or inhibitory) functional category. This process preserves the internal spike timing of each unit, but disrupts the timing between units in that category. The first method acts, more or less, like a linear whitening since it only disrupts the temporal order of aggregate spikes with respect to each other. In contrast, the latter method acts closer to a nonlinear whitening since it decorrelates the ensemble by disrupting the inter-unit timing of spikes, thus removing higher-order correlations. Regardless, both methods of randomization disrupt the temporal or spatial structure of the ensemble spike patterns of the neurons.
§.§.§ Limitations of Current Experimental Techniques
It is worth mentioning that the multielectrode array samples spiking activity from an area about the size of a cortical column using a 10x10 electrode grid. Although the density of neurons in a cortical column varies across species <cit.>, the Utah array and other current experimental techniques massively subsample the neurons in a cortical column, estimated to range between 10,000 and 20,000 per column <cit.>. As the ability to record the electrical activity of neurons scales up, we will have the opportunity to search for translational symmetry of excitatory-inhibitory balance across many neighboring columns. Given the underlying functional anisotropy at different spatial scales, such as our prior observation of stereotypical directionality of local field potential traveling waves at the scale of a cortical column <cit.> and in theoretical models of propagating waves across much larger patches of cortex <cit.>, the existence or lack thereof of ensemble excitatory-inhibitory translational symmetry will guide us in developing a formalism that could link micro and macroscales.
§.§ Universality of Balance
§.§.§ Collapse Curve Analysis of Ensemble Excitation-Inhibition Fluctuations
We then investigated the multiscale properties of fluctuations of cortical ensemble excitation-inhibition through rescaling. We constructed a collapse curve analysis to reveal the self-similar or self-affine properties of these fluctuations across different time scales and different states of the wake-sleep cycle. A collapse curve can be obtained by rescaling the axes of different curves of a physical quantity measured at different values of a control parameter, such that they collapse onto a single curve.
This indicates that the system exhibits some kind of universality or self-similarity. We applied this method to the observed fluctuations of ensemble activity, represented by f_obs=Ens_E - Ens_I, where Ens_E and Ens_I are the ensemble activities of excitatory and inhibitory neurons, respectively. After sorting f_obs in ascending order, we cumulatively add the values from the smallest to the largest, resulting in a symmetric parabolic curve that reflects the overall balance of ensemble excitation and inhibition (Fig.<ref>). To test if the system has self-similar or self-affine properties, we rescale the curve for each time scale so that it fits between 0 and 1 (x-axis normalization), along with normalizing the y-axis by the number of bins. This allows a reliable comparison of timeseries of different lengths (for example a longer awake versus a shorter REM recording). We then systematically identify non-collapsible regions for various comparisons: i) data against random series, ii) comparison of different scales, and iii) comparison of different states across various scales. As shown in Fig.<ref>A, surrogates obtained from various randomization methods behave similarly to each other and do not collapse onto the data curve. This observation further signifies that the observed balance is not merely a meaningless statistical attribute of the recorded timeseries but rather is related to a collective pattern of ensemble spiking. Comparing different scales (Fig.<ref>B) shows that at the coarsest scale, the surrogate collapse curve approaches the data collapse curve. Therefore, we can deduce that at very long timescales, while balanced excitation and inhibition is still a property of the system, the packet contents may no longer hold any meaningful structure from an information processing point of view. Extending this analysis to a comparison of different states across the wake-sleep cycle shows that the different states collapse, more or less, onto the same regime (Fig.<ref>C), further emphasizing the presence of an invariant balance across scales; a minimal computational sketch of this construction is given below. On a corollary note, we wish to emphasize that a scaling ansatz can help to reveal the universal features of a system and to derive scaling relations among different observables. When different curves of a physical quantity measured at different values of a control parameter can be collapsed onto a single curve by rescaling the axes, this behavior is a reflection of the scaling behavior of the system near a critical point. Therefore, the scaling collapse that we examined here connects with other attributes of collective behavior, namely neural avalanches. Neural avalanches are spatiotemporal patterns of neuronal activity that occur spontaneously in cortical networks and exhibit scale-free behavior, meaning that they lack a characteristic size or duration. Neural avalanches are often considered as a signature of criticality in the brain, a state where the system is poised at the edge of a phase transition between order and disorder. Criticality is thought to confer optimal computational properties to the brain, such as high sensitivity, dynamic range, and information processing capacity. However, there is an ongoing debate about whether neural avalanches are truly indicative of criticality or are simply a consequence of other factors, such as network topology, synaptic plasticity, or external inputs.
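Returning to the collapse construction referenced above, the sketch below sorts the ensemble fluctuations f = Ens_E - Ens_I, cumulatively sums them, rescales both axes to [0, 1], and flags non-collapsible regions where two rescaled curves diverge. It is a minimal sketch; the 5% divergence tolerance and the 1000-point comparison grid are illustrative assumptions rather than values from the original analysis.

```python
import numpy as np

def collapse_curve(ens_e, ens_i):
    """Rescaled cumulative curve of sorted ensemble E-I fluctuations.

    Sorts f = Ens_E - Ens_I, cumulatively sums it, and rescales both axes
    to [0, 1] so that curves from different scales, states, or durations
    can be overlaid and tested for collapse.
    """
    f = np.sort(ens_e - ens_i)               # sorted fluctuations
    F = np.cumsum(f)                          # parabolic cumulative curve
    x = np.linspace(0.0, 1.0, F.size)         # x-axis normalization
    C = (F - F.min()) / (F.max() - F.min())   # y-axis normalization
    return x, C

def non_collapsible_fraction(curve_a, curve_b, grid=None, tol=0.05):
    """Fraction of the [0, 1] axis where two collapse curves differ by more than tol."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 1000)
    a = np.interp(grid, *curve_a)
    b = np.interp(grid, *curve_b)
    return np.mean(np.abs(a - b) > tol)
```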
Through rigorous statistical measures, we have revealed that neural avalanches are not well described by self-organized criticality (SOC) <cit.>, which is a mechanism by which systems can spontaneously tune themselves to criticality without external tuning parameters. We argued that SOC models fail to account for the role of inhibition in shaping neural avalanches and for the diversity of spatiotemporal patterns observed in cortical networks. In addition, we have also shown that maximum entropy models, which are statistical models that capture the pairwise correlations between neurons while maximizing their entropy, are not good at capturing the intricate dynamics of excitation and inhibition <cit.>. Perhaps, together, the scaling collapse that we report here and the differential behavior of excitatory and inhibitory populations that we have reported in the past reveal a different type of criticality in cortical networks. This means that the system exhibits scale-invariant behavior over a wide range of parameters, rather than at a single critical point. This could be due to the presence of multiple competing interactions or feedback mechanisms in cortical networks, which could create multiple phases or coexisting phases near criticality.
§.§.§ Partition Curve Analysis of Neuronal Firing Renewal
A Poisson process is a simple way to describe neuronal firing as a continuous-time Markov process with independent and identically distributed holding times, where the inter-arrival times are exponentially distributed <cit.>. However, this model cannot accurately describe realistic interspike interval distributions because it does not account for neuronal refractoriness, which can lead to periods of time during which a neuron is unable to fire another spike <cit.>. This can result in gaps in the ensemble spike timeseries, making it difficult to accurately model the behavior of the ensemble. Refractoriness, particularly during the in vivo high-conductance state, plays a significant role in the variability of the observed neuronal responses <cit.>. To account for refractoriness, and the resultant neural response variability, in the stochastic description of spike trains, filtering or more sophisticated models such as renewal or Hawkes processes can be used <cit.>. Each of these methods has certain drawbacks; for example, filtering can remove short-term fluctuations but also introduces bias. Renewal theory extends the Poisson process to include arbitrary holding times and is an idealized model for random events, with the assumption that the times between events are independent and identically distributed. Renewal processes keep a memory of the last event but not of any earlier events and can be used to model ensemble spiking patterns by taking into account the time since the last event. However, renewal processes assume that inter-arrival times are independent and identically distributed, which may not hold for nonstationary data that often exhibit trends or other patterns that violate this assumption <cit.>. Spike trains of typical neurons in vivo show irregular behavior, with large variability in interspike intervals and spike counts <cit.>. Whether this irregularity is due to thermal noise, microscopic chaos, or an intricate neural code is still an open question <cit.>. Apart from intrinsic noise sources at the level of an individual neuron, there are also extrinsic noise sources due to signal transmission and network effects.
Variability can be quantified by the coefficient of variation of interval distributions <cit.>. In neurons with strong adaptation, interspike intervals are not independent, rendering renewal theory a weak model. A time-dependent generalization of renewal theory can be used to account for non-stationary real-world stimuli with a strong temporal component. Hawkes processes, which are self-exciting and can model refractoriness, can be used for both stationary and non-stationary data, but there are restrictions on their use for non-stationary data. If the Hawkes process intensity function varies with time, the model is no longer stationary <cit.>. To examine the temporal variation of the probability of an event occurring at a given time, we need new methods. We present a method to analyze the distribution of ensemble activity in a given functional category using a partition curve. The partition curve represents a size-based distribution that indicates the percentage of ensemble activity of a certain size or in a certain size interval. The shape of the partition curve can provide information about the equivalence or unequivalence of the distribution of ensemble activity. This method, in conjunction with Hawkes theory, could potentially be used to analyze ensemble spiking patterns in neurons. Analysis of the distribution of ensemble activity using a partition curve can help us to gain a deeper understanding of the relationship between ensemble spiking patterns and renewal processes. Figure <ref> shows the probability-probability plot comparing the distribution of a variable against a hypothetical uniform distribution of that variable. A partition curve is a graphical representation of the distribution of ensemble spike firing rate that plots the cumulative percentage of the variable against the cumulative percentage of the population. The partition curve can be used to measure variability, with the distance from the diagonal line indicating the degree of unequivalence in the distribution of ensemble activity. The partition curve is convex and lies below the diagonal (45-degree) line, which represents equivalence of normalized ensemble activity across time. The closer the convex partition curve is to the diagonal line, the lower the ensemble activity unequivalence; the further away it is from the diagonal, the greater the unequivalence in the distribution of ensemble activity. The partition curve shows that spike firing rates are not evenly distributed and exhibit a large amount of variability. It can be calculated for nonstationary data (using a moving window) and used to compare the distribution of the variable over time. The partition function is a measure of the volume occupied by the system in phase space and can be helpful for identifying trends in the distribution of the variable and understanding how it is affected by changes in other variables. Color-coded curves in Fig.<ref> show the partition curves for various scales (darker colors represent finer scales while brighter colors represent coarser scales). With increasing coarse-graining, the partition curves move closer to the diagonal and to the random partition curves but always maintain the characteristic non-equivalence property.
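The partition curve itself is straightforward to compute from binned ensemble activity. The sketch below is a minimal implementation that sorts the bins, accumulates their share of total activity, and builds a reference curve from a normal distribution (as done for the black reference curve in the figure); the normal-distribution parameters and the clipping to non-negative values are illustrative assumptions, not values from the original analysis.

```python
import numpy as np

def partition_curve(ens_activity):
    """Partition (Lorenz-like) curve of binned ensemble activity.

    Plots the cumulative share of total ensemble activity against the
    cumulative share of time bins, after sorting bins in non-decreasing order.
    A curve on the 45-degree diagonal means activity is evenly spread over time;
    a larger sag below the diagonal means greater unequivalence.
    """
    a = np.sort(np.asarray(ens_activity, dtype=float))  # non-decreasing order
    cum_share = np.cumsum(a) / a.sum()                   # P_i = D_i / D_n
    cum_population = np.arange(1, a.size + 1) / a.size   # C_i with uniform bin weights
    return cum_population, cum_share

# Reference curve drawn from a normal distribution, clipped to non-negative values
rng = np.random.default_rng(0)
ref = np.clip(rng.normal(loc=1.0, scale=0.2, size=10_000), 0, None)
x_ref, y_ref = partition_curve(ref)
```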
This transition toward the diagonal under coarse-graining reflects how many microstates are accessible to the system in a given ensemble under a given coarse-graining. The black partition curve is calculated from a normal distribution for comparison. This curve stands clearly separate from the partition curves of the collected data, indicating that the observed behavior is not the result of a random process.
§.§ Symmetry Breaking
§.§.§ Symmetry Breaking in Wake/Sleep States
As we move towards more coarse-grained scales, the joint distribution of coarse-grained variables tends to become simpler. This is because macroscopic behaviors, which are observed at coarser scales, are often simpler and more universal compared to their microscopic counterparts. The “fixed point of the renormalization group” refers to a state where a system remains unchanged (or invariant) under the renormalization group transformation. This property of invariance under transformations is considered a fundamental test for criticality, and is believed to hold significance beyond just thermodynamic analogies <cit.>. Integrating out high-momentum degrees of freedom and rescaling the remaining ones can simplify the study of systems that exhibit behaviors on many different length scales (as shown here, and previously <cit.>). However, simplicity at the macroscale should not be confused with reducibility to purely mechanistic microscopic phenomena. In the context of cortical computation, major functional states arise through the interaction of numerous excitatory and inhibitory neurons. An understanding of dynamics and information processing can be achieved through the study of symmetry breaking, which can be understood as the deviation from a balanced state of excitation and inhibition. Both renormalization and symmetry breaking could potentially be observed and studied using multiscale co-occurrence feature analysis of EIS (excitation/inhibition/scale) space. To analyze the properties of the joint probability of the fraction of Excitation and Inhibition across many temporal scales, we implemented a multidimensional feature analysis. From the multidimensional space of excitation/inhibition/scale (EIS), we calculated a “Multiscale Normalized Co-occurrence Matrix” (MNCM) for multiple distances and orientations, following the volumetric adaptation <cit.> of textural analysis <cit.>. The MNCM was created by calculating the frequency of occurrence of pairs of neighboring discrete points (pixels) in the multidimensional EIS space. The spatial relationship between the pixels can be defined using an “offset” parameter (where the offset relates to the spatial co-occurrence of E vs I in the EIS). Each entry in the MNCM indicates the number of times a pixel with value i is adjacent to a pixel with value j. This spatial dependence matrix provides the distribution of similar values (of E and I) at given offsets (i.e. the number of times that a given value of ensemble E co-occurred with a given value of ensemble I, considering the scale).
We then used the MNCM to measure features of distinct properties, such as contrast features (contrast, homogeneity, dissimilarity), descriptive measures (mean, variance, correlation), and orderliness features (maximum probability, entropy, angular second moment or energy). The multiscale co-occurrence matrix is analyzed using three groups of measures: the Descriptive Statistics Group, the Contrast Group, and the Orderliness Group. Each group captures different aspects of the matrix. The Descriptive Statistics Group uses statistical metrics to describe the distribution of point intensities in a given scale, including a measure of correlation between specified pairs. The Contrast Group measures variation in intensity within a scale, including homogeneity, contrast, and dissimilarity. The Orderliness Group measures order or regularity within a scale, including energy and entropy. These features are computed for each scale independently and then combined into a single feature vector representing the entire multidimensional space. Statistical descriptions of these feature vectors show marked state-dependent differences across these groups of measures (see Fig. <ref>). For example, the multiscale energy of the system distinguishes REM from the awake state, while contrast is maximally distance-dependent across domains in light sleep (LSL) and deep sleep (SWS). These calculations also show that homogeneity across multiple scales is more prominent in wakefulness and less correlated with distance in deep sleep. High contrast indicates a large amount of variability or differences in the distribution of excitatory and inhibitory activity. If contrast is distance-dependent across domains in light sleep and deep sleep, this could suggest spatial symmetry breaking: different times of sleep (like UP or DOWN states in SWS) may exhibit different balances of excitation and inhibition, perhaps due to differentially tuned localized neuronal dynamics or inputs (note that distance dependence here refers to the pair values in the EIS). Energy and Entropy show opposing features of the wake-sleep cycle. Energy, a measure of uniformity or orderliness, is higher in the awake and REM states, which have more asynchronous activity. The distinction of REM from the awake state by the multiscale energy might reflect a state-dependent shift in the balance of excitatory and inhibitory activity, a form of temporal symmetry breaking, that is unique to information processing in dreaming versus wakefulness. Entropy, on the other hand, is higher in SWS and LSL, with REM showing the lowest levels. This could be due to the varying dissociation of cortical events from thalamic events during non-awake states. Homogeneity and Correlation showed less distinctive variations across different states. Homogeneity, as a measure of the closeness of the distribution of elements in the MNCM to the MNCM diagonal, increases with the uniformity of excitatory and inhibitory activity. The prominence of homogeneity across multiple scales in wakefulness and its lesser correlation with distance in light and deep sleep may reflect a more detailed balance of excitation and inhibition during wakefulness, more localized dynamics in light sleep, and more phasic undulation during deep sleep, with the periods of intense neural activity and silence (UP and DOWN states) that characterize non-rapid eye movement (NREM) sleep.
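A minimal sketch of how such co-occurrence features can be extracted from discretized ensemble excitation and inhibition at a single scale is shown below. The discretization into integer levels, the single temporal offset, and the use of standard Haralick-style definitions (with |i-j| weights for dissimilarity) are assumptions made for illustration; the full analysis would repeat this per scale and per offset before concatenating the feature vectors.

```python
import numpy as np

def mncm_features(e_levels, i_levels, n_levels, offset=1):
    """Co-occurrence features (a simplified, single-scale sketch of the MNCM).

    e_levels, i_levels : integer-discretized ensemble excitation / inhibition
    (values 0 .. n_levels-1), one entry per time bin at a given scale.
    offset : temporal offset at which co-occurrence of E and I is counted.
    """
    P = np.zeros((n_levels, n_levels), dtype=float)
    for i, j in zip(e_levels[:-offset], i_levels[offset:]):
        P[i, j] += 1.0
    P /= P.sum()                                         # normalized co-occurrence matrix

    ii, jj = np.meshgrid(np.arange(n_levels), np.arange(n_levels), indexing="ij")
    px, py = P.sum(axis=1), P.sum(axis=0)                # marginal distributions
    ux, uy = (np.arange(n_levels) * px).sum(), (np.arange(n_levels) * py).sum()
    sx = np.sqrt(((np.arange(n_levels) - ux) ** 2 * px).sum())
    sy = np.sqrt(((np.arange(n_levels) - uy) ** 2 * py).sum())

    return {
        "contrast":      np.sum((ii - jj) ** 2 * P),
        "dissimilarity": np.sum(np.abs(ii - jj) * P),
        "homogeneity":   np.sum(P / (1.0 + (ii - jj) ** 2)),
        "energy":        np.sqrt(np.sum(P ** 2)),         # sqrt of angular second moment
        "entropy":       -np.sum(P[P > 0] * np.log(P[P > 0])),
        "correlation":   (np.sum(ii * jj * P) - ux * uy) / (sx * sy + 1e-12),
    }
```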
State-dependent differences in the balance of excitation and inhibition, and thus the degree and nature of symmetry breaking, are suggested by the observed differences across these measures. This could be due to changes in factors such as neuronal firing rates, synaptic strengths, or neuromodulator levels across different states of the wake-sleep cycle. These distinctive multiscale properties may result from intrinsic computational characteristics of asynchronous irregular (AI), synchronous regular (SR), and asynchronous regular (AR) regimes of stochastic network states <cit.>. Further modeling and theoretical studies are needed to clarify the links between stochastic network states and the multiscale features of excitatory-inhibitory balance discussed here. The balance of excitation and inhibition in the system, and hence the symmetry of the system, may be dynamically modulated across different sleep states, possibly as a mechanism for achieving different functional outcomes or maintaining homeostasis. The state-dependent changes in multiscale co-occurrence features may reflect these underlying processes. Further research, including a more detailed analysis of spatial and temporal correlation with a higher number of recorded neurons, is needed to test and refine this method.
§.§.§ Symmetry Breaking and Balance Reset in Pathological States: Insights from Seizures
Our recent study showed that the balance of excitation and inhibition breaks down during electrographic seizures <cit.>. During a seizure, a shift in excitatory and inhibitory firing can be observed (Figure <ref>A1,A2), along with significant fluctuations in the oscillatory content of the local field potential (LFP). We observed that the average wavelet coherence displays an increase of high frequencies during a seizure, followed by a return to lower frequencies. These changes in LFP oscillations reflect previously identified frequency features of seizures (in both electrocorticogram (ECOG) and LFP recordings) <cit.>. The LFP comprises multiple oscillatory patterns simultaneously; simply focusing on the maxima of the wavelet modulus does not truly encapsulate the interaction of oscillations and their alterations during seizure. Wavelet coherence is used to establish the relation between the frequency structure of the LFP and the event of interest <cit.>. The wavelet cross-spectrum is defined as the product of the wavelet transform of one signal and the complex conjugate of the wavelet transform of a second signal. Wavelet coherence is then defined as the squared absolute value of the smoothed cross-spectrum normalized by the smoothed individual power spectra from individual recording channels of the multielectrode array. By applying a “multiridge” detection, strong components across multiple concurrent frequency ranges of interest can be extracted. Normalizing the average band ridges across frequency bands illustrates the abrupt frequency shift around the seizure and indicates potential frequency changes over larger time scales. The crazy-climber algorithm, a stochastic relaxation method based on Markov chain Monte Carlo (MCMC), is used to extract ridges from the wavelet transform <cit.>. This method makes use of the energetic distribution in the time-frequency representation of the LFP to extract multiple ridges across various frequency ranges. Figure <ref>A shows these multiscale LFP coherence ridges.
These multiscale LFP coherence ridges show a sharp transition at the time that the ensemble excitatory and inhibitory units undergo a major fluctuation, indicating the onset of a seizure. Local field potentials (LFPs) are slow fluctuations of the extracellular electric potential within brain tissue, arising from the combined electrical activity of a large population of neurons in a localized region <cit.>. LFPs are influenced by the synchronous input and output activity of many neurons, including synaptic potentials and other slower transmembrane current flows <cit.>. LFPs often reflect local information about processing in a given brain region. However, the amplitude of the LFP is not a simple linear sum of synaptic activity, as the extracellular space acts as a frequency-dependent filter for the LFP. This means that the LFP is a filtered version of synaptic activity, with higher frequency components attenuated more than lower ones <cit.>. This has implications for interpreting LFP measurements, as the LFP does not directly reflect raw synaptic activity. To address this issue, we also examined the synaptic current estimated from the spiking neurons. Synaptic currents are calculated by convolving individual spike trains with an exponential kernel and normalizing based on the neuron count in each functional category. The convolution process involves weighting each spike with an exponential kernel, where weights decrease exponentially as the time difference between the spike and the current time increases. After convolution and normalization, the estimated synaptic current is obtained by subtracting the ensemble of inhibition from the ensemble of excitation. This estimated synaptic current represents a measure of the balance of excitation and inhibition in the neuronal population, with positive values indicating a net excitatory influence and negative values indicating a net inhibitory influence. A heatmap of synaptic current, estimated from normalized ensemble spiking, reveals a multiscale signature of pathological symmetry breaking during seizure (Fig.<ref>B). Two distinct bands of maximum difference, corresponding to frequency perturbations observed in LFP oscillations (Fig.<ref>A), delineate the temporal zone of the seizure. The first band represents a clear breakdown of balance, followed by another significant perturbation, possibly signifying an attempt to restore order. The breakdown of balance during seizure is followed by another interesting phenomenon. A rasterplot of excitatory and inhibitory unit activity during a 540-second epoch shows the multiscale features of the imbalance leading to a seizure (Fig.<ref>A1). Multiscale here refers to changes and effects observable at various temporal scales (fine to coarse-grained) and spatial scales (spiking and LFP). The functional restoration after a seizure points to the reset of multiscale balance, as shown by the return of the normalized cumulative activity to the balance trajectory towards the end of the epoch. The extent of this imbalance is marked by the deviation of the actual behavior (red line) from a projected balanced path (green line). This reset is compared with simulated timeseries representing stochastic (difference-stationary) and deterministic (trend-stationary) processes, neither of which shows a return to the balanced state as observed in the real data. These results suggest that the brain has multiscale adaptive features that help it return to a balanced state after a seizure.
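The estimated synaptic current described above reduces to a short computation: convolve the pooled spike counts of each functional category with a causal exponential kernel, normalize by the neuron count, and subtract inhibition from excitation. The sketch below is a minimal illustration of that procedure; the 1 ms reconstruction step and the kernel cut-off at ten time constants are assumptions made here, while the 3 ms and 10 ms time constants follow the values cited in the Methods.

```python
import numpy as np

def estimated_synaptic_current(exc_trains, inh_trains, dt=0.001,
                               tau_e=0.003, tau_i=0.010, t_stop=None):
    """Estimated net synaptic current from spike trains.

    exc_trains, inh_trains : lists of spike-time arrays (seconds), one per neuron.
    dt : time resolution of the reconstructed current (seconds).
    tau_e, tau_i : exponential kernel time constants for E (3 ms) and I (10 ms).
    Returns a signal where positive values indicate a net excitatory influence
    and negative values a net inhibitory influence.
    """
    if t_stop is None:
        t_stop = max(max(np.max(s) for s in exc_trains),
                     max(np.max(s) for s in inh_trains))
    n_bins = int(np.ceil(t_stop / dt)) + 1

    def ensemble_current(trains, tau):
        binned = np.zeros(n_bins)
        for spikes in trains:
            idx = np.clip((np.asarray(spikes) / dt).astype(int), 0, n_bins - 1)
            np.add.at(binned, idx, 1.0)                  # pooled spike counts
        t_kernel = np.arange(0.0, 10 * tau, dt)
        kernel = np.exp(-t_kernel / tau)                 # causal exponential kernel
        conv = np.convolve(binned, kernel)[:n_bins]      # causal convolution
        return conv / len(trains)                        # normalize by neuron count

    return ensemble_current(exc_trains, tau_e) - ensemble_current(inh_trains, tau_i)
```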
This balance-reset behavior was observed in the N=6 seizure recordings that we had from these patients. Though technically challenging, further studies with many more seizures are needed to verify these properties. Whether the balance breakdown and reset extend to other types of seizure (such as non-local) is an interesting question that needs future testing.
§ DISCUSSION
The macroscopic behaviors of a system, which are observed at larger scales, can often be described by simpler and more universal principles than the microscopic mechanisms that govern the behavior of individual components of the system <cit.>. Neural field models and neural mass models are different ways to model how neurons act together. Neural field models account for the spatial distribution of neuron groups in a coarse-grained area <cit.> and explain how a variable that captures neural activity changes over both space and time <cit.>. Neural mass models only track activity over time <cit.> and do not account for spatial extensions. Both of these field models (i.e. the neural mass model and the neural field model) use the same principles of population coding and temporal coarse-graining <cit.>. Mean-field models have limitations in understanding critical dynamics because they do not account for the interaction between degrees of freedom at different length scales. These models assume fluctuations are Gaussian and are limited to studying the linear response to stimuli. They neglect nonlinear interactions, which are essential for computation at macroscopic scales. To fully understand brain dynamics and its various types of criticality, more advanced methods beyond mean-field approaches are necessary <cit.>. A solution is to use the renormalization group, which works by repeatedly integrating out high-momentum degrees of freedom and rescaling the remaining degrees of freedom <cit.>. In this context, “degrees of freedom” refers to the number of independent ways that the components of a system can change, while “high-momentum degrees of freedom” often refers to the fast, microscopic details of the system, and “rescaling” means adjusting the system's description to focus on larger-scale or slower aspects. The significance of the invariance of probability distributions under repeated coarse-graining, also known as a fixed point of the renormalization group, is considered a fundamental test for criticality <cit.>. It suggests that when a system exhibits this invariance, it may be in a critical state, indicating the broader applicability of the renormalization group and its relevance to understanding complex systems. For example, the phenomenological renormalization group (PRG) can be used to analyze single-neuron recordings (from a mouse's hippocampus), revealing that some scaling characteristics remain in the supercritical regime for a finite system <cit.>. In this study, we scrutinized the ensemble excitation and inhibition using a renormalization group transformation. This process involved eliminating high-momentum degrees of freedom, followed by rescaling the remaining degrees of freedom. Subsequently, we conducted multiple tests to assess the invariance properties of the system. Our findings indicate a consistent decrease in the variance of ensemble excitatory-inhibitory (E-I) fluctuations as we move from finer to coarser scales, highlighting scale-invariance in the correlated fluctuations of excitation and inhibition in the cerebral cortex. During the sleep-wake cycle, a balance between excitatory and inhibitory activity was observed, and the balance was preserved across different scales and states.
The E:I balance, integral to cortical computation, remains intact even as the distribution narrows with coarser temporal scales. Furthermore, past studies have shown that the timing of spikes is delicately adjusted by dynamic interactions between excitatory and inhibitory conductances <cit.>. The balance between excitatory and inhibitory conductance helps to regulate the timing of action potentials <cit.>. This fine-tuning is crucial for a wide range of neural processes. Some simulations and experimental studies suggest that the spatial distribution of inhibitory synaptic activity on pyramidal cells predominantly controls the activity of the system, maintaining it in an excitable state, poised for the next wave of thalamic excitation <cit.>. So, in essence, the balance of excitatory and inhibitory influences leaves the cortex in a state where it is ready to respond quickly and appropriately to incoming information, indicating that the balance is a key feature of cortical computation. It is important to note that there is a wide parameter space where balance can be maintained while computational processing of inputs occurs <cit.>. We utilized collapse curve analysis to examine the multiscale attributes of cortical ensemble excitation-inhibition balance fluctuations throughout various wake-sleep cycle stages, offering insights into the system's self-similarity or universality. Significantly, various randomization methods resulted in surrogates that displayed similar behavior and did not collapse onto the data curve, underscoring that the observed balance is not simply a random statistical characteristic, but instead is connected to a collective ensemble spiking pattern. As the analysis reached the coarsest scale, the surrogate collapse curve started to align with the data collapse curve, implying that although the balance remains, the packet content might forfeit significant structure from an information processing standpoint at very extended timescales. Moreover, different states appeared to collapse onto the same regime, highlighting a consistent balance across scales. This exploration of scale-invariant behavior within cortical networks provides a fresh viewpoint on criticality, proposing that the system exhibits this behavior across a vast range of parameters due to potential competing interactions or feedback mechanisms in cortical networks, as opposed to at a single critical point. As we have suggested before, this could be due to multiple interacting factors or feedback mechanisms in the cortical networks <cit.>. We also developed a “partition curve” method for analyzing the distribution of ensemble activity. The partition curve offers a way to assess ensemble activity distribution based on size or intervals. Notably, the curve can shed light on the level of equality, or lack thereof, in ensemble activity distribution. When used in conjunction with Hawkes theory, this tool could aid in the exploration of neuron ensemble spiking patterns. A noteworthy finding from our partition curve was the significant variability in spike firing rates, suggesting a non-uniform distribution. In the case of nonstationary data, the partition curve can monitor temporal changes. Further analysis comparing our findings with a normal distribution supported the conclusion that the patterns observed were not simply the result of random processes.
This implies that the patterns and distributions identified in the neuronal ensemble activity are potentially meaningful and could provide valuable insights into how these neurons function as a group. At coarser scales, the partition curves appeared to converge towards the diagonal and random partition curves, whilst preserving a unique non-equivalence attribute. This shift signifies the spectrum of microstates, or short-term, specific states of activity, that the system can access within an ensemble at distinct coarse-graining levels. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system. The specification of all microstates of a system is a point in the system's phase space. Entropy, in both thermodynamics and information theory, is a measure of the number of possible microstates or configurations that a system can have. In this sense, there is a connection between degrees of freedom, states of the system, and information (in terms of entropy). The more degrees of freedom a system has, the more possible microstates it can have, and the higher its entropy will be. This means that the system at finer scales, with more degrees of freedom, can contain more information. We also introduce multiscale descriptors that examine degrees of freedom, rescaling, and the simplification of joint distributions under coarse-graining, particularly in the context of criticality and macroscopic behavior. These features, computed from a Multiscale Normalized Co-occurrence Matrix (MNCM), are descriptive statistics that can capture different aspects of the spatial distribution of intensities in the multidimensional landscape of excitation/inhibition/scale. Specific attributes, like correlation, contrast, homogeneity, energy, and entropy, offer unique insights into the system's behavior. This reduces the complexity of the multiscale data to a set of numbers that describe key aspects of its structure. The MNCM analysis in this study is applied across multiple temporal scales to shed light on the collective behavior across scales, effectively quantifying “the degrees of freedom” of the system as it undergoes rescaling. Specifically, multiscale energy, indicating uniformity, differentiates REM sleep from wakefulness, suggesting unique shifts in neural activity balance. Contrast, which measures variability, exhibits a strong distance-dependency in light and deep sleep, hinting at spatial symmetry breaking. Homogeneity, indicating uniform activity, is pronounced in wakefulness and less distance-correlated in deep sleep, indicating intricate neural balances and localized dynamics, respectively. Entropy, measuring disorder, is highest in deep and light sleep and lowest in REM, reflecting the dissociation of cortical and thalamic events in non-awake states. In biological neuronal networks, symmetry breaking might be seen in the transition from balanced, coordinated activity to a state where certain neural pathways are favored over others. The MNCM could potentially capture such transitions by quantifying the joint probabilities of different patterns of excitation and inhibition and discern state-dependent differences in neural activity.
§.§ Concluding Remarks
In his seminal paper More is different, Philip Anderson speculated that “internal structure of a physical matter does not need to be symmetrical even if total state of it is” and “the state of a really big system does not have to have the symmetry of the laws which governs it” <cit.>.
He argues that a key feature in biological systems may be the existence of temporal ordering and periodicity in addition to their broken symmetry. We suggest that the nervous system resides at a unique position in the abstract computational state space: while the overall ensemble excitatory-inhibitory balance is both scale-invariant and state-independent (wakefulness, REM, or NREM), at any given time t, the momentary dominance of either excitatory or inhibitory neurons is more pronounced at finer time scales, and the degree to which this happens is state-dependent. As we move towards coarser time scales, the system becomes more rigid with respect to the possibility of momentary symmetry breaking. Symmetry breaking is the bedrock of information <cit.>, and momentary symmetry breaking in cortex is utilized to process information (awake), consolidate memory (slow-wave sleep), or create new associations (REM). However, these symmetry-breaking events with dominance of excitatory or inhibitory neurons are then compensated by opposite dominance, maintaining a scale-invariant overall balance of ensemble excitatory-inhibitory neurons. In a sense, it seems that the neocortex does not have the rigid symmetries and end-state spontaneous symmetry breaking of crystals or magnets that bring those systems to a minimum-energy state. Rather, to maintain the capacity to process information reliably, the cortex sacrifices the minimum-energy state through a rebound after any symmetry breaking. The dynamic properties of ensemble excitatory-inhibitory balance show parallels with symmetry breaking in dissipative structures <cit.>. Previously, the link between the nature of dissipative non-equilibrium and symmetries in dynamical systems prompted considerations of symmetry breaking in large-scale neural activity <cit.>. This heuristic model of visual hallucination (an extension of <cit.>) has certain biologically implausible features (such as a homogeneous planar sheet of neurons with radially isotropic connections), but the presence of both excitatory and inhibitory neurons and recurrent feedback enables the neural field equation to produce a variety of spatial patterns when instability of the homogeneous resting state arises. What is shared between this model, our quantitative frame, and dissipative structures is the presence of symmetry breaking and unstable symmetry as their dynamical signatures. In fact, the instability of symmetry in dissipative structures may be the reason why biological systems need to constantly harvest energy to return to the unstable symmetric state <cit.>. We speculate that perhaps the significant energy usage in the cortex serves to re-establish the unstable symmetry that is constantly breaking during information processing. Dissipative structures share many attributes with biological systems, such as multistability, sustained oscillations, and spatiotemporal patterns manifesting as propagating waves <cit.>. Our prior examinations of cortical field potentials show the presence of anisotropic traveling waves and dominant oscillatory events across the wake/sleep cycle <cit.>. These are signatures of a complex adaptive system with many computational length scales, lending itself to a form of invariant ensemble excitatory-inhibitory balance where momentary symmetry breaking at fine timescales provides the capacity to process information.
Note that seizures are manifested by abnormal symmetry breaking that does not stay limited to momentary excitatory or inhibitory dominance but rather turns into a disruption of ensemble balance across all scales. In parallel with this multiscale breakdown in ensemble excitatory-inhibitory balance, a shift of spiking energy in the estimated synaptic current coincides with transient oscillatory changes in the LFP (a rapid δ-like rise of higher frequencies and suppression of slow frequencies). During this period of abnormal symmetry breaking, information processing comes to a halt and the patient suffers from loss of consciousness. The multiscale breakdown of balance is then followed by a return to a balanced state, further confirming the presence of scale-invariant balance in a complex adaptive fashion. It is important to emphasize the distinction between the simplicity of laws and their outcomes. The outcomes, such as the intricate and seemingly chaotic patterns of spiking activity, do not have to possess the same simplicity as the underlying organizing principles, namely the scale-invariant ensemble excitatory-inhibitory balance. By quantifying the catalog of invariances (balance across states) at different temporal and spatial scales, we can search for governing principles and diverse neural activity patterns that emerge from these simple laws.
§ METHODS
§.§ Experimental data
For this study, we used recordings from multielectrode array implants in layers II/III of the human temporal cortex. The electrodes in these arrays are arranged in a 10x10 grid with 400 μm spacing, covering an area of 4x4 mm (see Fig.<ref>). The four corner electrodes are used for grounding, leaving a total of 96 active recording electrodes. Data are initially sampled at 30 kHz, then filtered and thresholded for spike detection. We used a combination of spike-waveform characteristics and the sign of the short-latency cross-correlogram of spike time pairs to classify units as either excitatory (E) or inhibitory (I). Video, electrocorticogram (ECOG), and scalp EEG were used to classify the 12-hour recordings into state labels: Awake, Light Sleep (stages II and III), Deep Sleep (Slow-Wave Sleep), and Rapid Eye Movement (REM). For further details on recording methods and the morpho-functional characterization of excitatory and inhibitory neurons, see the cited references <cit.>.
§.§ Renormalization: Spiking coarse-graining and field estimates
§.§.§ Ensemble Excitation and Inhibition. If we define each neuron as an independent particle and its stochastic spiking as random fluctuations in space, the observed randomness can be absorbed by redefining a cluster of neurons (instead of a single neuron) to account for self-interactions, pairwise interactions, and higher-order interactions. This ensemble is then temporally coarse-grained by summing all spikes in a given time-bin, with scales ranging from 1 millisecond to 10 seconds (equally spaced in a logarithmic fashion) to specify the scale of observation. The logarithmic scale is chosen to reduce computational burden, with a denser spread of scales at finer time resolutions. While these recordings yield a high number of neurons from each array, we face both massive subsampling (in our case, hundreds of neurons out of tens of thousands in the columnar region of recording) and spatial nonuniformity.
Although the roughly 4:1 ratio of excitatory to inhibitory neurons in our recordings matched anatomical data, and inhibitory neurons fired at roughly four times the rate of excitatory neurons <cit.>, given the subsampling and nonuniformity, the ensemble series are normalized by the number of neurons. This procedure was done independently for the excitatory and inhibitory cell categories to yield ensemble fractions of excitation and inhibition.
§.§.§ Synaptic current. To explain the multiscale dynamics of spiking excitation and inhibition with the multiscale representation of local field potentials (LFPs) obtained from their wavelet transforms, we used unit activity to reconstruct an estimated synaptic current. It is suggested that synaptic current generates LFPs <cit.>, which reflect a field measure of the tens of thousands of neurons present in the recording area from which spikes are sampled. The modeling involved convolving each spike with an exponential kernel <cit.>:
C(t) = ∫_-∞^t D(t') exp[-(t-t')/τ_s] dt'
Here, C(t) is the estimated synaptic current, D(t') represents the spike train, and τ_s is the time constant of the synaptic current. Different time constants (τ_s) are used for the convolution depending on whether the spike is excitatory (3 milliseconds) or inhibitory (10 milliseconds). This is based on prior studies that found differences in the time constants for excitatory and inhibitory synaptic currents <cit.>. The synaptic current at any time is a weighted sum (or integral, in the continuous case) of the spike train, where the weights decrease exponentially as the time difference between the spike and the current time increases. After convolving the spike trains with their respective kernels, the estimated synaptic current is obtained by subtracting the ensemble of inhibition from the ensemble of excitation. This result is then corrected by the number of cells in each category (excitatory or inhibitory).
§.§ Randomization of ensemble activity
§.§.§ Random permutation. We used this test to verify that randomizing the aggregate spike series by itself cannot mimic the observed excitatory-inhibitory balance. After calculating the inter-spike interval (ISI) for the pooled ensemble series of a given functional category (Excitatory or Inhibitory), a random permutation of the ISI was performed for that functional category's ensemble series. This was followed by a cumulative summation of the ISI to create a new temporal order of ensemble spikes. This process ensures that the randomized ensemble series has the same number of spikes and the same set of ISIs as the original ensemble series, but with a different arrangement of spikes within the series. The observations verify that the excitatory and inhibitory balance is not reproducible by randomization of ensemble spikes and that the observed fluctuations f_obs = Ens_E-Ens_I are not the result of random events such as f_randperm = SurrogateEns_E-SurrogateEns_I.
§.§.§ Fixed-ISI circular shift of spike ensemble. Before creating the ensemble series, the inter-spike interval (ISI) of each unit's spike series was calculated (by “unit” here, we refer to a single neuron). The spikes of each unit were then circularly shifted by a random value between 1 and the maximum ISI of that unit's spike series. The randomized units were then aggregated to create the randomized ensemble series of a given functional category (Excitation or Inhibition). This process ensures that the resulting ensemble series is made up of units with intact internal spike timing but with disrupted timing between units in that functional category.
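The two randomization procedures above can be summarized in a short sketch. It is a minimal illustration in which the random shift is drawn uniformly up to each unit's maximum ISI and wrapped around the recording length; these are assumptions that follow the description here rather than the exact original implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def isi_permutation_surrogate(ensemble_spike_times):
    """Random permutation surrogate: shuffle the ISIs of the pooled ensemble
    spike series and rebuild spike times by cumulative summation."""
    t = np.sort(np.asarray(ensemble_spike_times))
    isi = np.diff(t)
    shuffled = rng.permutation(isi)
    return t[0] + np.concatenate(([0.0], np.cumsum(shuffled)))

def circular_shift_surrogate(unit_spike_trains, t_stop):
    """Fixed-ISI circular-shift surrogate: shift each unit's spike train by a
    random offset (up to that unit's maximum ISI), wrapping around the recording,
    then pool the shifted units into a surrogate ensemble series."""
    surrogate = []
    for spikes in unit_spike_trains:
        s = np.sort(np.asarray(spikes))
        max_isi = np.diff(s).max() if s.size > 1 else t_stop
        shift = rng.uniform(0.0, max_isi)
        surrogate.append(np.mod(s + shift, t_stop))      # circular (wrap-around) shift
    return np.sort(np.concatenate(surrogate))
```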
The results show that by destroying the relation between ensemble inhibition and ensemble excitation while preserving their internal structure, the observed fluctuations and, most importantly, the tightly bound relation of Excitation and Inhibition are lost.
§.§ Multiscale collapse of ensemble activity
The observed fluctuations of ensemble activity, represented by f_obs = Ens_E-Ens_I, were first sorted in ascending order. A cumulative summation was then performed on the series, resulting in a parabolic curve showing mirror symmetry due to the overall balance of ensemble Excitation and Inhibition (Fig.<ref>). To examine self-similar or self-affine characteristics, a collapse curve was obtained by rescaling the curve for a given scale so that it was bounded between 0 and 1 (x-axis normalization). We then systematically identified non-collapsible regions for each of the comparisons (A: data against random series; B: comparison of different scales; C: comparison of different states across various scales). A collapse curve is thus a rescaled shape (of the cumulative sum of the sorted fluctuations of ensemble activity) for different durations that collapses onto a single universal curve. The general form of the collapse curve for ensemble excitation and inhibition can be written as:
C(t) = (F(t) - F_min)/(F_max - F_min),
where C(t) is the collapse curve, F(t) is the cumulative sum of the sorted fluctuations of ensemble activity, F_min and F_max are the minimum and maximum values of F(t), respectively, and t is the time scale.
§.§ Ensemble activity partition
Let A denote the fraction of ensemble activity of a given functional category (either excitation or inhibition), let B denote the discrete order of A given by values b_1, b_2, ..., b_n (in non-decreasing order) and their probabilities g(b_j) := P(B=b_j). The partition curve is defined as a continuous piece-wise linear function (P) connecting the points (C_i, P_i) such that:
C_i := ∑_j=1^i g(b_j),  D_i := ∑_j=1^i g(b_j) b_j,  P_i := D_i/D_n.
With B bounded between 0 and 1, the partition curve represents a size-based distribution (as a list of values) that indicates the percentage of ensemble activity of a certain size or in a certain size interval. The partition curve is convex and lies below the diagonal (45-degree) line, which represents equivalence of normalized ensemble activity across time. The closer the convex partition curve is to the diagonal line, the lower the ensemble activity unequivalence; the further away the partition curve is from the diagonal, the greater the unequivalence in the distribution of ensemble activity. Color-coded curves in Fig.<ref> show the partition curves for various scales (darker colors represent finer scales while brighter colors represent coarser scales). The black partition curve is calculated from a normal distribution for comparison.
§.§ Multiscale descriptors of co-occurrence of Excitation and Inhibition
To analyze the properties of the joint probability of the fraction of Excitation and Inhibition across many temporal scales, we implemented a multidimensional textural feature analysis. From the multidimensional space of excitation/inhibition/scale (EIS), we calculated a “Multiscale Normalized Co-occurrence Matrix” (MNCM) for multiple distances and orientations, following the volumetric adaptation <cit.> of textural analysis <cit.>. The MNCM was created by calculating the frequency of occurrence of pairs of neighboring discrete points (pixels) in the multidimensional EIS space with specific normalized values.
§.§ Multiscale descriptors of co-occurrence of Excitation and Inhibition To analyze the properties of the joint probability of the fractions of Excitation and Inhibition across many temporal scales, we implemented a multidimensional textural feature analysis. From the multidimensional space of excitation/inhibition/scale (EIS), we calculated a “Multiscale Normalized Co-occurrence Matrix” (MNCM) for multiple distances and orientations, following the volumetric adaptation <cit.> of textural analysis <cit.>. The MNCM was created by calculating the frequency of occurrence of pairs of neighboring discrete points (pixels) in the multidimensional EIS space with specific normalized values. The spatial relationship between the pixels is defined using an “offset” parameter (where the offset relates to the spatial co-occurrence of E versus I in the EIS). Each entry in the MNCM indicates the number of times a pixel with value i is adjacent to a pixel with value j. This spatial dependence matrix provides the distribution of similar values (of E and I) at given offsets (i.e., the number of times that a given value of ensemble E co-occurred with a given value of ensemble I in the EIS). We then used the MNCM to measure features of distinct properties: contrast features (contrast, homogeneity, dissimilarity), descriptive measures (mean, variance, correlation), and orderliness features (entropy, angular second moment or energy). Considering P(i,j) as the joint distribution of i (ensemble excitation dimension) and j (ensemble inhibition dimension) of the original multidimensional matrix, p(i,j) is the (i,j)th entry of the normalized MNCM (i.e., the spatial dependence matrix). The normalization of P(i,j) is performed by dividing by the sum of all its values. N_g is the number of distinct normalized levels in the excitation/inhibition/scale space obtained after constructing the MNCM. The feature from the “descriptive statistics” category was defined as: Correlation = [∑_i ∑_j (i · j) p(i,j) - μ_x μ_y] / (σ_x σ_y), where p_x and p_y are the marginal (partial) probability density functions and μ_x, μ_y and σ_x, σ_y are their means and standard deviations, respectively. Measures from the “contrast” group were: Contrast = ∑_i=1^N_g ∑_j=1^N_g (i-j)^2 p(i,j), Dissimilarity = ∑_i=1^N_g ∑_j=1^N_g |i-j| p(i,j) (similar to contrast, but with linear weights), Homogeneity = ∑_i=1^N_g ∑_j=1^N_g p(i,j)/(1+(i-j)^2), and the features of “orderliness” were: Angular Second Moment (ASM) = ∑_i=1^N_g ∑_j=1^N_g p(i,j)^2, Energy = sqrt(ASM), Entropy = -∑_i=1^N_g ∑_j=1^N_g p(i,j) log(p(i,j)). The “Descriptive Statistics Group”, “Contrast Group”, and “Orderliness Group” capture different aspects of the MNCM. The Descriptive Statistics Group comprises statistical metrics that describe the overall distribution of point intensities at a given scale; one key feature of this group is “Correlation”, which measures the joint probability of occurrence of specified pairs. The Contrast Group includes several measures of contrast or variation in intensity within a scale: “Homogeneity” measures the closeness of the distribution of elements in the MNCM to the MNCM diagonal, “Contrast” measures the local variations in the MNCM, and “Dissimilarity” measures the variation of pairs in the MNCM. The Orderliness Group includes measures of order or regularity within a given scale: “Energy” (the square root of the angular second moment, i.e., of the sum of squared elements in the MNCM) and “Entropy”, which measures the complexity or disorder at a given scale. The application to volumetric (multiscale) data involves computing these features for each scale independently and then combining the resulting feature vectors into a single feature vector representing the entire multidimensional space.
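A two-dimensional slice of the co-occurrence computation and the features listed above can be sketched as follows; the quantization to 8 levels, the restriction to non-negative offsets, and the small epsilon guarding log(0) are assumptions made for the example rather than choices stated in the text. Per the text, these features would be computed for each scale independently and concatenated into a single multiscale feature vector.

import numpy as np

def cooccurrence_matrix(levels_2d, offset=(0, 1), n_levels=8):
    # Normalized co-occurrence p(i, j) of an integer-valued 2-D array
    # (one slice of the excitation/inhibition/scale space); offsets must be >= 0 here.
    dr, dc = offset
    src = levels_2d[: levels_2d.shape[0] - dr, : levels_2d.shape[1] - dc]
    dst = levels_2d[dr:, dc:]
    P = np.zeros((n_levels, n_levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1.0)
    return P / P.sum()

def mncm_features(p):
    # Haralick-style features of a normalized co-occurrence matrix p(i, j).
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    px, py = p.sum(axis=1), p.sum(axis=0)
    mx, my = (np.arange(n) * px).sum(), (np.arange(n) * py).sum()
    sx = np.sqrt(((np.arange(n) - mx) ** 2 * px).sum())
    sy = np.sqrt(((np.arange(n) - my) ** 2 * py).sum())
    asm = np.sum(p ** 2)
    eps = 1e-12
    return {
        "contrast": np.sum((i - j) ** 2 * p),
        "dissimilarity": np.sum(np.abs(i - j) * p),
        "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),
        "asm": asm,
        "energy": np.sqrt(asm),
        "entropy": -np.sum(p * np.log(p + eps)),
        "correlation": (np.sum(i * j * p) - mx * my) / (sx * sy + eps),
    }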
§.§ Wavelet Coherence and Ridge Detection To relate the frequency structure of the local field potential (LFP) to the estimated synaptic current, we used wavelets to calculate coherence <cit.>. The wavelet cross-spectrum is defined as the product of the wavelet transform of signal x and the complex conjugate of the wavelet transform of signal y. The squared absolute value of the smoothed cross-spectrum, normalized by the smoothed individual power spectra, defines the wavelet coherence: W_n^XY(s) = W_n^X(s) W_n^Y*(s), C_n^2(s) = |⟨ s^-1 W_n^XY(s) ⟩|^2 / ( ⟨ s^-1 |W_n^X(s)|^2 ⟩ ⟨ s^-1 |W_n^Y(s)|^2 ⟩ ), where ⟨·⟩ denotes the smoothing operator and s the wavelet scale. To extract ridges from the wavelet representation, we used the crazy-climber algorithm, a stochastic relaxation method based on Markov chain Monte Carlo (MCMC) that can extract multiple ridges from the analyzed signals <cit.>. In its algorithmic procedure, the energy distribution in the time-frequency representation of the LFP is used to extract multiple ridges across the frequency range of the LFP.
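For completeness, a minimal wavelet-coherence sketch using PyWavelets' continuous transform is given below; the complex Morlet wavelet 'cmor1.5-1.0', the boxcar time-smoothing whose width grows with scale, and the absence of scale smoothing are simplifying assumptions, since the exact wavelet family and smoothing operator are not restated here.

import numpy as np
import pywt

def wavelet_coherence(x, y, scales, fs, wavelet="cmor1.5-1.0"):
    # Continuous wavelet transforms of the two signals (e.g., LFP and estimated current).
    Wx, _ = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / fs)
    Wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=1.0 / fs)
    s = np.asarray(scales, dtype=float)[:, None]
    Sxy = Wx * np.conj(Wy) / s          # scale-normalized cross-spectrum
    Sxx = np.abs(Wx) ** 2 / s           # scale-normalized auto-spectra
    Syy = np.abs(Wy) ** 2 / s

    def smooth(A):
        # Boxcar smoothing along time, with window proportional to the scale.
        out = np.empty_like(A)
        for k in range(A.shape[0]):
            w = max(3, int(round(s[k, 0])))
            out[k] = np.convolve(A[k], np.ones(w) / w, mode="same")
        return out

    num = np.abs(smooth(Sxy)) ** 2
    return num / (smooth(Sxx) * smooth(Syy) + 1e-12)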
http://arxiv.org/abs/2306.08198v1
20230614015317
Explainable and Position-Aware Learning in Digital Pathology
[ "Milan Aryal", "Nasim Yahyasoltani" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Explainable and Position-Aware Learning in Digital Pathology Milan Aryal and Nasim Yahyasoltani The authors are with the Department of Computer Science, Marquette University, Milwaukee, WI, USA (e-mail: {Milan.aryal, nasim.yahyasoltani}@marquette.edu). Part of this work was presented at the IEEE ICASSP Conference held in Greece, during June 4-10, 2023. July 31, 2023 ============================================================================================================================================================================================================================================================================================================ Encoding whole slide images (WSI) as graphs is well motivated since it makes it possible for the gigapixel resolution WSI to be represented in its entirety for the purpose of graph learning. To this end, WSIs can be broken into smaller patches that represent the nodes of the graph. Then, graph-based learning methods can be utilized for the grading and classification of cancer. Message passing among neighboring nodes is the foundation of graph-based learning methods. However, they do not take into consideration any positional information for any of the patches, and if two patches are found in topologically isomorphic neighborhoods, their embeddings are nearly similar to one another. In this work, classification of cancer from WSIs is performed with positional embedding and graph attention. In order to represent the positional embedding of the nodes in graph classification, the proposed method makes use of spline convolutional neural networks (CNN). The algorithm is then tested with the WSI dataset for grading prostate cancer and kidney cancer. A comparison of the proposed method with leading approaches in cancer diagnosis and grading verify improved performance. The identification of cancerous regions in WSIs is another critical task in cancer diagnosis. In this work, the explainability of the proposed model is also addressed. A gradient-based explainbility approach is used to generate the saliency mapping for the WSIs. This can be used to look into regions of WSI that are responsible for cancer diagnosis thus rendering the proposed model explainable. Computational pathology, graph learning, position embedding, whole slide images. § INTRODUCTION To diagnose and classify cancer, a tissue sample or excision need to be evaluated by a pathologist. To this end, tissue samples are processed and cut into small layers before being put on a glass slide and are then stained using histological hematoxylin and eosin (H&E). The pathologist can diagnose any malignancy by scanning and examining the slides under the microscope. Digitized versions of H&E slides are known as WSIs <cit.> where they can be digitized at different microscopic resolutions. Despite the high-resolution and computational complexity of WSIs, their use in computational pathology and machine-based cancer diagnosis is increasing. In fact, recent advances in deep learning has facilitated the adoption of WSIs for the computer-aided diagnosis and grading of cancers. However, even with recent progresses made in computational methods for digital pathology, processing of WSIs for training in deep learning is considered a challenging task. The fact that WSIs have billions of pixels in a single file typically larger than a gigabyte, makes it challenging to train with common methods such as CNN. 
Recently, multi-instance learning (MIL) approaches have been shown to be very successful in learning of the WSIs <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. In MIL, instead of learning individual labels, the instances are bagged together and the training is performed on them. More specifically, the WSI can be broken into many patches and then bagged. In the case of binary classification tasks such as cancer diagnosis, if the bag consists of a single patch of cancerous tissue, the WSI is labeled as positive; otherwise it is categorized as negative. Once the patches are bagged together they are passed through CNN and transfer learning for training. Although, the accuracy obtained through this method is very promising, there are limitations that need to be considered. For example, MIL does not take into account the neighboring patches and some patches in WSI may not get selected for training. Currently, the research in WSI learning is shifting from MIL learning to graph-based learning <cit.>, <cit.>, <cit.>, <cit.>. Representation learning in graphs is performed through an encoder that maps nodes to a low-dimensional embedding space. Encoders encode nodes as low-dimensional vectors that summarize their local neighborhood. These embeddings enable tasks like node classification, link prediction, graph classification, and graph generation <cit.>, <cit.>, <cit.>. Recently, graph neural networks (GNNs) has become a very popular way to encode nodes for embedding. To learn node embeddings, GNNs take into account the local neighborhood information of a node. The advantage of graph-based learning over MIL is that full-resolution WSI can be represented as a graph, thus encoding tumor neighborhood information. Then, the whole graph is learnt through a variant of message passing algorithms followed by the global aggregation method and multi-layer perceptron (MLP) for the cancer classification. Despite all the recent advances in the adoption of GNN in WSI classification, they fall short in distinguishing nodes with similar local neighborhoods  <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. In fact, the position of the nodes with respect to the graph is not captured in traditional GNN and if two nodes are located in topologically isomorphic neighborhoods, their embedding is almost identical. To this end, modifying GNN such that it takes into account node features and captures the position of the nodes is well motivated for improving representation learning <cit.>. With WSI broken into patches, one needs to take into account the spatial information of each patch and capture the information between those patches. This can be done by capturing the position embedding of each patch. Through an spline CNN algorithm <cit.>, we aim at capturing the position embedding of patches. Once the position embedding of the patches is captured, we then use an attention-based method to perform the message passing among patches. As not all patches are cancerous in WSI, we want to learn differently from each patch and give different attention to it. This way, we can identify cancerous regions, more efficiently. The degree to which the deep learning model can be explained is of critical significance. The more readily explicable a model is, the more effectively it may be interpreted. When the model's interpretability and explainability are improved, the prediction that the model generates can be comprehended more easily. 
This helps in constructing models that are more trustworthy, and thus more applicable in medical diagnosis. Simple machine learning models, such as linear regression, are highly interpretable, but deep learning models are generally less interpretable due to their complexity. However, deeper models often offer a higher accuracy compared to simpler models <cit.>. Recently, there has been more focus on developing algorithms to make complex models explainable. The use of CNN has huge success in vision related task such as image classification, segmentation and object detection. The explainablity of CNN-based algorithms has been widely explored in recent years. In <cit.>, <cit.>,  <cit.>, <cit.> CNN model is made explainbale by visualizing the area of interest in image classification. The explainability model for CNN model has been expanded to graph-based models as well in <cit.>. In graph-based learning the explainabilty amounts to identifying the importance of each node in prediction. The explainability of a model is of paramount importance in the context of cancer detection, as it helps to increase trust in the model's predictions. When working with WSI, it is essential to pinpoint the particular areas of the image that are accountable for making the cancer diagnosis. Because malignant cells may be concentrated in some areas, these regions might not necessarily be present throughout the entirety of the image. In this work, the goal is to use an explainability method that is based on graph neural networks in order to locate areas related top malignancy inside WSIs. By identifying the specific regions of the image that are responsible for the cancer diagnosis, the model can provide more accurate and trustworthy predictions. To summarize, the contribution of this work is three-fold: 1) It introduces a graph-based self-supervised position-aware algorithm for cancer diagnosis using WSIs; 2) To encode position information of the patches in the WSIs obtained through spline CNN, an attention network is proposed to promote different weights on cancerous versus non-cancerous nodes; and 3) Based on gradient-weighted class activation mapping (Grad-CAM), an explainability algorithm for the proposed model is developed. Using Grad-CAM the heatmaps are generated to identify the regions of interest in WSIs which can be later used to visualize and detect the cancerous regions. This paper is organized as follows. Section <ref> addresses the related work. In section <ref>, the proposed method is presented. The implementation details are shown in section <ref>. Section <ref> presents the performance of the algorithm for different types of cancer WSIs and discussion of the results is followed in section <ref>. The conclusion is provided in section <ref>. § RELATED WORK The use of GNN-based methods are increasing in WSI for cancer diagnosis. The WSIs are images of digitized histopathological slides and are not originally represented as graphs. So, the first challenge in using graph-based learning approaches is to find a way to represent the image data as a graph. Once WSIs are encoded as graphs, different graph representation learning methods can be deployed. The common approach to construct the graph is to form cell graph  <cit.>. In this method, first the nuclei in the WSIs are identified and represented as the nodes in the graph. Then, the graph is formed with edges connecting other neighboring nuclei in the tissue. Another approach is to form patch-graph method. 
In this method, the WSI is broken into patches as nodes and the graph is constructed from those nodes. In <cit.>, the authors addressed the classification of WSIs for breast cancer using graph convoutional network (GCN) <cit.>. More specifically, the WSIs were constructed as graphs using the constituents cells with nuclear morphology as vertex features and the gland formation as edge attributes. In <cit.>, a model for gleason score grading of prostate cancer has been proposed. The authors constructed a graph with nuclei as the nodes and the distance between each node as the edge. The features of each node are extracted using contrastive predictive coding and morphological features. Then, this graph was passed into GCN for the grading of high-risk and low-risk cancer patients. These methods use cell nuclei for the construction of the graphs from WSIs. Other methods used in the graph construction from WSIs include patch-based methods. In <cit.> GCN-based multi-instance learning method is implemented for the classification of lymph node. First, the patches with regions of interest are selected and bagged together. Then, this bag of patches is passed into auto-encoder and GCN for feature selection and classification, respectively. In <cit.> the genetic mutation of colon cancer is modeled. Randomly selected patches from each WSIs act as nodes and CNN is used to extract features for each node. The constructed graph is passed through spatial-GCN to predict the mutations. Survival analysis is performed using graph-based methods in <cit.>. The existing methods for the graph-based learning of WSIs use popular message passing techniques such as GCN and graph attention network (GAT), which do not take node position in the graph into account. In this work, position embedding is also captured. Explainability is another aspect in improving the interpretability of a model. This makes it easy to explain the model and its outcome. The literature in explainabilty of graph-based models in WSIs is very limited. In <cit.> the authors have presented an explainability method based on the importance of the nuclei for cancer classification. § METHOD The system diagram of the proposed model is presented in Fig. <ref>. The process begins by converting the WSI into a graph, with each patch of the WSI being represented as a node and edges connecting those nodes. In this work, the model aims to learn the position of each node in the graph. This allows to maintain the spatial information of each patch in WSI even though it is represented as graph data. We deploy Spline CNN to learn the position of each node in the WSI  <cit.>. After capturing the position embedding, the graph is passed through the attention network. The model consists of 2 layers of attention network. The first layer of attention network consists of multi-headed attention followed by single attention head network. Following the attention layer, the model classifies the WSI using the graph classification. The results from attention layers are global pooled using mean pooling. This layer is forwarded to multi-layer perceptron network and then to a classification layer for cancer grading. §.§ Graph construction and feature extraction The graph data structure is represented as G=(V,E,X) where V, E and X denote the set of nodes, the edge set, and node features, respectively. It is necessary to transform the WSI into graphs in order for each WSI to be represented as G=(V,E,X). 
The aim of using graph-based learning for WSIs is to learn from the whole resolution of WSI. So, the graph is constructed from the highest level of resolution of WSI. The graph formation of WSI is implemented by breaking it into smaller patches. In this work, WSI is broken into non-overlapping patches of size 256×256. Since each patch acts as a node in the graph, its position in the WSI is used to construct connectivity between corresponding neighbors. Based on fast approximate k-nearest neighbor <cit.>, the connectivity between nodes are established. Since each WSI is different in size, the number of nodes in each graph is different. For the constructed graph in this work, we consider V as the set of patches (nodes) and the edge set E is given by connection between those patches. With N as the number of nodes in the graph G, the set of features for each node, i.e., X≜{x_v}_v=1^N is obtained by extracting features from each node, where x_v is the feature vector of node v. For feature extraction of each patch, self-supervised learning method is deployed. Typically, medical data suffers from limited number of annotated data. The lack of annotated data makes self-supervised algorithms suitable for extracting features from patches. More specifically, contrastive learning framework is used to train patches <cit.>, <cit.>. This way, feature extraction of patches is performed with an unsupervised learning approach. In Fig. <ref>, the feature extraction method for patches based on contrastive learning is visualized. The contrastive learning approach maximizes the similarity between the positive pair and dissimilarity among the negative pair. The augmentation is used to generate multiple view of the same images to create the positive pairs. When the network trains the augmented view of different patches, those are considered to be negative pairs. §.§ Position embedding in the graph Graph learning entails embedding nodes into lower dimensions. The node embedding is performed by an encoder based on different information about the node and its neighbors. The embedding space is optimized so that the distance between nodes captures the relative position of the original graph. The encoder function f: V Z maps nodes v ∈V to embeddings Z≜{z_v}_v=1^N∈ℝ^d, where d refers to feature size in the embedding space. The most common approaches for embedding include matrix factorization <cit.>, <cit.>, random walk and neural learning methods such as graph neural learning. In matrix factorization, node embedding is constructed based on node similarity matrix. Some of the popular random walk methods include DeepWalk <cit.>, node2vec <cit.>. With capability to learn from node features and inductive learning, GNNs are most commonly used in node embedding. Due to unstructured nature of graph data, the positions of the nodes are not naturally encoded in them. In <cit.>, it is shown that popular graph-based learning methods such as GNN are capable of capturing structural information, but the position of the node is not taken into account. As a result, the nodes with different labels having similar neighborhoods will end up with the same embedding and similar representations. To avoid this issue and make the learning aware of the position of the nodes in the graph, it is useful to have positional embedding. In this work, WSIs are represented with 2D coordinates and the goal is to classify each WSI incorporating the position of each patch in the WSI graph. 
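Referring back to the graph-construction step above, a minimal sketch of tiling a slide into 256-by-256 patches and connecting patch centres with a k-nearest-neighbour edge list might look as follows; exact nearest neighbours are used here instead of the fast approximate k-NN cited in the text, and k = 8 is an illustrative choice, not a value taken from the paper. Node features x_v would then come from the contrastively pretrained encoder applied to each patch.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def patch_centres(wsi_width, wsi_height, patch=256):
    # Centre coordinates of non-overlapping patch x patch tiles covering the slide.
    xs = np.arange(patch // 2, wsi_width, patch)
    ys = np.arange(patch // 2, wsi_height, patch)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)

def build_patch_graph(coords, k=8):
    # k-NN edge list over patch centres; the first neighbour of each point is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nn.kneighbors(coords)
    src = np.repeat(np.arange(len(coords)), k)
    dst = idx[:, 1:].ravel()
    return np.stack([src, dst])  # shape (2, N * k), edge_index-style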
The coordinates of each patch is the midpoint of each patch in the WSI. The proposed algorithm to capture the position of the nodes is based on spline CNN <cit.>, <cit.>. Spline CNN uses x,y coordinates to learn the geometry of the graph. We deploy B-splines <cit.> as presented in <cit.> to learn nodes positions. In Spline CNN, with set of neighboring nodes defined as N_i, the feature vector of node i, denoted by f_i is given by <cit.>, <cit.>: f_i = 1/|N_i|∑_j∈ N_i f_j· g(u(i,j)) where g(u(i,j)) is a kernel function based on the coordinates of nodes i and j. Suppose the coordinate of node i is given by (x_i,y_i) and the coordinate of node j is (x_j,y_j), then u(i,j) ≜ (|x_j-x_i|,|y_j-y_i|) ≜ (x,y). The kernel function is defined as follows: g(u(i,j)) = ∑_p∈𝒫 w_p · N_hori,p(x) · N_vert,p(y) In the above equation, N_vert and N_hori are B-splines bases, w_p is a trainable parameter for each element p in the Cartesian product 𝒫 of N_vert and N_hori. §.§ Graph attention network As defined in section <ref>, a graph can be represented as G=(V,E,X) where V denotes the set of nodes with number of nodes N = |V| , E refers to the set of edges. Then, X ≜{x_1,x_2,,x_N}∈ℝ^d accounts for the nodes feature vectors. Graph learning entails embedding graph nodes to a lower dimensional space. GNN is one of the most popular embedding methods used for graph representation learning where the learning is based on passing and updating messages among the nodes, iteratively. Let us denote the feature vector after message passing as X^' = {x^'_1,x^'_2,,x^'_N} in the next layer. The message passing for each node in GNN <cit.> is based on the neighboring nodes and given by: x_i^' = σ(W_selfx_i+W_neigh∑_j∈𝒩_ix_j+b) where x_i is the feature vector of the node whose message is being updated, and x_j are the neighboring nodes feature vectors. Then, W_self, W_neigh are the trainable weight parameters updated during the message passing from itself and neighboring nodes, respectively and σ is the activation function. Graph convolutional neural network (GCN) <cit.>, as a variant of GNN is very similar to CNN. In GCN message passing aggregates the message from its neighboring nodes features. The message update for the GCN follows: x_i^' = σ(∑_j∈𝒩_iWx_j/√(|𝒩(j)||𝒩(i))) where σ is the activation function and accounts for the non-linearity, and W are the trainable parameters updated during the training of GCN. is The degree of neighboring node j is denoted by |𝒩(j)| and |𝒩(i)| accounts for the degree of the node i for which message passing is being performed. Using GCN, the message passing to a node is achieved via its neighboring nodes and all the neighboring nodes are equally accountable. The graph attention network (GAT) modifies this convolutional operator by assigning attention weight in message passing stage among nodes. The attention mechanism allows the network to learn the specific weights from the specific nodes. This allows to learn the importance of each neighboring nodes in message passing. More specifically, the message passing in GAT <cit.> is given by: x_i^' = σ(∑_j∈𝒩_iα _ijWx_j) where α _ij is the attention weight between nodes j and node i. The attention weights are given by: α_ij = exp(LeakyReLU(a^T[Wx_i||Wx_j]))/∑_k∈𝒩exp(LeakyReLU(a^T[Wx_i||Wx_j])) where || is the concatenation operator and Leaky Rectified Linear Unit (LeakyReLU) was used as activation function. The stabilization of the self-attention mechanism is obtained by adding the multi-head attention. 
This operation is achieved by applying K independent attention-based operators and then concatenating them. Adopting multi-head attention network with K heads, (<ref>) is replaced by: x_i^' = σ(1/K∑_k=1^K∑_j∈𝒩_iα _ijW^kx_j) In this work, two layers of multi-head GAT is considered. Use of attention-based method allows for different weight to nodes in the network. In cancer diagnosis, through a graph-based WSI, neighboring nodes might consist of both cancerous and non-cancerous patches. To this end, assigning different importance weights to different nodes in a neighboring region in message passing is well motivated. §.§ Explainability of the model The application of GNN in cancer classification has the potential to be of significant assistance if the regions responsible for cancer classification can be pinpointed. The physicians and healthcare team need a model to be interpretable, so that the reasoning behind the suggested prediction can be well justified and understood. When it comes to the task of predicting cancer especially using deep learning models, explainability is of the paramount importance. When WSI is examined under the microscope by a pathologist, the pathologist is screening for possible areas inside the tissue that could be to blamed for cancer. The classification of WSI, together with the identification of the regions that causes a particular model for classification in WSI, could be of tremendous assistance to pathologists as well. In this paper, we deploy explainability method implemented in graphs that are mainly tailored to CNN <cit.>, <cit.>. In particular, graph Grad-CAM method for WSI is implemented. The choice of this approach is inspired by  <cit.> as the authors address explainability for molecular graphs. As shown in  (<ref>) in the layer l of GNN, feature k of the node n is given by x_k,n^l. Then, the average pooling of the feature at layer l is given by: e_k = 1/N∑_n=1^Nx_k,n^l Based on this average pooling score e_k, and weights w_k^c for predicting class c from feature k, the class score y_c can be calculated as shown below: y_c = ∑_k w^c_k e_k The following equation for the Grad-CAM gives each feature importance score based on class and layer: α_k^l,c=1/N∑_n=1^N∂ y^c/∂x_k,n^l Then, the heatmap for each node is generated by applying Rectified Linear Units (ReLU) <cit.> functions as: L^c_Grad-CAM[l,n] = ReLU(∑_k α_k^l,cx_k,n^l ) The advantage of using Grad-CAM is that the heatmaps for the nodes can be calculated from any layer of the network. § IMPLEMENTATION DETAILS §.§ Dataset Using two different datasets, we examined how well the proposed position-aware and graph attention-based model works for grading WSIs. To be more specific, we tested the performance of models with samples from both prostate and kidney cancers. After lung cancer, prostate cancer is the type of cancer that is responsible for the second highest number of deaths in men <cit.>. Kidney cancer is a prevalent form of cancer that can impact both sex and ranks in the top 10 cancers that result in mortality <cit.>. The prostate dataset consists of biopsies from the WSIs taken from biopsies from the prostate glands and the WSIs for kidney dataset are obtained from the kidney resections (larger samples). The sample WSIs of each dataset are shown in Fig. <ref>. It can been seen that the structure of a biopsy is very different from a kidney resection as shown in Fig. <ref>, The Cancer Genome Atlas (TCGA) has been used for the kidney cancer dataset <cit.>. 
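Putting the pieces from the preceding subsections together, the sketch below shows one way the position-aware classifier and the graph Grad-CAM heatmap could be wired up with PyTorch Geometric; layer widths, kernel size, and the hook on the second attention layer are assumptions for illustration, and the SplineConv edge attributes are taken to be normalized relative (x, y) offsets between connected patches, since that operator expects pseudo-coordinates in [0, 1].

import torch
import torch.nn.functional as F
from torch_geometric.nn import SplineConv, GATConv, global_mean_pool

class PositionAwareWSIClassifier(torch.nn.Module):
    # Two SplineConv layers for positional encoding, a multi-head and a single-head
    # GAT layer, mean pooling, and a linear head for grading.
    def __init__(self, in_dim, hidden=128, heads=4, n_classes=5):
        super().__init__()
        self.pos1 = SplineConv(in_dim, hidden, dim=2, kernel_size=5)
        self.pos2 = SplineConv(hidden, hidden, dim=2, kernel_size=5)
        self.att1 = GATConv(hidden, hidden, heads=heads, concat=False)
        self.att2 = GATConv(hidden, hidden, heads=1)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, edge_attr, batch):
        x = F.relu(self.pos1(x, edge_index, edge_attr))
        x = F.relu(self.pos2(x, edge_index, edge_attr))
        x = F.relu(self.att1(x, edge_index))
        x = F.relu(self.att2(x, edge_index))
        return self.head(global_mean_pool(x, batch))

def graph_grad_cam(model, x, edge_index, edge_attr, batch, target_class):
    # Node-level Grad-CAM: alpha_k = mean over nodes of d y_c / d x_{k,n},
    # heatmap = ReLU(sum_k alpha_k * x_{k,n}); assumes a single graph in the batch.
    cache = {}

    def hook(_module, _inputs, out):
        out.retain_grad()
        cache["act"] = out

    handle = model.att2.register_forward_hook(hook)
    logits = model(x, edge_index, edge_attr, batch)
    logits[0, target_class].backward()
    handle.remove()

    acts, grads = cache["act"], cache["act"].grad
    alpha = grads.mean(dim=0)
    cam = torch.relu((acts * alpha).sum(dim=1))
    return cam / (cam.max() + 1e-12)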
The TCGA portal provides various kinds of data for cancer research, such as demographics of the patient, WSIs, and biomarkers. The data related to kidney cancer consists of the most prominent type of kidney cancer, renal cell carcinoma, which is addressed in this paper. In kidney cancer, the most prevalent malignancy Renal Cell Carcinoma <cit.> accounts for more than 90% of cases. In our work, three major types of RCC grouped as KIRC (Kidney Renal Clear Cell Carcinoma), KIRP (Kidney Renal Papillay Cell Carcinoma) and KICH (Kidney Chromophobe) are considered. The WSI dataset related to prostate cancer has been accessed from  <cit.>. In prostate cancer, we look into the problem of grading the prostate cancer into different ISUP grades. ISUP grades consist of 5 grades of Gleason Grading (GG) <cit.> for the prostate cancer. In Table <ref> the number of samples of each gleason grading for prostate cancer is listed. Similarly, Table <ref> shows the number samples of each type of cancer in kidney dataset. §.§ Training settings Initially, the patches were trained for feature extraction for graph learning. To handle unlabeled patches, contrastive learning as a self-supervised method was used to train the patches. Then, MOCOv3 <cit.> with ResNet50 <cit.> as backbone was deployed to train the patches for feature extraction. The training was performed for 75 epochs with initial learning rate of 1×10^-3 and the weight decay of 1×10^-4. Cosine scheduler was chosen to update the learning rate during the training. For graph learning training, the network consists of two layers of Spline CNN followed by two layers of graph attention. This is then forwarded to a MLP for cancer grading. The training was done for 20 epochs with initial learning rate of 1×10^-3 and weight decay of 5×10^-4. The AdamW <cit.> optimizer was used for training. The batch size for each epoch was chosen as 2. We use Kappa score <cit.> to measure the performance of the model. § RESULTS The proposed approach was evaluated using the two different cancer datasets. The performance of the proposed model is compared with self-supervised GCN-based model <cit.> and MIL-based leading methods <cit.>. The position-aware method that was proposed for grading and classifying cancer was initially evaluated using prostate cancer dataset. To this end, a classification of cancer based on one of the the five ISUP grades can be obtained. On this dataset, the proposed approach yields a Kappa score of 0.912 as its outcome. Evaluating the cancer diagnosis and grading with a GCN-based and a MIL-based models for comparison, the Kappa scores are 0.899 and 0.87, respectively. The proposed method was also applied to a dataset consisting of kidney cancer data. The obtained kappa scores are very comparable. The proposed model obtaines a Kappa score of 0.941 when it was applied to the kidney dataset. In comparison, the Kappa score for the GCN-based and MIL-based models are 0.939, and 0.868, respectively. As can be seen in Table <ref>, the addition of patch position into WSI graph learning results in a significant improvement in cancer grading and classification. In general, these results point to the fact that the proposed position-aware method is a promising approach for the diagnosis and treatment of cancer. This was verified with different types of WSI samples including both prostate biopsies and Kidney resections to classify tumors affecting the prostate and the kidney. 
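The stated graph-training settings and the evaluation metric can be reproduced roughly as below; the quadratic weighting of the kappa score and the use of a cosine schedule for the graph stage (the text only mentions it for patch pretraining) are assumptions, not details given in the paper.

import torch
from sklearn.metrics import cohen_kappa_score

def make_graph_optimizer(model, lr=1e-3, weight_decay=5e-4, epochs=20):
    # AdamW with the reported learning rate and weight decay; the cosine schedule
    # mirrors the patch-pretraining choice and is an assumption for this stage.
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    return opt, sched

def grading_kappa(y_true, y_pred):
    # Agreement between predicted and reference grades; quadratic weights assumed.
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")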
§.§ Explainability visualized to the WSI To better demonstrate the power of the proposed algorithm, it is crucial to investigate the explainabilty of the model. In other words, the expectation is that the algorithm can identify the regions and areas in the WSI that are responsible for the cancer diagnosis. This result verifies that the proposed model is interpretable. To this end, the heatmaps for each node is generated after the activation layer and is superimposed with the WSI. Then, the generated heatmap is compared with the original annotation provided in the dataset. The heatmaps are generated for two cases of prostate cancer, one with an ISUP grade of 4 and the other with an ISUP grade of 5. In Fig. <ref> the top image shows the heatmap for prostate cancer with an ISUP grade of 4, while the bottom image shows the heatmap for the prostate cancer with an ISUP grade of 4. The top left image (i) shows the original WSI for ISUB grade 4. As we can see in the image the areas where the cancer is spread the tissue is not smooth and the growth pattern is irregular. This is further confirmed in the top right image in (iii) which shows the annotation provided for the WSI and clearly depicts the cancerous regions in the WSI. The image on the top middle (ii) is generated using Grad-CAM for ISUB grade 4 WSI and the red regions demonstrate the areas responsible for the cancer diagnosis. The identified regions by Grad-CAM are very aligned with the annotations provided by the pathologist as shown in (iii). Similar trends hold for WSI of ISUB grade 5, as shown in Fig 5, (b). As it can be seen, the annotation matches with the cancerous areas identified by the Grad-CAM algorithm. § DISCUSSION In this work, graph-based leaning incorporating the position of each node embedded in the WSI is developed and used to classify the prostate and kidney cancer. WSIs are large dimensional, multi-resolution digital slides that store microscopic images of tissues to be used for cancer diagnois. However, due to their size and complexity, it can be difficult to extract useful information from them using traditional methods such as CNN. To address this challenge, WSIs are broken into smaller patches. Then, the patches are used to form a graph structure. When broken into smaller patches, the position of each patch in the WSI is not encoded in the graph data. This paper takes into account these position embeddings in the training. This is accomplished with the use of a graph kernel-based technique known as Spline CNN, which incorporates the location of each patch into the graph itself. An attention-based mechanism is employed for message passing between the nodes in the graph once the position of each patch has been incorporated in the network. Since the attention layer gives varying attention weights to the nodes, the model is able to focus in on specific regions of the image and concentrate on some of them to a greater extent than others. This attention-based strategy is used in combination with graph Grad-CAM to generate heatmaps to identify the regions responsible for the diagnosis. These heatmaps provide visual explanations of the model's prediction and can be compared with the pathologists annotation to better understand and judge the performance of the model. In addition, one can use these heatmaps as initial screening of the pathologist to detect cancerous regions before final cancer grading. 
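To make the superposition of node heatmaps onto the slide concrete, one simple (assumed) implementation paints each node's Grad-CAM score into the grid cell of its patch and then upsamples and alpha-blends the result over a slide thumbnail:

import numpy as np

def node_heatmap_to_grid(cam_scores, patch_coords, wsi_width, wsi_height, patch=256):
    # Low-resolution heatmap with one cell per patch; cam_scores and patch_coords
    # follow the node ordering of the graph.
    heat = np.zeros((wsi_height // patch, wsi_width // patch))
    for score, (x, y) in zip(cam_scores, patch_coords):
        heat[int(y) // patch, int(x) // patch] = float(score)
    return heat  # upsample (nearest/bilinear) and overlay on the WSI thumbnail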
The kappa score, a statistical measure of inter-rater agreement, was used to compare the performance of the position-aware model with that of other approaches. The results of this comparison indicate that the position-embedded method performs better than the other methods on the prostate and kidney cancer datasets. To address the explainability of the approach, this paper tailors the graph Grad-CAM algorithm, originally designed for other graph learning-based applications, to the proposed self-supervised position-aware algorithm for WSIs. The explainability of the model is based on the attention scores at the second activation layer of the model. The Grad-CAM approach takes each node's attention score into account when generating a heatmap for that node, and the score associated with each node is applied to the patch corresponding to that node in the WSI. The heatmaps of all nodes are superimposed on the complete WSI to generate the slide-level heatmap. By visualizing the heatmaps for each node, it is possible to understand which regions of the WSI are most important to the GNN's prediction, and how these regions contribute to the overall prediction made by the model. § CONCLUSION A self-supervised, position-aware, attention-based model for cancer grading and classification from WSIs has been developed in this paper. The position embedding of the nodes in the WSI was accomplished using spline CNN. To evaluate the effectiveness of the proposed approach, WSI datasets for prostate and kidney cancers were used. The proposed method surpasses established methodologies in cancer diagnosis, such as GCN-based and MIL-based models, using the kappa score as the evaluation metric. In addition to cancer classification and grading, it was shown that the proposed approach is explainable, and that the diagnosis is supported by the pathologist-annotated regions in the WSI.
http://arxiv.org/abs/2306.03151v1
20230605180457
DISCount: Counting in Large Image Collections with Detector-Based Importance Sampling
[ "Gustavo Perez", "Subhransu Maji", "Daniel Sheldon" ]
cs.CV
[ "cs.CV", "cs.LG" ]
[ [ July 31, 2023 ================= Many modern applications use computer vision to detect and count objects in massive image collections. However, when the detection task is very difficult or in the presence of domain shifts, the counts may be inaccurate even with significant investments in training data and model development. We propose – a detector-based importance sampling framework for counting in large image collections that integrates an imperfect detector with human-in-the-loop screening to produce unbiased estimates of counts. We propose techniques for solving counting problems over multiple spatial or temporal regions using a small number of screened samples and estimate confidence intervals. This enables end-users to stop screening when estimates are sufficiently accurate, which is often the goal in a scientific study. On the technical side we develop variance reduction techniques based on control variates and prove the (conditional) unbiasedness of the estimators. leads to a 9-12× reduction in the labeling costs over naive screening for tasks we consider, such as counting birds in radar imagery or estimating damaged buildings in satellite imagery, and also surpasses alternative covariate-based screening approaches in efficiency. ^*equal advising contribution § INTRODUCTION Many modern applications use computer vision to detect and count objects in massive image collections. For example, we are interested in applications that involve counting bird roosts in radar images and damaged buildings in satellite images. The image collections are too massive for humans to solve these tasks in the available time. Therefore, a common approach is to train a computer vision detection model and run it exhaustively on the images. The task is interesting because the goal is not to generalize, but to achieve the scientific counting goal with sufficient accuracy for a fixed image collection. The best use of human effort is unclear: it could be used for model development, labeling training data, or even directly solving the counting task! A particular challenge occurs when the detection task is very difficult, so the accuracy of counts made on the entire collection is questionable even with huge investments in training data and model development. Some works resort to human screening of the detector outputs <cit.>, which saves time compared to manual counting but is still very labor intensive. These considerations motivate statistical approaches to counting. Instead of screening the detector outputs for all images, a human can “spot-check” some images to estimate accuracy, and, more importantly, use statistical techniques to obtain unbiased estimates of counts across unscreened images. In a related context, <cit.> proposed IS-count, which uses importance sampling to estimate total counts across a collection when (satellite) images are expensive to obtain by using spatial covariates to sample a subset of images. We contribute counting methods for large image collections that build on IS-count in several ways. First, we work in a different model where images are freely available and it is possible to train a detector to run on all images, but the detector is not reliable enough for the final counting task, or its reliability is unknown. We contribute human-in-the-loop methods for count estimation using the detector to construct a proposal distribution, as seen in Fig. <ref>. 
Second, we consider solving multiple counting problems—for example, over disjoint or overlapping spatial or temporal regions—simultaneously, which is very common in practice. We contribute a novel sampling approach to obtain simultaneous estimates, prove their (conditional) unbiasedness, and show that the approach allocates samples to regions in a way that approximates the optimal allocation for minimizing variance. Third, we design confidence intervals, which are important practically to know how much human effort is needed. Fourth, we use variance reduction techniques based on control variates. Our method produces unbiased estimates and confidence intervals with reduced error compared to covariate-based methods. In addition, the labeling effort is further reduced with as we only have to verify detector predictions instead of producing annotations from scratch. On our tasks, leads to a 9-12× reduction in the labeling costs over naive screening and 6-8× reduction over IS-Count. Finally, we show that solving multiple counting problems jointly can be done more efficiently than solving them separately, demonstrating a more efficient use of samples. § RELATED WORK Computer vision techniques have been deployed for counting in numerous applications where exhaustive human-labeling is expensive due to the sheer volume of imagery involved. This includes areas such as detecting animals in camera trap imagery <cit.>, counting buildings, cars, and other structures in satellite images <cit.>, species monitoring in citizen science platforms <cit.>, monitoring traffic in videos <cit.>, as well as various medicine, science and engineering applications. For many applications the cost associated with training an accurate model is considerably less than that of meticulously labeling the entire dataset. Even with a less accurate model, human-in-the-loop recognition strategies have been proposed to reduce annotation costs by integrating human validation with noisy predictions <cit.>. Our approach is related to work in active learning <cit.> and semi-supervised learning <cit.>, where the goal is to reduce human labeling effort to learn models that generalize on i.i.d. held out data. While these approaches reduce the cost of labels on training data, they often rely on large labeled test sets to estimate the performance of the model, which can be impractical. Active testing <cit.> aims to reduce the cost of model evaluation by providing a statistical estimate of the performance using a small number of labeled examples. Unlike traditional learning where the goal is performance on held out data, the goal of active testing is to estimate performance on a fixed dataset. Similarly, our goal is to estimate the counts on a fixed dataset, but different from active testing we are interested in estimates of the true counts and not the model’s performance. In particular, we want unbiased estimates of counts even when the detector is unreliable. Importantly, since generalization is not the goal, overfitting to the dataset statistics may lead to more accurate estimates. Statistical estimation has been widely used to conduct surveys (e.g., estimating population demographics, polling, etc.) <cit.>. In IS-Count <cit.>, the authors propose an importance sampling approach to estimate counts in large image collections using humans-in-the-loop. 
They showed that one can count the number of buildings at the continental scale by sampling a small number of regions based on covariates such as population density and annotating those regions, thereby reducing the cost of obtaining high-resolution satellite imagery and human labels. However, for many applications the dataset is readily available, and running the detector is cost effective, but human screening is expensive. To address this, we propose using the detector to guide the screening process and demonstrate that this significantly reduces error rates in count estimation given a fixed amount of human effort. Furthermore, for some applications, screening the outputs of a detector can be significantly faster than to annotate from scratch, leading to additional savings. An interesting question is what is the best way to utilize human screening effort to count on a dataset. For example, labels might be used to improve the detector, measure performance on the deployed dataset, or, as is the case in our work, to derive a statistical estimate of the counts. Our work is motivated by problems where improving the detector might require significant effort, but counts from the detector are correlated with true counts and can be used as a proposal distribution for sampling. § : DETECTOR-BASED IS-COUNT Consider a counting problem in a discrete domain Ω (usually spatiotemporal) with elements s ∈Ω that represent a single unit such as an image, grid cell, or day of year. For each s there is a ground truth “count” f(s) ≥ 0, which can be any non-negative measurement, such as the number or total size of all objects in an image. A human can label the underlying images for any s to obtain f(s). Define F(S) = ∑_s ∈ S f(s) to be the cumulative count for a region S. We wish to estimate the total counts F(S_1), …, F(S_k) for k different subsets S_1, …, S_k ⊆Ω, or regions, while using human effort as efficiently as possible. The regions represent different geographic divisions or time ranges and may overlap — for example, in the roost detection problem we want to estimate cumulative counts of birds for each day of the year, while disaster-relief planners want to estimate building damage across different geographical units such as towns, counties, and states. Assume without loss of generality that ⋃_i=1^k S_i = Ω, otherwise the domain can be restricted so this is true. We will next present our methods; derivations and proofs of all results are found in the appendix. §.§ Single-Region Estimators Consider first the problem of estimating the total count F(S) for a single region S. <cit.> studied this problem in the context of satellite imagery, with the goal of minimizing the cost of purchasing satellite images to obtain an accurate estimate. Simple Monte Carlo <cit.> This is a baseline based on simple Monte Carlo sampling. Write F(S) = ∑_s ∈ Sf(s) = |S| ·_s ∼Unif(S)[f(s)]. Then the following estimator, which draws n random samples uniformly in S to estimate the total, is unbiased: F̂_(S) = |S| ·1/n∑_i=1^nf(s_i), s_i ∼(S). IS-Count <cit.> then proposed an estimator based on importance sampling <cit.>. Instead of sampling uniformly, the method samples from a proposal distribution q that is cheap to compute for all s ∈ S. For example, to count buildings in US satellite imagery, the proposal distribution could use maps of artificial light intensity, which are freely available. The importance sampling estimator is: F̂_(S) = 1/n∑_i=1^n f(s_i)/q(s_i), s_i ∼ q. 
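The two baseline estimators above can be written down directly; in the sketch below the domain is a list of units, f is a callable standing in for the human labeling of a sampled unit, and q is any proposal probability vector over the units (for example, the covariate-based proposal of IS-Count). Function names and sampling with replacement are illustrative choices.

import numpy as np

def mc_estimate(f, units, n, rng=None):
    # Simple Monte Carlo: |S| times the mean labeled count over n uniform samples.
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(units), size=n)
    return len(units) * np.mean([f(units[i]) for i in idx])

def is_estimate(f, units, q, n, rng=None):
    # Importance sampling: average of f(s_i) / q(s_i) with s_i drawn from proposal q.
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(q, dtype=float)
    q = q / q.sum()
    idx = rng.choice(len(units), size=n, replace=True, p=q)
    return float(np.mean([f(units[i]) / q[i] for i in idx]))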
IS-count assumes images are costly to obtain, which motivates using external covariates for the proposal distribution. However, in many scientific tasks, the images are readily available, and the key cost is that of human supervision. In this case it is possible to train a detection model and run it on all images to produce an approximate count g(s) for each s. Define G(S) = ∑_s ∈ Sg(s) to be the approximate detector-based count for region S. We propose the detector-based IS-count ("") estimator, which uses the proposal distribution proportional to g on region S, i.e., with density g̅_S(s) = g(s)[s ∈ S]/G(S). The importance-sampling estimator then specializes to: F̂_(S) = G(S) ·1/n∑_i=1^n f(s_i)/g(s_i), s_i ∼g̅_S. To interpret , let w_i = f(s_i)/g(s_i) be the ratio of the true count to the detector-based count for the ith sample s_i or (importance) weight. reweights the detector-based total count G(S) by the average weight w̅ = 1/n∑_i=1^n w_i, which can be viewed as a correction factor based on the tendency to over- or under-count, on average, across all of S. is unbiased as long as g̅(s) > 0 for all s ∈ S such that f(s) > 0. Henceforth, we assume detector counts are pre-processed if needed so that g(s) > 0 for all relevant units, for example, by adding a small amount to each count. §.§ k- We now return to the multiple region counting problem. A naive approach would be to run separately for each region. However, this is suboptimal. First, it allocates samples equally to each region, regardless of their size or predicted count. Intuitively, we want to allocate more effort to regions with higher predicted counts. Second, if regions overlap it is wasteful to repeatedly draw samples from each one to solve the estimation problems separately. k- We propose estimators based on n samples drawn from all of Ω with probability proportional to g. Then, we can estimate F(S) for any region using only the samples from S. Specifically, the k-estimator is F̂_(S) = G(S) ·w̅(S) n(S) > 0 0 n(S) = 0 , s_i ∼g̅_Ω, where n(S) = |{i: s_i ∈ S}| is the number of samples in region S and w̅(S) = 1/n(S)∑_i: s_i ∈ S w_i is the average importance weight for region S. claimClaim The k-estimator F̂_(S) is conditionally unbiased given at least one sample in region S. That is, [F̂_(S) | n(S) > 0] = F(S). The unconditional bias can also be analyzed (see Appendix). Overall, bias has negligible practical impact. It occurs only when the sample size n(S) is zero, which is an event that is both observable and has probability (1-p(S))^n that decays exponentially in n, where p(S)=G(S)/G(Ω). In terms of variance, k-behaves similarly to run on each region S with sample size equal to [n(S)] = n p(S). To first order, both approaches have variance G(S)^2 ·σ^2(S)/n p(S) where σ^2(S) is the importance-weight variance. In the case of disjoint regions, running on each region is the same as stratified importance sampling across the regions, and the allocation of n p(S) samples to region S is optimal in the following sense: Suppose S_1, …, S_k partition Ω and the importance weight variance σ^2(S_i) = σ^2 is constant across regions. Assume is run on each region S_i with n_i samples. Given a total budget of n samples, the sample sizes that minimize ∑_i=1^k (F̂_(S_i)) are given by n_i = n p(S_i) = n G(S_i)/G(Ω). The analysis uses reasoning similar to the Neyman allocation for stratified sampling <cit.>, and shows that k-approximates the optimal allocation of samples to (disjoint) regions under the stated assumptions. 
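A sketch of the k-region estimator (referred to here as k-DISCount, after the paper's title, since the method name appears stripped from this text) is given below, together with the normal-approximation confidence interval discussed in the following subsection; the detector counts g are assumed to be pre-processed to be strictly positive, and z = 1.96 corresponds to a 95% interval. The control-variate variant would only change the weights to (f(s) - h(s))/g(s) and add the known total H(S) to each estimate.

import numpy as np

def k_discount(f, g, regions, n, rng=None, z=1.96):
    # Draw n units from all of Omega with probability proportional to detector counts g,
    # screen them once (f), and reuse the importance weights w = f(s)/g(s) for every region.
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(g, dtype=float)
    p = g / g.sum()
    samples = rng.choice(len(g), size=n, replace=True, p=p)
    w = np.array([f(s) / g[s] for s in samples])
    results = []
    for S in regions:                        # each region is an array of unit indices
        in_S = np.isin(samples, S)
        G_S = g[S].sum()
        if not in_S.any():
            results.append((0.0, (0.0, np.inf)))   # no sample landed in S
            continue
        w_S = w[in_S]
        est = G_S * w_S.mean()
        sigma = np.sqrt(np.mean((w_S - w_S.mean()) ** 2))   # sigma_hat(S)
        half = z * G_S * sigma / np.sqrt(in_S.sum())
        results.append((est, (est - half, est + half)))
    return results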
One key difference is that k-draws samples from all of Ω and then assigns them to regions, which is called “post-stratification” in the sampling literature <cit.>. An exact variance analysis in the Appendix reveals that, if the expected sample size np(S) for a region is very small, k-may have up to 30% “excess” variance compared to stratification due to the random sample size, but the excess variance disappears quickly and both approaches have the same asymptotic variance. A second key difference to stratification is that regions can overlap; k-'s approach of sampling from all of Ω and then assigning samples to regions extends cleanly to this setting. §.§ Control Variates Control variates are functions h(s) whose integrals H(S)= ∑_s∈ S h(s) are known and can be combined with importance sampling using the following estimator: F̂_ cv(S) = G(S) ·w̅_h(S) + H(S) n(S) > 0 0 n(S) = 0 , s_i ∼g̅_Ω, where w̅_h(S) = 1/n(S)∑_i: s_i ∈ S w_h,i and w_h,i = (f(s_i) - h(s_i))/g(s_i). It is clear that F̂_ cv(S) has the same expectation as F̂_(S), but F̂_ cv(S) might have a lower variance under certain conditions (if f and h are sufficiently correlated <cit.>). For bird counting, estimated counts from previous years could be used as control variates as migration is periodic to improve count estimates (see experiments in  <ref> for details). §.§ Confidence intervals Confidence intervals for k-can be constructed in a way similar to standard importance sampling. For a region S, first estimate the importance weight variance σ^2(S) as: σ̂^2(S) = 1/n(S)∑_i: s_i ∈ S(f(s_i)/g(s_i)-F̂_(S)/G(S))^2. An approximate 1-α confidence interval is then given by F̂_(S) ± z_α/2· G(S)·σ̂(S)/√(n(S)), where z_γ is the 1-γ quantile of the standard normal distribution, e.g., z_0.025 = 1.96 for a 95% confidence interval. The theoretical justification is subtle due to scaling by the random sample size n(S). It is based on the following asymptotic result, proved in the Appendix. The k-estimator with scaling factor G(S) σ̂(S)/√(n(S)) is asymptotically normal, that is, the distribution of F̂_(S) - F(S)/G(S) ·σ̂(S)/√(n(S)) converges to 𝒩(0, 1) as n →∞. In preliminary experiments we observed that for small expected sample sizes the importance weight variance σ^2(S) can be underestimated leading to intervals that are too small — as an alternative, we propose a practical heuristic for smaller sample sizes where σ̂^2(Ω) is used instead of σ̂^2(S); that is, all samples are used to estimate variability of importance weights for each region S. § EXPERIMENTAL SETUP In this section we describe the counting tasks and detection models ( <ref>–<ref>) and the evaluation metrics ( <ref>) we will use to evaluate different counting methods. We focus on two applications: counting roosting birds in weather radar images and counting damaged buildings in satellite images of a region struck by a natural disaster. §.§ Counting Roosting Birds from Weather Radar Many species of birds and bats congregate in large numbers at nighttime or daytime roosting locations. Their departures from these “roosts” are often visible in weather radar, from which it's possible to estimate their numbers <cit.>. The US “NEXRAD” weather radar network <cit.> has collected data for 30 years from 143+ stations and provides an unprecedented opportunity to study long-term and wide-scale biological phenomenon such as roosts <cit.>. However, the sheer volume of radar scans (>250M) prevents manual analysis and motivates computer vision approaches <cit.>. 
Unfortunately, the best computer vision models <cit.> for detecting roosts have average precision only around 50% and are not accurate enough for fully automated scientific analysis, despite using state-of-the-art methods such as Faster R-CNNs <cit.> and training on thousands of human annotations; the complexity of the task suggests that substantial labeling and model development efforts would be needed to improve accuracy, and may be impractical. [Figure: Count estimates with confidence intervals for two station-years (KGRB 2020 and KBUF 2010) using different numbers of samples.] Previous work <cit.> used a roost detector combined with manual screening of the detections to analyze more than 600,000 radar scans spanning a dozen stations in the Great Lakes region of the US to reveal patterns of bird migration over two decades. The vetting of nearly 64,000 detections was orders of magnitude faster than manual labeling, yet still required a substantial 184 hours of manual effort. Scaling to the entire US network would require at least an order of magnitude more effort, thus motivating a statistical approach. We use the exhaustively screened detections from the Great Lakes analysis in <cit.> to systematically analyze the efficiency of sampling-based counting. The data is organized into domains Ω (one per station-year) covering 12 stations and 20 years (see Fig. <ref> in Appendix <ref>); thus the domains are disjoint and treated separately. Counts are collected for each day s by running the detector on all radar scans for that day to detect and track roost signatures and then mapping detections to bird counts using the measured radar “reflectivity” within the tracks. For the approximate count g(s) we use the automatically detected tracks, while for the true count f(s) we use the manually screened and corrected tracks. For a single domain, i.e., each station-year, we divide a complete roosting season into temporal regions in three different scenarios: (1) estimating bird counts up to each day in the roosting season (i.e., regions are nested prefixes of days in the entire season), (2) estimating bird counts up to the end of each quarter of the season (i.e., regions are nested prefixes of quarters), and (3) estimating each quarter's count (each region is one quarter). We measure error using the fully-screened data and average errors across all domains and regions. Fig. <ref> shows the counts and confidence intervals estimated using k-DISCount for the first scenario on four station-years.
While the dataset contains four levels of damage (0: no damage, 1: minor damage, 2: major damage, 3: destroyed), in this work we combine all damage levels (classes 1–3) into a single “damage” class. We consider the Palu Tsunami from 2018; the data consists of 113 high-resolution satellite images labeled with 31,394 buildings and their damage levels. We run the model on each tile s to estimate the number of damaged buildings g(s), while the ground-truth number of damaged buildings is used as f(s). Our goal is to estimate the cumulative damaged-building count in sub-regions expanding from the area with the most damaged buildings, as shown in Fig. <ref> in Appendix <ref>. To define the sub-regions, we sort all m images by their distance from the epicenter (defined as the image tile with the most damaged buildings) and then divide them into chunks or “annuli” A_1, …, A_7 of size m/7. The task is to estimate the cumulative counts S_j = ⋃_i=1^j A_i of the first j chunks for j from 1 to 7. §.§ Evaluation We measure the fractional error between the true and the estimated counts, averaged over all regions S_1, …, S_k⊆Ω in a domain, as Err(Ω) = 1/k ∑_i=1^k |F(S_i) - F̂(S_i)| / F(Ω). For the bird counting task, for any given definition of regions within one station-year Ω (i.e., cumulative days or quarters as defined in  <ref>), we report the error averaged across all station-years corresponding to 12 stations and ≈ 20 years. For the damaged-building counting problem there is only a single domain corresponding to the Palu Tsunami region. In addition, we calculate the average confidence interval width normalized by F(Ω). We run 1000 trials and plot average metrics ± 1.96 × std. error over the trials. We also evaluate confidence interval coverage, i.e., the fraction of confidence intervals that contain the true count over all domains, regions, and trials. § RESULTS In this section, we present results comparing detector-based to covariate-based sampling, show the reductions in labeling effort, demonstrate the advantages of estimating multiple counts jointly, and finally report confidence interval and control variate results. Detector-based sampling reduces error. We first compare DISCount (detector-based sampling) to IS-Count and simple Monte Carlo sampling for estimating F(Ω), that is, the total count of birds in a complete roosting season for a given station-year, or of damaged buildings in the entire disaster region. Fig. <ref> shows the error rate as a function of the number of labeled samples (i.e., the number of distinct s_i sampled, since each s is labeled at most once). In the buildings application, a sample refers to an image tile of size 1024 × 1024 pixels, while for the birds a sample refers to a single day. Using the detector directly without any screening results in high error rates — roughly 136% and 149% for estimating the total count in the damaged-building and bird-counting tasks, respectively. <cit.> show the advantages of using importance sampling with screening to produce count estimates with base covariates as opposed to simple Monte Carlo sampling (MC vs. IS-Count). For the bird counting task, we construct a non-detector covariate g_ base by fitting a spline to f(s) on 10% of the days from an arbitrarily selected station-year pair (station KBUF in 2001). For the damaged-building counting task, the covariate g_ base is the true count of all buildings (independent of damage) obtained from the labels provided with the xBD dataset.
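As a concrete illustration, the evaluation metrics just described amount to the following small sketch; the function names are ours and do not come from the authors' implementation.

```python
import numpy as np

def fractional_error(F_true, F_hat, F_omega):
    """Err(Omega): mean absolute error over the regions of a domain, normalized by F(Omega)."""
    F_true, F_hat = np.asarray(F_true, float), np.asarray(F_hat, float)
    return np.mean(np.abs(F_true - F_hat)) / F_omega

def ci_coverage(F_true, ci_low, ci_high):
    """Fraction of confidence intervals that contain the true count."""
    F_true = np.asarray(F_true, float)
    return np.mean((np.asarray(ci_low) <= F_true) & (F_true <= np.asarray(ci_high)))
```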
Covariate-based sampling (IS-Count) leads to significant savings over simple Monte Carlo sampling (MC), but DISCount provides further improvements. In particular, reaching an error rate of 20% with DISCount requires ≈ 1.6× fewer samples than IS-Count and ≈ 3× fewer samples than MC for both counting problems. Screening leads to a further reduction in labeling effort. DISCount alleviates the need for users to annotate an image from scratch, such as identifying an object and drawing a bounding box around it. Instead, users only need to verify the detector's output, which tends to be a quicker process. In a study by <cit.> on the ImageNet dataset <cit.>, the median time to draw a bounding box was found to be 25.5 seconds, whereas verification took only 9.0 seconds (this matches the screening time of ≈10 s per bounding box in <cit.>). The right side of Fig. <ref> presents the earlier plots with the x-axis scaled by labeling effort, computed as 100· c· n/|Ω|, where n denotes the number of screened samples and c ∈ [0,1] represents the fraction of time relative to labeling from scratch. For instance, the labeling effort is 100% when all elements must be labeled from scratch (c=1 and n=|Ω|). For DISCount, we estimate the screening cost as c = 9.0/(25.5 + 9.0) = 0.26, since annotating from scratch requires both drawing and verification, while screening requires only verification. To achieve the same 20% error rate, DISCount requires 6× less effort than IS-Count and 9× less effort than MC for the bird counting task, and 8× less effort than IS-Count and 12× less effort than MC for building counting. Multiple counts can be estimated efficiently (k-DISCount). To solve multiple counting problems, we compared k-DISCount to applying DISCount separately to each region. For bird counting, the task was to estimate four quarterly counts (cumulative or individual) as described in  <ref>. For k-DISCount, we sampled n=40 days from the complete season to estimate the counts simultaneously. For DISCount, we solved each of the four problems separately using n/4 = 10 samples per region, for the same total number of samples. For building damage counting, the task was to estimate seven cumulative counts as described in  <ref>. For k-DISCount, we used n=70 images sampled from the entire domain, while for DISCount we used n/7 = 10 sampled images per region. Fig. <ref> shows that solving multiple counting problems jointly (k-DISCount) is better than solving them separately (DISCount). For the cumulative tasks, k-DISCount makes much more effective use of samples from overlapping regions. For single-quarter bird counts, k-DISCount has slightly higher error in Q1 and Q4 and lower errors in Q2 and Q3. This can be understood in terms of sample allocation: k-DISCount allocates samples in proportion to predicted counts, which provides more samples and better accuracy in Q2–Q3, when many more roosts appear, and approximates the optimal allocation of Claim <ref>. DISCount allocates samples equally, so it has slightly lower error for the smaller Q1 and Q4 counts. In contrast, for building counting, k-DISCount has lower error even for the smallest region R1, since this region has the most damaged buildings and thus receives more samples than under DISCount. Fig. <ref> (left) shows that k-DISCount outperforms simple Monte Carlo (adapted to multiple regions similarly to k-DISCount) for estimating cumulative daily bird counts as in Fig. <ref>. Confidence intervals. We measure the width and coverage of the estimated confidence intervals (CIs) per number of samples for cumulative daily bird counting; see examples in Fig. <ref>. We compare the CIs of k-DISCount, k-DISCount-cv (control variates), k-DISCount-cv-σ(Ω) (using all samples to estimate the variance), and simple Monte Carlo sampling in Fig. <ref>.
When using control variates, the error rate and the CI width are slightly reduced while the coverage stays the same. CI coverage is lower than the nominal coverage (95%) for all methods, but it increases with sample size and is substantially improved by k-DISCount-cv-σ(Ω), which achieves up to ≈ 80% coverage. Importance weight distributions can be heavily right-skewed and the variance easily underestimated <cit.>. DISCount improves over a calibration baseline. We implement a calibration baseline where the counts are estimated as F̂_ cal(S) = ∑_s ∈ Sϕ̂(g(s)), with an isotonic regression model ϕ̂ between predicted and true counts trained for each station using 15 uniformly selected samples from one year of that station. Results are shown as the straight line in Fig. <ref> (left). DISCount outperforms calibration with fewer than 10 samples per station, suggesting the difficulty of generalizing across years with a simple calibration approach. Control variates (k-DISCount-cv). We perform experiments adding control variates to k-DISCount in the roosting-bird counting problem. We use the calibrated detector counts ϕ̂(g(s)) defined above as the control variate for each station-year. Fig. <ref> shows that control variates reduce the confidence interval width (middle: k-DISCount vs. k-DISCount-cv) without hurting coverage (right). In addition, the error of the estimate is reduced slightly, as shown in Fig. <ref> (left). Note that this is achieved with only a marginal increase in labeling effort. § DISCUSSION AND CONCLUSION We contribute methods for counting in large image collections with a detection model. When the task is complex and the detector is imperfect, allocating human effort to estimating the scientific result directly might be more efficient than improving the detector; for instance, performance gains from adding more training data may be marginal for a mature model. Our proposed solution produces accurate and unbiased estimates with a significant reduction in labeling costs compared to naive and covariate-based screening approaches. We demonstrate this on two real-world open problems where data screening is still necessary despite large investments in model development. Our approach is limited by the availability of a good detector, and confidence interval coverage is slightly low; possible improvements are to use bootstrapping or corrections based on importance-sampling diagnostics <cit.>. § ACKNOWLEDGEMENTS We thank Wenlong Zhao for the deployment of the roost detector, Maria Belotti, Yuting Deng, and our Colorado State University AeroEco Lab collaborators for providing the screened data of the Great Lakes radar stations, and Yunfei Luo for facilitating the building detections on the Palu Tsunami region. This work was supported by the National Science Foundation award #2017756. § DERIVATIONS §.§ IS-Count Take p(s) = 1/|S| and f̃(s) = |S| f(s); we want 𝔼_p[f̃(s)] = ∑_s ∈ S (1/|S|) · |S| · f(s) = F(S).
Importance sampling with proposal q gives F(S) = _p[f̃(s)] = _q[p(s)/q(s)f̃(s) ] = _q[1/|S|/q(s) |S| f(s) ] = _q[f(s)/q(s)] §.§ DISCount Take q = g̅_S in IS-Count, then F(S) = _q[f(s)/q(s)] = _g̅_S[ f(s)/g(s)/G(S)] = G(S) ·_g̅_S[ f(s)/g(s)] §.§ k-DISCount For any m > 0 we have [F̂_(S) | n(S) = m] = [ G(S) ·1/m∑_i=1^n w_i·[s_i ∈ S] | n(S) = m ] = G(S) ·1/m∑_i=1^n [ w_i ·[s_i ∈ S] | n(S) = m] = G(S) ·1/m∑_i=1^n [s_i ∈ S | n(S) = m] ·[w_i | s_i ∈ S, n(S) = m ] = G(S) ·1/m∑_i=1^n [s_i ∈ S | n(S) = m] ·[w_i | s_i ∈ S ] = G(S) ·1/m∑_i=1^n m/n·[w_i | s_i ∈ S ] = G(S) ·1/m∑_i=1^n m/n·F(S)/G(S) = F(S) In the third line, we used the fact that [h(X)·[X ∈ A]] = [x ∈ A] ·[h(X) | X ∈ A] for any random variable X and event A (see Lemma <ref> below). In the fourth line we used the fact that s_i is conditionally independent of n(S) given s_i ∈ S, since n(S) = [s_i ∈ S] + ∑_j ≠ i[s_j ∈ S] and the latter sum is independent of s_i. In the fifth line we used the fact that [s_i ∈ S | n(S) = m] = m/n because n(S) = ∑_j=1^n [s_j ∈ S] and the terms in the sum are exchangeable. In the sixth line we computed the conditional expectation as follows using the fact that the conditional density of s_i given s_i ∈ S is equal to g(s_i)/G(S): [w_i | s_i ∈ S ] = [f(s_i)/g(s_i) | s_i ∈ S] = ∑_s ∈ Sg(s_i)/G(S)·f(s_i)/g(s_i) = 1/G(S)∑_s ∈ S f(s) = F(S)/G(S). The unconditional bias of k-can also be analyzed: Let p(S) = [s_i ∈ S] = G(S)/G(Ω). The bias of the k-estimator is [F̂_(S)] - F(S) = - (1-p(S))^n. In particular, bias decays exponentially with n and quickly becomes negligible, with magnitude at most ϵ for n ≥log(1/r)/log(1/ϵ) and r=1-p(S). Further, the bias is easily computable from the detector counts and therefore known prior to sampling, and the event that leads to a biased estimate (n(S)=0) is observed after sampling. All these factors make bias a very minor concern.[The k-estimator can be debiased by dividing by u = 1-(1-p(S))^n < 1. However, this leads to higher overall error: if n(S) = 0, the estimator is unchanged, and conditioned on the event n(S) > 0 the estimator becomes biased and has higher variance by a factor of 1/u^2 > 1.] Using Claim <ref>, we compute the unconditional expectation as [F̂_(S)] = [n(S) = 0] ·[F̂_(S) | n(S) = 0] + [n(S) > 0] ·[F̂_(S) | n(S) > 0 ] = [n(S) > 0] · F(S) = (1 - (1- G(S)/G(Ω))^n) · F(S). In the final line, 1 - G(S)/G(Ω) is probability that s_i ∉ S for a single i, and (1 - G(S)/G(Ω))^n = [n(S) = 0] is the probability that s_i ∉ S for all i. Rearranging gives the result. lemmaLemma [h(X) ·[X ∈ A]] = [x ∈ A] ·[h(X) | X ∈ A] for any random variable X and event A. Observe [h(X) ·[X ∈ A]] = ∑_x [X=x] h(x) [x ∈ A] = ∑_x [X=x, X ∈ A] h(x) = [X ∈ A] ∑_x [X=x | X ∈ A ] h(x) = [X ∈ A] ·[h(X) | X ∈ A]. §.§ Optimal allocation of samples for to disjoint regions The proof is similar to that of Theorem 5.6 in <cit.>. We prove the claim for k=2; the proof generalizes to larger k in an obvious way. The variance of on S_i is (F̂_(S_i)) = G(S_i)^2 ·σ^2(S_i)/n_i = σ^2 G(S_i)^2 /n_i. We want to minimize ∑_i (F̂_(S_i)), which with k=2 is proportional to V = G(S_1)^2/n_1 + G(S_2)^2/n_2. By the Cauchy-Shwarz inequality, for any n_1, n_2 > 0, Vn = (G(S_1)^2/n_1 + G(S_2)^2/n_2)(n_1 + n_2) ≥(G(S_1) + G(S_2) )^2. If we substitute n_i = G(S_i)/Z for any Z on the left of the inequality and simplify, we see the inequality becomes tight, so the minimum is achieved. We further require ∑_i n_i = n, so choose Z so n_i = n·G(S_i)/G(Ω) = n p(S_i). 
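The optimal allocation derived above is easy to compute from detector counts alone; the following tiny sketch (helper names are ours) illustrates it on made-up quarterly counts.

```python
def proportional_allocation(G_regions, n_total):
    """Optimal DISCount sample allocation to disjoint regions, n_i = n * G(S_i) / G(Omega);
    this also equals the expected per-region sample size under k-DISCount."""
    G_total = float(sum(G_regions))
    return [n_total * G / G_total for G in G_regions]

# e.g. quarterly detector counts of 1000, 6000, 2500, 500 birds and a budget of 40 labeled days
print(proportional_allocation([1000, 6000, 2500, 500], 40))   # -> [4.0, 24.0, 10.0, 2.0]
```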
§.§ k-variance Recall that p(S) = [s_i ∈ S] = G(S)/G(Ω) is the probability of a sample landing in S under the sampling distribution g̅_Ω. Define σ^2(S) = (f(s_i)/g(s_i) | s_i ∈ S) = ∑_s ∈ Sg(s)/G(S)·(f(s)/g(s) - F(S)/G(S))^2. to be the variance of the importance weight for s_i ∼g̅_S. Let r = 1 - p(S). The variance of the k-estimator is given by (F̂_(S)) = G(S)^2 ·σ^2(S) · (1-r^n) ·[ 1/n(S) | n(S) > 0] + F(S)^2 · r^n·(1-r^n). where (1-r)^n [ 1/n(S) | n(S) > 0] = ∑_j=1^n (1/j) · Binomial(j; n, p(S)). The second term in the variance arise from the possibility that no samples land in S; it decays exponentially in n and is negligible compared to the first term. The first term can be compared to the variance G(S)^2 ·σ^2(S) ·1/m of importance sampling with exactly m samples allocated to S and the proposal distribution g̅_S, i.e., . Because the sample size n(S) is random, the correct scaling factor for k-is (1-r^n) [1/n(S) | n(S) > 0], which it turns out is asymptotically equivalent to 1/(np(S)), i.e., with a sample size of m = n p(S) = [n(S)] — see Claim <ref> below. We find that for a small expected sample size (around 4) there can be up to 30% “excess variance” due to the randomness in the number of samples (see Figure <ref>), but that this disappears quickly with larger expected sample size. Let F̂_,n and F̂_,m be the k-and estimators with sample sizes n and m, respectively. The asymptotic variance of k-is given by lim_n →∞ n (F̂_,n(S)) = G(S)^2 ·σ^2(S)/p(S). This is asymptotically equivalent to with sample size m = [n(S)] = n p(S). That is lim_n →∞(F̂_,n(S))/(F̂_,⌈ n p(S) ⌉(S)) = 1. By the law of total variance, (F̂_(S)) = [(F̂_(S) | n(S))] + ([F̂_(S) | n(S)]). We will treat each term in Eq. <ref> separately. For the first term, from the definition of k-we see (F̂_(S) | n(S)) = 0 n(S)=0 G(S)^2 ·σ^2(S)/n(S) n(S) > 0 . Therefore [(F̂_(S) | n(S))] = G(S)^2 ·[n(S) > 0] [ σ^2(S)/n(S) | n(S) > 0] = G(S)^2 ·σ^2(S) · (1-r^n) ·[ 1/n(S) | n(S) > 0]. In the last line, we used the fact that n(S) ∼Binomial(n, p(S)), so [n(S) > 0] = 1-r^n where r = 1-p(S). The summation for (1-r^n)[1/n(S) | n(S) > 0] follows from the same fact. For the second term in Eq. (<ref>), from the definition of k-and conditional unbiasedness (Claim <ref>), we have [F̂_(S) | n(S)] = 0 if n(S)=0 F(S) if n(S) > 0 = F(S) ·Bernoulli( 1-r^n). The variance is therefore ([F̂_(S) | n(S)]) = F(S)^2 · r^n · (1-r^n). Putting the two terms together yields the result. By Claim <ref> we have lim_n →∞ n (F̂_,n(S)) = lim_n →∞ n · G(S)^2 ·σ^2(S) · (1-r^n) ·[ 1/n(S) | n(S) > 0] + lim_n →∞ n · F(S)^2 · r^n·(1-r^n). The second limit on the right side is zero, because n r^n → 0 as n →∞ (recall that r < 1) and the other factors are bounded. We will show the first limit on the right side is equal to G(S)^2 ·σ^2(S) / p(S), which will prove the first part of the result. The asymptotic expansion of <cit.> (Corollary 3) states that (1-r^n) [1/n(S) | n(S) > 0] = 1/n p(S) + 𝒪(1/(n p(S))^2). Using this expansion in the limit gives: lim_n →∞ n · G(S)^2 ·σ^2(S) · (1-r^n) · [ 1/n(S) | n(S) > 0] = lim_n →∞ G(S)^2 ·σ^2(S) / p(S) · ( 1 + 𝒪(1/n) ) = G(S)^2 ·σ^2(S) / p(S) The variance of with sample size m is (F̂_, m(S)) = G(S)^2 ·σ^2(S) / m. Setting m = ⌈ n p(S)⌉ and using the second to last line above we have lim_n →∞(F̂_,n(S))/(F̂_,⌈ n p(S) ⌉(S)) = lim_n →∞n (F̂_,n(S))/n (F̂_,⌈ n p(S) ⌉(S)) = lim_n →∞G(S)^2 ·σ^2(S)/p(S) · (1 + 𝒪(1/n))/n · G(S)^2 ·σ^2(S) / ⌈ n p(S) ⌉ = 1. 
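The excess-variance behaviour discussed above can be reproduced with a quick Monte Carlo experiment; the toy population below is ours and purely illustrative, but it keeps the expected sample size n·p(S) around 4, where the excess is largest.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 80                            # population size and total number of samples
g = rng.uniform(1.0, 10.0, size=m)        # detector counts
f = g * rng.uniform(0.5, 1.5, size=m)     # "screened" counts
S = np.arange(10)                         # small region: expected sample size n*p(S) is about 4
p = g / g.sum()
G_S = g[S].sum()

def k_discount_once():
    idx = rng.choice(m, size=n, p=p)                      # sample from all of Omega
    hit = idx[np.isin(idx, S)]
    return 0.0 if hit.size == 0 else G_S * np.mean(f[hit] / g[hit])

def discount_once(n_S):
    idx = rng.choice(S, size=n_S, p=g[S] / G_S)           # sample from S with a fixed sample size
    return G_S * np.mean(f[idx] / g[idx])

reps = 10000
m_S = max(int(round(n * p[S].sum())), 1)                  # matched expected sample size
var_k = np.var([k_discount_once() for _ in range(reps)])
var_d = np.var([discount_once(m_S) for _ in range(reps)])
print(f"excess variance of k-DISCount over DISCount: {var_k / var_d - 1:.1%}")
```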
§.§ Control Variates Recall that with control variates the weight is redefined as w_i = (f(s_i) - h(s_i))/g(s_i). The expectation of the weight given s_i ∈ S is [w_i | s_i ∈ S] = ∑_s ∈ Sg(s)/G(S)f(s) - h(s)/g(s) = F(S)-H(S)/G(S) Therefore [w̅_cv(S) | n(S) > 0] = F(S)-H(S)/G(S) Therefore [F̂_ cv(S) | n(S) > 0] = G(S) ·F(S)-H(S)/G(S) + H(S) = F(S) §.§ Confidence Intervals Let w_1, w_2, … be an iid sequence of importance weights for samples in S, i.e., w_i = f(s_i)/g(s_i) for s_i ∼g̅_S. Each weight w_i has mean F(S)/G(S) and variance σ^2(S). Let ω̅_n = 1/n∑_i=1^n w_i. By the central limit theorem, √(n)(ω̅_n - F(S)/G(S)) 𝒩(0, σ^2(S)) Recall that F̂_,n(S) = G(S) ·w̅_n(S) where w̅_n(S) is the average of the importance weights for samples that land in S when drawn from all of Ω (for clarity in the proof we add subscripts for sample size to all relevant quantities). It is easy to see that w̅_n(S) is equal in distribution to ω̅_n(S) where n(S) ∼Binomial(n, p(S)) and n(S) is independent of the sequence of importance weights — this follows from first choosing the number of samples that land in S and then choosing their locations conditioned on being in S. From Theorem 3.5.1 of <cit.> (with N_n = n(S) and c_n = n p(S)) it then follows that √(n(S))(w̅_n(S) - F(S)/G(S)) 𝒩(0, σ^2 (S)) Rearranging yields F̂_(S) - F(S)/G(S)/√(n(S))𝒩(0, σ^2 (S)) After dividing by σ̂_n(S), the result follows from Slutsky's lemma if σ̂^2_n(S) σ^2(S), which follows from a similar application of Theorem 3.5.1 of <cit.>. § COUNTING TASKS § PALU TSUNAMI REGIONS
http://arxiv.org/abs/2306.17845v1
20230630175914
Majorana-mediated thermoelectric transport in multiterminal junctions
[ "Raffael L. Klees", "Daniel Gresta", "Jonathan Sturm", "Laurens W. Molenkamp", "Ewelina M. Hankiewicz" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
These authors contributed equally to this work. Institute for Theoretical Physics and Astrophysics and Würzburg-Dresden Cluster of Excellence ct.qmat, Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany These authors contributed equally to this work. Institute for Theoretical Physics and Astrophysics and Würzburg-Dresden Cluster of Excellence ct.qmat, Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany Institute for Theoretical Physics and Astrophysics and Würzburg-Dresden Cluster of Excellence ct.qmat, Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany Experimental Physics III Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany Institute for Topological Insulators, Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany Institute for Theoretical Physics and Astrophysics and Würzburg-Dresden Cluster of Excellence ct.qmat, Julius-Maximilians-Universität Würzburg, D-97074 Würzburg, Germany The unambiguous identification of Majorana zero modes (MZMs) is one of the most outstanding problems of condensed matter physics. Thermal transport provides a detection tool that is sensitive to these chargeless quasiparticles. We study thermoelectric transport between metallic leads transverse to a Josephson junction. The central double quantum dot hosts conventional or topological Andreev states that depend on the phase difference ϕ. We show that the presence of MZMs can be identified by a significant amplification of both the electrical and thermal conductance at ϕ≈π as well as the Seebeck coefficient at ϕ≈ 0. We further investigate the robustness of our results against Cooper pair splitting processes. Majorana-mediated thermoelectric transport in multiterminal junctions Ewelina M. Hankiewicz July 31, 2023 ===================================================================== Introduction.—Josephson junctions (JJs) have been extensively studied in numerous works, driven by their wide range of applications, from metrology <cit.> and quantum simulation <cit.> to quantum computation <cit.>. Recently, topological JJs gained significant attention as they provide robust platforms hosting Majorana zero modes (MZMs) <cit.>. In particular, similar quantum-dot-based setups have demonstrated promising potential as platforms for flying qubits <cit.>, as these systems feature well-established quantum interferometers <cit.>. Even though the unequivocal detection of MZMs remains an open problem, several approaches have been proposed to investigate the topological nature of JJs, such as analyzing the current-phase relation in conventional and topological superconductors <cit.>. While the quantized electrical conductance was initially considered an exclusive feature of topological materials, it was later realized that it can arise from any zero-energy mode <cit.>. In contrast, thermal transport provides a sensitive tool to detect MZMs due to their neutral charge, with its signatures manifested in the thermal conductance <cit.> and voltage thermopower <cit.>. In quantum-dot-based multiterminal setups, it was reported that a possible smoking gun to detect MZMs is an opposite sign-behavior of the Seebeck coefficient as a function of the energy level of the quantum dot (QD) compared to a conventional superconductor <cit.>. In this Letter, we study the multiterminal system depicted in Fig. <ref>(a), which also represents a generalization of setups that investigate Cooper pair splitting (CPS) <cit.>. 
The central region consists of two noninteracting coupled QDs, each of them connected to one metallic lead and both are connected to two s-wave or p-wave superconducting leads. In contrast to previous discussions aforementioned, we find that the presence of a MZM does not generally result in the sign change of the Seebeck coefficient. Therefore, alternative methods are required for their definitive identification. For this purpose, we propose a measurement of the transverse thermoelectric transport coefficients as a function of the phase difference ϕ = φ_B - φ_T between the bottom (B) and top (T) superconductors. In particular, we show that the linear-response signals of both the electrical and thermal conductance around ϕ≈π as well as the Seebeck coefficient around ϕ≈ 0 show a huge amplification in the presence of MZMs. Double-quantum-dot model.—We consider the four-terminal junction shown in Fig. <ref>(a) with a central region formed by a double QD. Both QDs are considered to be spin-degenerate and noninteracting described by the Hamiltonian H_QD = ∑_α,σε_α d_ασ^† d_ασ, where d_ασ^(†) annihilates (creates) an electron of spin σ∈{↑, ↓} at the onsite energy ε_α∈ℝ of QD α∈{L,R}. The two QDs are coupled to each other by V_QD = -w ∑_σ (d_Lσ^† d_Rσ + d_Rσ^† d_Lσ ), where w ≥ 0 is the interdot coupling strength. The full model including the four leads is represented by the Hamiltonian H = H_QD + V_QD + H_leads + H_tunnel, where H_leads describes the four terminals that are each modeled by semi-infinite chains <cit.>. The details of the model are presented in Secs. I and II of the Supplemental Material (SM) <cit.>. The tunneling between the leads and the central system is described by H_tunnel, characterized by the coupling Γ_N,S for normal and superconducting leads, respectively. To study the effect of topological superconductivity on transverse thermoelectric transport, the superconductors are both either of conventional s-wave or topological p-wave type. In the latter case, we consider the whole system as spinless and the p-wave terminals are modelled by semi-infinite Kitaev chains <cit.>. As sketched in Fig. <ref>(b), the parameter Γ_CPS describes the nonlocal splitting of Cooper pairs into two electrons, each of them being transferred to a different QD. In general, Γ_CPS depends on the geometry of the contact as well as the coherence length of the Cooper pairs <cit.> and allows the modeling of more realistic situations. Transmission functions and band inversion.—We study the transmission functions between the left and right normal-metal electrodes as an effective two-terminal setup. Due to the connection to the top and bottom superconductors, the central double QD hosts Andreev bound states (ABS) that mediate the transverse thermoelectric transport. As shown in Sec. III of the SM <cit.>, the transmission functions between the terminals ℓ_1,ℓ_2 ∈{ L,R } in the spinless p-wave case read 𝒯_ℓ_1 ℓ_2^τ_1 τ_2(ε) = 4 Γ_N^2 G^r_ℓ_1τ_1, ℓ_2 τ_2 (ε) G^a_ℓ_2 τ_2 , ℓ_1 τ_1 (ε) , where G^r,a is the dressed retarded/advanced Green's function of the double QD and τ_1, τ_2 ∈{e,h} are the electron-hole components in Nambu space. For two s-wave terminals, all Green's functions are spin-symmetric due to the absence of spin-flip terms, which results in an additional factor of 2 in the transmission functions in Eq. (<ref>). Due to particle-hole symmetry, the transmission functions satisfy 𝒯_ℓ_1 ℓ_2^τ_1 τ_2(ε) = 𝒯_ℓ_1 ℓ_2^τ̅_1 τ̅_2(-ε), with τ̅ = h,e if τ = e,h. 
This allows us to focus on electron cotunneling (EC) 𝒯_RL^ee, local Andreev reflection (LAR) 𝒯_RR^eh, and crossed Andreev reflection (CAR) 𝒯_RL^eh for the following discussion. The transverse thermoelectric properties of the junction are determined by these transmission functions, as presented in Eqs. (<ref>)-(<ref>) below. The main differences between conventional s-wave and topological p-wave terminals are best observed in the EC transmission function 𝒯_RL^ee and in the local density of states (LDOS) of the central region, defined as Tr[ Im(G^a)], where the trace runs over site and particle-hole degrees of freedom. Therefore, we first show in Fig. <ref>(a) the LDOS as a function of the phase difference ϕ for strong CPS (Γ_CPS = Γ_S) and symmetric dot levels (ε_L = ε_R). For both conventional and topological leads, we find a single pair of non-dispersive states as well as a single pair of ABS at energies |ε| < Δ, where Δ is the superconducting order parameter of the top and bottom terminals. While the ABS are changing with ϕ, the constant states are fixed at energies ε≈± (ε_avg + w) due to the strong interference enabled by Γ_CPS = Γ_S, where ε_avg = (ε_L + ε_R)/2 is the average energy level of the double QD. Generally, the ABS in the s-wave case are gapped around ϕ = π, while there is always a protected MZM at ϕ = π for p-wave terminals. In Fig. <ref>(b), we show the EC transmission function 𝒯_RL^ee. In both cases, the electronic constant resonant state shows a strong transmission at ε≈ε_avg + w and increasing w > 0 leads to a shift of the constant resonant line toward positive energies. Pushing these states further to energies |ε| > Δ reveals the resonant behavior of the dispersing ABS. While there is strong electron transmission for the ABS at negative energy in the s-wave case, the p-wave configuration shows a dominant electron transmission at positive energies. This relative inverse behavior is a strong signature of band inversion in topological p-wave superconductors. However, the overall behavior in both cases is not fixed to be particle- or hole-like and can be changed by changing the QD energies. In the symmetric case ε_L = ε_R, the inversion point follows the simple condition w =|ε_avg|, as shown in Sec. IV of the SM <cit.>. For asymmetric values, ε_L ≠ε_R, the constant energy states hybridize with the dispersing ABS. However, as shown in Sec. IV of the SM <cit.>, the transmission function is then still dominated by a resonant line at energies ε≈ε_avg + w. This neither changes the qualitative behavior nor our previous discussion. Finally, smaller Γ_CPS < Γ_S, as being of relevance in CPS experiments <cit.>, will lead to the appearance of a second pair of dispersing ABS, while the formerly constant energy states start to merge with the continuum |ε| > Δ; see Sec. IV of the SM <cit.>. In that sense, this parameter interpolates between double-dot and effective (multilevel) single-dot behavior, the latter defined by Γ_CPS = 0 [cf. Fig. <ref>(b)]. Thermoelectric response.—We are interested in the thermoelectric charge and heat currents, I_R and J_R, respectively, in the right normal-metal terminal in linear response. Note that we are only interested in the low-energy response around the Fermi energy with an equilibrium chemical potential μ = 0 in superconducting systems. We consider the small voltage or temperature bias, δ V or δ T, respectively, to be applied to the right terminal, which implies T_L = T, μ_L = 0, T_R = T + δ T, and μ_R = e δ V. 
Then, the Onsager relations read <cit.> [ I_R / e; J_R / (k_B T) ] = 1/h[ ℒ_11 ℒ_12; ℒ_21 ℒ_22; ][ e δ V; k_B δ T ] , where ℒ = (ℒ_mn)_m,n=1,2 is the Onsager matrix with ℒ_mn = ∫_-∞^∞(ε/k_B T)^m+n-2𝒯_mn(ε) ( - ∂ f/∂ε) dε . Here, f(ε) = (1 + e^ε / (k_B T) )^-1 is the equilibrium Fermi function and T,k_B,e,h > 0 are the temperature, Boltzmann constant, elementary charge, and Planck constant, respectively. Using Eq. (<ref>) and particle-hole symmetry, the total transmission functions read <cit.> 𝒯_11 = 𝒯_21 = 𝒯_RL^ee + 𝒯_RL^eh + 2𝒯_RR^eh , 𝒯_12 = 𝒯_22 = 𝒯_RL^ee + 𝒯_RL^eh . The linear electrical conductance and the Peltier coefficient, measured at thermal equilibrium δ T = 0, read G = G_0 ℒ_11 and Π = k_B T ℒ_21 / (e ℒ_11), respectively, where G_0 = e^2 / h is the electrical conductance quantum. The linear thermal conductance and Seebeck coefficient, measured at I_R = 0, read K = 3 K_0 det(ℒ) / (π^2 ℒ_11) and S = k_B ℒ_12 / (e ℒ_11), respectively, where K_0 = π^2 k_B^2 T / (3h) is the thermal conductance quantum <cit.>. Since electrons and holes from the same terminal see the same QD, 𝒯_RR^eh will always be a symmetric function of energy ε resulting in ℒ_12 = ℒ_21 and Π = T S. As a remark, note that the Wiedemann-Franz law <cit.>, K ∝ G, which would imply ℒ_22∝ℒ_11 if ℒ_12≪ℒ_11 at low temperature, is generally violated due to LAR contributions to I_R if the voltage bias is applied to the right terminal <cit.>. In contrast, a temperature gradient does not generate LAR contributions to the thermal current J_R. For completeness, we show this violation in Sec. IV of the SM <cit.>, which is strong (weak) around phase differences ϕ≈π for p-wave (s-wave) terminals. Seebeck coefficient.—In general, the Seebeck coefficient S is a measure of whether the thermoelectric transport through a system is particle- or hole-dominated. In Fig. <ref>, we first discuss the influence of band inversion on S for both the s- and p-wave case (solid and dashed curves, respectively). We turn our attention to the two characteristic phase differences ϕ = 0 and ϕ = π, for which S is the largest, as a function of ε_L for different Γ_CPS. On a qualitative level, changing Γ_CPS does not change too much the general behavior of S in the s-wave case, while there is a strong dependence for two p-wave terminals. In the latter case for strong Γ_CPS (dashed blue lines in Fig. <ref>), the magnitude of the Seebeck coefficient is the same regardless of the phase difference and the region in which the sign change appears is very narrow. This robust behavior can be explained with the presence of the resonant non-dispersing level that appears at the energy ε≈ε_avg + w. The only qualitative difference between the dashed blue lines in Fig. <ref> is the presence of the MZM at ϕ = π [Fig. <ref>(b)] with a large conductance G, which leads to a suppression of S ∝ G^-1 everywhere else. When Γ_CPS < Γ_S, the p-wave case can be well distinguished from the s-wave case with the aid of the phase difference ϕ. Due to the small conductance, the Seebeck coefficient at ϕ = π in the s-wave case is one order of magnitude larger [orange and black solid curves in Fig. <ref>(a)] than the similar topological case [orange and black dashed curves in Fig. <ref>(b)]. In contrast at ϕ = 0, the topological case shows a huge Seebeck response [orange and black dashed curves in Fig. <ref>(a)] for the same reason, while the corresponding conventional case is one order of magnitude smaller [orange and black solid curves in Fig. <ref>(b)]. Furthermore, note that S in Fig. 
<ref>(a) is solely determined by the EC transmission function 𝒯_RL^ee <cit.>, as LAR and CAR are absent for these phase differences at the symmetric choice of the direct couplings to the superconductors [cf. Fig. <ref>] and remain negligible for small asymmetries. However, LAR is not negligible in the conductance at other phases [i.e., Fig. <ref>(b)] and it is dominant in the p-wave case due to the MZM <cit.>. Effect of the phase difference and CPS.—As we show in the following, measuring the thermoelectric coefficients in combination with tuning the applied phase difference across the JJ allows to identify the presence of MZMs. For this purpose, we show in Fig. <ref> the electrical conductance G, the Seebeck coefficient S, and the thermal conductance K for different JJs. As a first observation in the s-wave case [Fig. <ref>(a)], G, S, and K show only an overall weak signal that is largest at ϕ = π. While both G and K do not significantly change on a qualitative level for different Γ_CPS, S shows a sign change around ϕ≈π as Γ_CPS is increased. In contrast, the magnitude of the signals in the p-wave case [Fig. <ref>(b)] are way larger compared to the s-wave case. In particular, G shows the expected quantized conductance peak from LAR at the MZM at ϕ = π with G = 2 G_0 <cit.> that is also robust against increasing Γ_CPS. At Γ_CPS = Γ_S, G is generally not quantized, but drops to G = G_0 in the case of symmetric couplings and equal dot levels ε_L = ε_R <cit.>. Furthermore, K only shows a significant signal around phase differences close to ϕ≈π for a large range of Γ_CPS≳ 0.3 Γ_S that increases for increasing Γ_CPS. This behavior is a consequence of the violation of the Wiedemann-Franz law due to the presence of LAR; see Eq. (<ref>) and the SM <cit.>. Although K is zero at exactly ϕ = π for a large range of Γ_CPS, it increases to the quantized value of K = K_0/2 in the case of symmetric coupling and equal dot energy <cit.>, which was also reported in Ref. <cit.> for a continuous setup. In addition, S takes large values for small Γ_CPS at ϕ≈ 0 due to the vanishingly small electrical conductance. This region [dark red area for S in Fig. <ref>(b)] also extends to larger phase differences at large Γ_CPS≲Γ_S. Conclusion.—In the four-terminal setup shown in Fig. <ref>, we have investigated how ABS mediate the thermoelectric transport between two normal-metal contacts transverse to a Josephson junction. In contrast to previous claims <cit.>, we have shown that an inverted Seebeck coefficient is not a unique feature of MZMs and that ABS coexisting with resonant states [Fig. <ref>] or the strength of CPS in multi-dot systems also induce sign changes even in conventional JJs [Figs. <ref> and <ref>(a)]. Moreover, we have compared the linear thermoelectric transport coefficients between conventional and topological JJs [Fig. <ref>]. In general, the signals in the conventional case are expected to be one to three orders of magnitude smaller than for the topological case. In particular, both the electrical and thermal conductance are strongly amplified at ϕ≈π in the presence of MZMs, while the Seebeck coefficient is large around ϕ≈ 0 due to the vanishingly small conductance. This should represent a phase-sensitive and robust detection criterion for MZMs in such devices. There are still numerous open questions in these types of systems, e.g., how these results connect to larger-scale continuous junctions or how the Andreev states in a mixed s- and p-wave situation influence the thermoelectric properties. 
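To make the linear-response relations above concrete, the following sketch evaluates the Onsager integrals numerically for given total transmissions 𝒯_11 = 𝒯_21 and 𝒯_12 = 𝒯_22 sampled on an energy grid; it works in reduced units (e = k_B = h = 1), and the function and variable names are ours rather than part of any published code.

```python
import numpy as np

def onsager_matrix(energies, T11, T12, kT):
    """Evaluate L_mn = int (eps/kT)^(m+n-2) T_mn(eps) (-df/deps) deps on an
    equally spaced energy grid; T11 = T21 and T12 = T22 are arrays on `energies`."""
    x = energies / kT
    minus_df = 1.0 / (4.0 * kT * np.cosh(x / 2.0) ** 2)   # -df/deps of the Fermi function
    de = energies[1] - energies[0]
    def integrate(power, Tmn):
        return np.sum(x ** power * Tmn * minus_df) * de
    L11 = integrate(0, T11)
    L12 = integrate(1, T12)
    L21 = integrate(1, T11)                               # T21 = T11
    L22 = integrate(2, T12)                               # T22 = T12
    return np.array([[L11, L12], [L21, L22]])

def transport_coefficients(L):
    """G in units of G0 = e^2/h, Seebeck S in units of k_B/e, K in units of K0."""
    G = L[0, 0]
    S = L[0, 1] / L[0, 0]
    K = 3.0 / np.pi ** 2 * (L[0, 0] * L[1, 1] - L[0, 1] * L[1, 0]) / L[0, 0]
    return G, S, K
```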
We acknowledge Martin Stehno and Yi-Ju Ho for fruitful discussions. We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through SFB 1170, Project-ID 258499086, through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter – ct.qmat (EXC2147, Project-ID 390858490). L.W.M. acknowledges support from the Free State of Bavaria for the Institute for Topological Insulators. § SUPPLEMENTAL MATERIAL §.§ I. Boundary Green's functions of the four terminals The left (L), right (R), top (T), and bottom (B) leads in the four-terminal junction that is sketched in Fig. <ref>(a) of the main text are all modeled by semi-infinite tight-binding chains, generally described by the set of Hamiltonians H_leads = H_L^(n) + H_R^(n) + H_T^(s/p) + H_B^(s/p). We assume that the left and right terminals are normal metallic electrodes (superscript n), while the top and bottom leads are superconductors of either conventional s-wave or topological p-wave type (superscript s or p). The Hamiltonian H_ℓ^(s) for the conventional s-wave superconductors is given by the tight-binding chain H_ℓ^(s) = ∑_j[ -t ∑_σ ( c_ℓ,j,σ^† c_ℓ,j+1,σ + c_ℓ,j+1,σ^† c_ℓ,j,σ ) + Δ e^iφ_ℓ c_ℓ,j,↑^† c_ℓ,j,↓^† + Δ e^-iφ_ℓ c_ℓ,j,↓ c_ℓ,j,↑] = ∑_k (c_ℓ,k,↑^†, c_ℓ,-k,↓) ( -2 t cos(k) τ_3 + Δ e^iφ_ℓτ_3τ_1 ) [ c_ℓ,k,↑; c_ℓ,-k,↓^† ], where c_ℓ,j,σ = ∑_k c_ℓ, k,σ e^- i j k / √(N) annihilates and c_ℓ,j,σ^† = ∑_k c^†_ℓ, k,σ e^i j k / √(N) creates an electron of spin σ∈{↑, ↓} on site j in terminal ℓ = T,B. Furthermore, t > 0 is the hopping parameter that is related to the lattice spacing defining the effective bandwidth in the model, the number of sites is N →∞, Δ≥ 0 is the superconducting order parameter, and φ_ℓ is its phase. We also introduced a set of Pauli matrices τ_1, τ_2, and τ_3 in Nambu space, with τ_0 being the identity. If the top and bottom superconducting terminals are of topological p-wave type, we describe them by spinless Kitaev chains (at zero onsite potential) with the Hamiltonian <cit.> H_ℓ^(p) = 1/2∑_j [ -t (c^†_ℓ, j c^†_ℓ, j+1 + c^†_ℓ, j+1 c_ℓ, j ) + Δ e^iφ_ℓ c^†_ℓ, j c^†_ℓ, j+1 + Δ e^- iφ_ℓ c_ℓ, j+1^† c_ℓ, j^†] = 1/2∑_k ( c^†_ℓ,k , c_ℓ,-k ) ( - t cos(k) τ_3 + Δsin(k) e^iφ_ℓτ_3τ_2 ) [ c_ℓ,k; c^†_ℓ,-k ] . Note that the topological p-wave terminal is considered to be spinless and, hence, c_ℓ,j^(†) does not depend on spin. In both the s-wave and p-wave case, the normal-metal Hamiltonian H_ℓ^(n) (ℓ = L,R) follows from H_ℓ^(s) for Δ = 0. Following standard Green's function techniques <cit.>, we obtain the bare matrix boundary Green's functions [in their respective basis defined in Eqs. (<ref>) and (<ref>)] for the semi-infinite terminals as g^(n)_ℓ(z) = - i/tsgn(Im(z))τ_0, ℓ = L,R , g_ℓ^(s)(z) = - (z τ_0 + Δ e^i φ_ℓτ_3τ_1) /t √(Δ^2 - z^2 ) , ℓ = T,B , g_ℓ^(p)(z) = √(Δ^2 - z^2 )τ_0 + Δ_ℓ e^i φ_ℓτ_3τ_1 /t z , ℓ = T,B , where z = ε + i η, with energy ε∈ℝ and a small Dynes parameter η→ 0^± <cit.>. In our numerical calculations we used |η| = 10^-3Δ for Figs. <ref> and <ref> and |η| = 10^-5Δ for Fig. <ref> in the main text. The retarded and advanced Green's functions are defined for η > 0 and η < 0, respectively. We further assumed that the band parameter t, which also plays the role of the inverse normal-metal density of states at the Fermi energy, is the largest energy scale in the total system (wide-band approximation) <cit.>. 
Finally, note that the order parameter in the topological case reads Δ_T = Δ and Δ_B = - Δ due to the p-wave nature of the superconducting pairing <cit.>. In the wide-band limit, the normal-metal Green's function g^(n)_ℓ(z) in Eq. (<ref>) is constant in energy, while the conventional superconductor described by g_ℓ^(s)(z) in Eq. (<ref>) shows the characteristic BCS singularities at the energies ε = ±Δ. In contrast, the topological superconductor described by g_ℓ^(p)(z) in Eq. (<ref>) hosts a single Majorana state at ε = 0. §.§ II. Dressed central double-quantum dot and couplings As explained in the main text, the central quantum-dot system is described by an interconnected double-quantum dot. In the spinful (i.e., s-wave) case, the two quantum dots are considered to be spin-degenerate and noninteracting, as described by the Hamiltonian H_QD^(s) = ∑_α∑_σε_α d_ασ^† d_ασ - w ∑_σ (d_Lσ^† d_Rσ + d_Rσ^† d_Lσ ) = (d_L↑^† , d_L↓ , d_R↑^† , d_R↓ ) [ [ ε_L -w; -w ε_R ]⊗τ_3 ] _Ĥ_QD[ d_L↑; d_L↓^†; d_R↑; d_R↓^† ] , where d_ασ^(†) annihilates (creates) an electron of spin σ∈{↑, ↓} at the onsite energy ε_α∈ℝ of QD α∈{L,R} and w ≥ 0 is the coupling strength between the two quantum dots. For the spinless p-wave case, we get the similar result H_QD^(p) = ∑_αε_α d_α^† d_α - w (d_L^† d_R + d_R^† d_L ) = 1/2 (d_L^† , d_L , d_R^† , d_R ) Ĥ_QD[ d_L; d_L^†; d_R; d_R^† ] . As sketched in Fig. <ref>(a) in the main text, the left (right) normal-metal terminal is only coupled to the left (right) quantum dot, while the top and bottom superconducting terminals are each coupled to both quantum dots simultaneously. The full coupling Hamiltonian between the terminals and the central quantum dot system reads H_tunnel^(s) = ∑_ℓ = L,R,T,B H_ℓ D^(s), where in the s-wave case H_ℓ D^(s) = ∑_ k σ (-t_ℓ d_ℓσ^† c_ℓ k σ + h.c.) = ∑_k( (d_ℓ↑^† , d_ℓ↓) V_ℓ[ c_ℓ k ↑; c_ℓ,-k,↓^† ] + h.c.), ℓ = L,R , H_ℓ D^(s) = ∑_α = L,R∑_ k σ (-t_ℓα d_ασ^† c_ℓ k σ + h.c.) = ∑_k( (d_α↑^† , d_α↓) V_ℓα[ c_ℓ k ↑; c_ℓ,-k,↓^† ] + h.c.), ℓ = T,B , and in the p-wave case H_ℓ D^(p) = ∑_ k (-t_ℓ d_ℓ^† c_ℓ k + h.c.) = 1/2∑_k( (d_ℓ^† , d_ℓ) V_ℓ[ c_ℓ k; c_ℓ,-k^† ] + h.c.), ℓ = L,R , H_ℓ D^(p) = ∑_α = L,R∑_ k (-t_ℓα d_α^† c_ℓ k + h.c.) = 1/2∑_α = L,R∑_ k ( (d_α^† , d_α) V_ℓα[ c_ℓ k; c_ℓ,-k^† ] + h.c.), ℓ = T,B . We also defined the coupling matrices V_ℓ = -t_ℓτ_3 and V_ℓα = -t_ℓατ_3, where t_ℓ, t_ℓα≥ 0. We use the Dyson equation G = g + gΣ G to calculate the dressed retarded and advanced Green's function Ĝ_CC of the central system as Ĝ_CC(z) = [ G_LL(z) G_LR(z); G_RL(z) G_RR(z) ] = (z 1̂_4 - Ĥ_QD - Σ̂(z) )^-1, where 1̂_4 is a 4× 4 unit matrix, z = ε + i η [cf. Eq. (<ref>)], and the self-energy matrix is given by Σ̂(z) = [ Σ_LL(z) Σ_LR(z); Σ_RL(z) Σ_RR(z) ] , where Σ_ℓℓ'(z) = δ_ℓℓ' V_ℓ g_ℓ^(n)(z) V_ℓ + ∑_α = T,B V_α,ℓ g_α^(s/p)(z) V_α,ℓ' and δ_ℓℓ' is the Kronecker-δ, which simplifies to Σ_ℓℓ'(z) = - i δ_ℓℓ't_ℓ^2 /tsgn(Im(z)) τ_0 + ∑_β = T,B- t_βℓ t_βℓ' (z τ_0 - Δ e^i φ_βτ_3τ_1)/ t √(Δ^2 - z^2) (s-wave) , Σ_ℓℓ'(z) = - i δ_ℓℓ't_ℓ^2 /tsgn(Im(z)) τ_0 + ∑_β = T,Bt_βℓ t_βℓ' (√(Δ^2 - z^2)τ_0 - Δ_β e^i φ_βτ_3τ_1)/ t z (p-wave) . In Eq. (<ref>), the origin of the first term is the coupling of the left (right) quantum dot to the left (right) normal-metal terminal, while the second term arises due to the coupling of both dots to both top and bottom superconducting leads. In addition, the coupling of both dots to a single superconductor adds nonlocal terms Σ_LR(z) and Σ_RL(z) that lead Cooper pair splitting (CPS) into two electrons. 
As introduced in the main text, the normal-metal terminals are coupled to the double quantum dot with the effective couplings Γ_ℓ = t_ℓ^2 / t ≥ 0 (ℓ = L,R), while the superconducting terminals are locally coupled to the double quantum dot with the effective couplings Γ_β,ℓ = t_β,ℓ^2 / t ≥ 0 (β = T,B). In addition, the nonlocal couplings Γ_β,NL = t_β,L t_β,R/t ≥ 0 control the CPS. Since in general experimental situations we have Γ_β,NL < √(Γ_β,LΓ_β,R), we consider Γ_β,NL as an independent parameter in our theory. In the main text, we choose symmetric couplings for simplicity: Γ_N := Γ_L = Γ_R, Γ_S := Γ_T,L = Γ_T,R = Γ_B,L= Γ_B,R, and Γ_CPS := Γ_T,NL = Γ_B,NL. Note that for the symmetric choice of couplings and the assumption of equal magnitude of the pairings, the self-energies simplify to Σ_ℓℓ'(z) = - i δ_ℓℓ't_ℓ^2 /tsgn(Im(z)) τ_0 - t_S,ℓ t_S,ℓ'/ t √(Δ^2 - z^2)( 2 z τ_0 - Δ [ e^i φ_T τ_3 + e^i φ_B τ_3 ] τ_1 ) (s-wave) . Σ_ℓℓ'(z) = - i δ_ℓℓ't_ℓ^2 /tsgn(Im(z)) τ_0 + t_S,ℓ t_S,ℓ'/ t z (2 √(Δ^2 - z^2)τ_0 - Δ [e^i φ_T τ_3 - e^i φ_B τ_3 ] τ_1) (p-wave) . Hence, there is no effect of superconductivity at a phase difference ϕ = φ_B - φ_T = π in the s-wave case, while the same effect happens at a phase difference ϕ = φ_B - φ_T = 0 in the p-wave case, which leads to the absence of both local and crossed Andreev reflection at these phase differences. §.§ III. Transmission functions for charge and heat currents In the following, we focus on the electric current I_R and the heat current J_R in the right normal-metal contact that are generated by applying a voltage or thermal bias to the right contact. Since we do not consider a voltage bias between the superconducting terminals, these currents will be stationary (i.e., time-independent). Hence, by starting from the definition of the currents I_R = - e ⟨ d N_R / dt ⟩ and J_R = ⟨ d (H_R-μ_R N_R) / dt ⟩, with the elementary charge e>0 and N_R, H_R, and μ_R being the particle number operator, the Hamiltonian, and the chemical potential in the right contact, respectively, and by using a Fourier transformation to energy space, we get [ I_R; J_R ] = 1/2h∫_-∞^∞Tr{[ e τ_3; ετ_0 - μ_R τ_3 ][ V_RC G_CR^<(ε) - G_RC^<(ε) V_CR] } dε . Note that the trace is taken over the particle-hole (Nambu) degrees of freedom. Furthermore, h is the Planck constant, G_RC^<(ε) and G_CR^<(ε) are lesser dressed Green's functions, and V_ℓ C = V_C ℓ = -t_ℓτ_3 are hoppings between the central quantum dot and the terminals. We use the Dyson equations for the lesser Green's function <cit.>, G_CR^< = G_CC^< V_CR g_R^a + G^r_CC V_CR g_R^< , G_RC^< = g_R^< V_RC G_CC^a + g^r_R V_RC G_CC^< , where the superscripts r and a are for the retarded and advanced Green's functions, respectively. Using the general relation G^< - G^> = G^a - G^r that holds for both dressed and bare Green's functions <cit.>, Eq. (<ref>) becomes [ I_R; J_R ] = 1/2h∫_-∞^∞Tr{[ e τ_3; ετ_0 - μ_R τ_3 ][ V_RC G^>_CC V_CR g_R^< - V_RC G_CC^< V_CR g_R^> ] } dε . Finally, we use the symmetric version of the Dyson equation for the dressed lesser and greater Green's function <cit.> G_CC^<,> = ∑_ℓ = L,R G^r_CC V_Cℓ g^<,>_ℓ V_ℓ C G^a_CC to arrive at [ I_R; J_R ] = 1/2h∑_ℓ = L,R∫_-∞^∞Tr{[ e τ_3; ετ_0 - μ_R τ_3 ] V_RC G^r_CC V_Cℓ[ g^>_ℓ V_ℓ C G^a_CC V_CR g_R^< - g^<_ℓ V_ℓ C G^a_CC V_CR g_R^> ] } dε . Note that we neglect the current contributions from and into the superconductors in Eq. (<ref>) since we are only interested in linear response around zero energy. 
The uncoupled normal terminals are in equilibrium, in which the bare lesser and greater Green's function is given by g_ℓ^< = (g_ℓ^a - g_ℓ^r) [ f_ℓ^e 0; 0 f_ℓ^h ] (<ref>)=2i/t[ f_ℓ^e 0; 0 f_ℓ^h ] , g_ℓ^> = -(g_ℓ^a - g_ℓ^r) [ 1-f_ℓ^e 0; 0 1-f_ℓ^h ] (<ref>)= - 2i/t[ 1- f_ℓ^e 0; 0 1- f_ℓ^h ] . Here, f^e,h_ℓ(ε) = 1/[1+e^(ε∓μ_ℓ)/(k_B T_ℓ)] is the Fermi function for electrons and holes, respectively, of the left and right terminals, which are at chemical potential μ_ℓ and temperature T_ℓ, and k_B is Boltzmann's constant. Substituting Eq. (<ref>) into Eq. (<ref>) and taking the trace over particle-hole space, we finally obtain [ I_R; J_R ] = 1/2h∫_-∞^∞[ e ( 𝒯_RL^ee (f_R^e - f_L^e) + 𝒯_RL^eh (f_R^e - f_L^h) - 𝒯_RL^he (f_R^h - f_L^e) - 𝒯_RL^hh (f_R^h - f_L^h) ); (ε-μ_R) [𝒯_RL^ee (f_R^e-f_L^e) + 𝒯_RL^eh (f_R^e-f_L^h) ] + (ε+μ_R) [𝒯_RL^he (f_R^h-f_L^e) + 𝒯_RL^hh (f_R^h-f_L^h) ] ] dε + 1/2h∫_-∞^∞[ e ( 𝒯_RR^eh + 𝒯_RR^he ); (ε - μ_R ) 𝒯_RR^eh - (ε + μ_R ) 𝒯_RR^he ] ( f_R^e - f_R^h ) dε , where we defined the transmission functions 𝒯_ℓ_1 ℓ_2^τ_1 τ_2(ε) = 4 Γ_ℓ_1Γ_ℓ_2 G^r_ℓ_1τ_1, ℓ_2 τ_2 (ε) G^a_ℓ_2 τ_2 , ℓ_1 τ_1 (ε) . The special case for Γ_L = Γ_R = Γ_N is presented in Eq. (<ref>) of the main text. Using particle-hole symmetry, which states f^τ_ℓ(ε) = 1-f^τ̅_ℓ(-ε) and 𝒯_ℓ_1 ℓ_2^τ_1 τ_2(ε) = 𝒯_ℓ_1 ℓ_2^τ̅_1 τ̅_2(-ε) with τ̅ = e,h if τ = h,e, we can further simplify Eq. (<ref>) to obtain [ I_R; J_R ] = 1/h∫_-∞^∞[ e; ε-μ_R ][ 𝒯_RL^ee (f_R^e-f_L^e) + 𝒯_RL^eh (f_R^e-f_L^h) + 𝒯_RR^eh (f_R^e-f_R^h) ] dε . As explained in the main text, we consider a small voltage δ V and thermal bias δ T applied at the right normal-metal terminal, i.e.: e δ V ≪ k_B T and δ T ≪ T, respectively. Hence, we choose T_L = T, μ_L = 0, T_R = T + δ T, and μ_R = e δ V. By expanding the integrand in Eq. (<ref>) to linear order in δ T and δ V, we arrive at the linear Onsager relations presented in Eqs. (<ref>)-(<ref>) in the main text. §.§ IV. Complementary figures Figure <ref> shows the transition between the hole- and electron-dominated transport regimes by means of the EC transmission function 𝒯^ee_RL. In general, the crossover between these two regimes takes place whenever the gap in the s-wave case is closed at ϕ = π (upper row in Figure <ref>). This also happens at the same parameter set in the p-wave case, although the gap is always closed due to its topological protection and the presence of a Majorana zero mode (MZM) at ϕ = π. Note that the EC transmission function is symmetric in energy ε at the crossover point and does not contribute to the off-diagonal Onsager coefficients ℒ_12 and ℒ_21. For a symmetric choice of QD energies ε_L = ε_R [Fig. <ref>(a)], the crossover takes place at ε_avg = w. The same closing of the gap happens also in the asymmetric case ε_L ≠ε_R shown in Fig. <ref>(b), which, however, shifts the crossover point to a different value of ε_avg. Figure <ref> shows the complementary cases to Fig. <ref> that were briefly mentioned in the main text. In particular, in Figs. <ref>(a) and <ref>(b) we show the LDOS and EC transmission function for small asymmetric values of the quantum dot energies, ε_L ≠ε_R. Again, for Γ_CPS = Γ_S, we observe the constant resonant line in the EC transmission function which we also see in the symmetric case, although now the states are hybridized. Furthermore, as shown in Figs. <ref>(c) and <ref>(d), the constant resonant line disappears for weak CPS and a second pair of Andreev states emerges. §.§ V. 
Violation of the Wiedemann-Franz law Figure <ref> shows the violation of the Wiedemann-Franz (WF) law for different phase differences ϕ in the s- and p-wave case. In the s-wave case, shown in Fig. <ref>(a), this violation is always present at phases ϕ≠π and becomes largest at ϕ = 0. At ϕ = π, the WF law is satisfied due to the symmetric choice of all couplings to the superconductors, which results in vanishing CAR and LAR transmission functions. Furthermore, the qualitative behavior of the violation does not change between weak or strong CPS controlled by Γ_CPS. In contrast, the p-wave case [Fig. <ref>(b)] shows a violation of the WF law for all phases (except some isolated points where L = L_0), as long as Γ_CPS < Γ_S. In the strong CPS case Γ_CPS = Γ_S, the violation is appearing around the phase difference ϕ≈π, i.e., the region close to the MZM. §.§ VI. Linear response electrical and thermal conductance at low temperature The general elements of the Onsager matrix ℒ are given in Eq. (<ref>) in the main text. Their low-temperature behavior is obtained by means of a Sommerfeld expansion of ℒ_mn, resulting in <cit.> ℒ_11≈𝒯_11(0) , ℒ_12≈π^2/3 k_B T . ∂𝒯_12/∂ε|_ε=0 , ℒ_21≈π^2/3 k_B T . ∂𝒯_21/∂ε|_ε=0 , ℒ_22≈π^2/3𝒯_22(0) , where the general transmission functions 𝒯_mn are defined in Eq. (<ref>) in the main text. Note that, although 𝒯_21≠𝒯_12 due to local Andreev reflection, we still have ℒ_12 = ℒ_21 since the corresponding transmission function 𝒯_RR^eh is symmetric in energy and its derivative at zero energy is zero. From these equations, we get the low-temperature behavior of the electrical and thermal conductance as G/G_0 = ℒ_11≈𝒯_11(0), K/K_0 = 3/π^2 det(ℒ) /ℒ_11≈𝒯_22(0) - π^2/3 (k_B T)^2 /𝒯_11(0) . ∂𝒯_12/∂ε|_ε=0. ∂𝒯_21/∂ε|_ε=0 , resulting in K ≈ K_0 𝒯_22(0) at low temperature as long as the second term remains negligible. Since in general 𝒯_11≠𝒯_22 due to LAR, Eq. (<ref>) shows the violation of the WF law that is illustrated in Fig. <ref>. The low-temperature limit allows us to get analytical results for both conventional and topological Josephson junctions, since we only have to determine the retarded and advanced Green's functions at zero energy by replacing z = ± i η. Keeping in mind that η→ 0^+ in the end, we also perform an expansion for small η. To keep the analysis simple, we use Γ_S := Γ_T,L = Γ_T,R = Γ_B,L= Γ_B,R and focus on the two cases Γ_CPS = 0 and Γ_CPS = Γ_S for phases ϕ = π and ϕ≠π. For a topological Josephson junction, evaluating the transmission functions in the presence of a MZM requires special care, since the result changes drastically for ϕ = π. The transverse transmission functions become 𝒯_RL^ee(0) = {[ 𝒪(η^4) , ϕ≠π and Γ_CPS = 0; 𝒪(η^2) , ϕ = π and Γ_CPS = 0; ; 4 Γ_L Γ_R /(Γ_L + Γ_R)^2 + (ε_L + ε_R + 2 w)^2 + 𝒪(η) , ϕ≠π and Γ_CPS = Γ_S; ; Γ_L Γ_R ( Γ_L^2+( ε_L + w)^2 ) (Γ_R^2+(ε_R + w)^2 ) /[ Γ_L (Γ_R^2+(ε_R + w)^2) + Γ_R (Γ_L^2+(ε_L + w)^2) ]^2 + 𝒪(η) , ϕ = π and Γ_CPS = Γ_S ]. , 𝒯_RL^eh(0) = {[ 0 , ϕ = 0; 𝒪(η^6) , ϕ≠ 0,π and Γ_CPS = 0; 𝒪(η^2) , ϕ = π and Γ_CPS = 0; 𝒪(η^2) , ϕ≠ 0,π and Γ_CPS = Γ_S; ; Γ_L Γ_R (Γ_L^2+(ε_L+w)^2) (Γ_R^2+(ε_R+w)^2)/[ Γ_L (Γ_R^2+(ε_R+w)^2) +Γ_R (Γ_L^2+(ε_L+w)^2) ]^2 + 𝒪(η) , ϕ = π and Γ_CPS = Γ_S ]. , 𝒯_RR^eh(0) = {[ 0 , ϕ = 0; 𝒪(η^2) , ϕ≠ 0,π and Γ_CPS = 0; 1 + 𝒪(η) , ϕ = π and Γ_CPS = 0; 𝒪(η^2) , ϕ≠ 0,π and Γ_CPS = Γ_S; ; Γ_R^2 (Γ_L^2 +(ε_L+w)^2 )^2 /[ Γ_L (Γ_R^2 + (ε_R+w)^2) + Γ_R (Γ_L^2 + (ε_L+w)^2) ]^2 + 𝒪(η) , ϕ = π and Γ_CPS = Γ_S ]. 
, which results in the transmissions 𝒯_11(0) = {[ 𝒪(η^4) , ϕ = 0 and Γ_CPS = 0; 2 + 𝒪(η) , ϕ = π and Γ_CPS = 0; 𝒪(η^2) , ϕ≠ 0,π and Γ_CPS = 0; ; 4 Γ_L Γ_R/(Γ_L+Γ_R)^2+(ε_L+ε_R+2 w)^2 + 𝒪(η) , ϕ≠π and Γ_CPS = Γ_S; ; 2 Γ_R (Γ_L^2+(ε_L+w)^2)/Γ_L ( Γ_R^2 + (ε_R+w)^2 ) +Γ_R (Γ_L^2+(ε_L+w)^2) + 𝒪(η) , ϕ = π and Γ_CPS = Γ_S ]. , 𝒯_22(0) = {[ 𝒪(η^4) , ϕ = 0 and Γ_CPS = 0; 𝒪(η^2) , ϕ = π and Γ_CPS = 0; 𝒪(η^4) , ϕ≠ 0,π and Γ_CPS = 0; ; 4 Γ_L Γ_R/(Γ_L+Γ_R)^2+(ε_L+ε_R+2 w)^2 + 𝒪(η) , ϕ≠π and Γ_CPS = Γ_S; ; 2 Γ_L Γ_R (Γ_L^2+(ε_L+w)^2) (Γ_R^2+(ε_R+w)^2)/[ Γ_L (Γ_R^2+(ε_R+w)^2) +Γ_R (Γ_L^2+(ε_L+w)^2) ]^2 + 𝒪(η) , ϕ = π and Γ_CPS = Γ_S ]. . We see that the electrical conductance is negligible at ϕ≠π for Γ_CPS = 0 and it is quantized with G = 2G_0 at ϕ = π, similar to the result in Ref. <cit.> for a topological superconductor-normal metal junction. Furthermore, we see that G is completely determined by pure local Andreev reflection at the MZM. As presented in Fig. <ref>(a), this conductance quantization is very robust for a wide range of 0 ≤Γ_CPS≤Γ_S, with a suppression only for strong Γ_CPS→Γ_S. For Γ_CPS = Γ_S, the electrical conductance at ϕ≠π follows the standard Lorentzian result for electron cotunneling through a resonant level at the energy ε_L+ε_R+2 w <cit.>. Hence, it is completely dominated by electron cotunneling through the resonant level that is visible in Fig. <ref> in the main text. At ϕ = π, the general electrical conductance is a mixture of all tunneling processes. However, for the symmetric choice of parameters ε_L = ε_R and Γ_L = Γ_R, the conductance is G = G_0, which is also shown in Fig. <ref>(a). On the other hand, as shown in Fig. <ref>(b), the thermal conductance at ϕ = π is negligible at weak CPS. This is due to the absence of LAR processes in the thermal conductance. Similar to G at Γ_CPS = Γ_S, the precise value is not quantized and it is generally a mixture of EC and CAR contributions. However, for the symmetric choice of parameters ε_L = ε_R and Γ_L = Γ_R, the thermal conductance is K = K_0/2, which is also shown in Fig. <ref>(b). In contrast to the topological case, the transverse transmission functions for a conventional Josephson junction are smooth functions of the phase difference ϕ. We will not present the similar results for the s-wave case since the general results are (i) too cumbersome and (ii) not particularly insightful. However, it is worth noting that, similar to the topological Josephson junction at ϕ = 0, LAR and CAR are zero at ϕ = π for conventional Josephson junctions due to the symmetric choice of the couplings to the superconductors.
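As a numerical companion to Secs. I–III, the following sketch assembles the dressed Green's function of the double dot for the s-wave configuration with symmetric couplings and evaluates the EC, CAR and LAR transmissions; the parameter names are ours, the phase convention assumes φ_T = 0 and φ_B = ϕ, and the spinless p-wave case follows analogously by swapping in the p-wave boundary Green's function.

```python
import numpy as np

tau0 = np.eye(2, dtype=complex)
tau3 = np.diag([1.0, -1.0]).astype(complex)

def pair_block(phi):
    # e^{i phi tau3} tau1 = [[0, e^{i phi}], [e^{-i phi}, 0]]
    return np.array([[0, np.exp(1j * phi)], [np.exp(-1j * phi), 0]], dtype=complex)

def transmissions(eps, phi, eps_L, eps_R, w, Gamma_N, Gamma_S, Gamma_CPS, Delta=1.0, eta=1e-3):
    """EC, CAR and LAR transmissions of the s-wave double-dot junction per spin
    (spin degeneracy contributes an overall factor of 2), for symmetric couplings
    and phase difference phi = phi_B - phi_T with phi_T = 0."""
    z = eps + 1j * eta
    root = np.sqrt(Delta ** 2 - z ** 2)
    def lead_sigma(coupling):
        # s-wave self-energy block induced by the top (phi_T = 0) and bottom (phi_B = phi) leads
        return sum(-coupling * (z * tau0 - Delta * pair_block(p)) / root for p in (0.0, phi))
    H = np.kron(np.array([[eps_L, -w], [-w, eps_R]]), tau3).astype(complex)
    Sigma = np.zeros((4, 4), dtype=complex)
    Sigma[:2, :2] = -1j * Gamma_N * tau0 + lead_sigma(Gamma_S)     # (L,L) block
    Sigma[2:, 2:] = -1j * Gamma_N * tau0 + lead_sigma(Gamma_S)     # (R,R) block
    Sigma[:2, 2:] = Sigma[2:, :2] = lead_sigma(Gamma_CPS)          # nonlocal (CPS) blocks
    Gr = np.linalg.inv(z * np.eye(4) - H - Sigma)                  # dressed retarded Green's function
    # Nambu/site basis ordering: (L,e), (L,h), (R,e), (R,h); G^a = (G^r)^dagger
    T_ec  = 4 * Gamma_N ** 2 * abs(Gr[2, 0]) ** 2                  # electron cotunneling  (R,e)<-(L,e)
    T_car = 4 * Gamma_N ** 2 * abs(Gr[2, 1]) ** 2                  # crossed Andreev       (R,e)<-(L,h)
    T_lar = 4 * Gamma_N ** 2 * abs(Gr[2, 3]) ** 2                  # local Andreev         (R,e)<-(R,h)
    return T_ec, T_car, T_lar

# example: transmissions at eps = 0.2*Delta and phi = pi/2
print(transmissions(0.2, np.pi / 2, eps_L=0.1, eps_R=0.1, w=0.2,
                    Gamma_N=0.1, Gamma_S=0.2, Gamma_CPS=0.2))
```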
http://arxiv.org/abs/2306.02156v1
20230603165251
Effects of Imperfections on Quantum Algorithms: A Software Engineering Perspective
[ "Felix Greiwe", "Tom Krüger", "Wolfgang Mauerer" ]
cs.ET
[ "cs.ET" ]
Effects of Imperfections on Quantum Algorithms: A Software Engineering Perspective Felix Greiwe, Technical University of Applied Sciences Regensburg, Regensburg, Germany ([email protected]) Tom Krüger, Technical University of Applied Sciences Regensburg, Regensburg, Germany ([email protected]) Wolfgang Mauerer, Technical University of Applied Sciences Regensburg and Siemens AG, Technology, Regensburg/Munich, Germany ([email protected]) July 31, 2023 Quantum computers promise considerable speedups over classical approaches, which has raised interest from many disciplines. Since any currently available implementations suffer from noise and imperfections, achieving concrete speedups for meaningful problem sizes remains a major challenge. Yet, imperfections and noise may remain present in quantum computing for a long while. Such limitations play no role in classical software computing, and software engineers are typically not well accustomed to considering such imperfections, although they substantially influence core properties of software and systems. In this paper, we show how to model imperfections with an approach tailored to (quantum) software engineers. We intuitively illustrate, using numerical simulations, how imperfections influence core properties of quantum algorithms on NISQ systems, and show possible options for tailoring future NISQ machines to improve system performance in a co-design approach. Our results are obtained from a software framework that we provide in the form of an easy-to-use reproduction package. It does not require computer scientists to acquire deep physical knowledge on noise, yet provides tangible and intuitively accessible means of interpreting the influence of noise on common software quality and performance indicators. Keywords: noisy quantum computing, NISQ systems, quantum software engineering, HW-SW co-design § INTRODUCTION Quantum computing promises improvements and computational speedups over classical approaches for many tasks and problems, which include cryptography <cit.>, machine learning <cit.>, optimisation <cit.>, or simulating chemical and physical systems <cit.>. This has raised considerable interest across scientific communities, including (quantum) software engineering—programmable quantum computers and appliances will, eventually, involve software in one form or another.
Given the current state of available noisy intermediate scale quantum (NISQ) hardware <cit.>, actual quantum advantages are rarely seen (except for specially crafted problems <cit.>). While error correction techniques for quantum computers exist, the required hardware resources exceed current system dimensions by many orders of magnitude <cit.>. Therefore, imperfections in quantum computers will be present in the forseeable future, and it is important for SW engineers and researchers to be aware how low level effects like noise influence software qualities. Given these conditions, evaluating, characterising and predicting functional and non-functional properties of quantum software is a complex, multi-facetted endeavour that requires catering for many details, many of them are unaccustomed from classical software engineering. It is becoming increasingly clear that possible performance benefits of quantum systems will be available only under particular circumstance that concern both, algorithms and hardware. Seemingly straightforward approaches (or naive analogies with classical systems and software) can quickly lead to bogus, unreliable or downright wrong statements that mis-characterise potential benefits of quantum approaches. Our paper is intended to help software engineers develop a realistic expectation regarding the performance of quantum algorithms under noise, on different types of hardware. We provide illustrative examples that show impacts on a number of seminal (classes of) algorithms—Grover search, quantum Fourier transform, and variational quantum circuits. Our main contributions are as follows: * We provide a self-contained exposition on modelling noise and imperfections tailored at computer scientists and software engineers to create a tangible bridge between fundamental physics and non-functional properties conventionally employed in software engineering. * Using numerical simulations, we show the influence of typical imperfections on multiple seminal algorithms for different hardware classes, and provide an intuitive understanding on potentials of hardware-software co-design for future quantum computing systems. * We provide a reproduction package <cit.> on the (link in PDF)[DOI-compliant version: <https://doi.org/10.5281/zenodo.8001512>] that allows software engineers to quickly evaluate how noise and imperfections influence their designs, without having to acquire a deeper understanding of low-level physical details. Our contribution intends to increase awareness in the quantum software community on the impact of noise and imperfections on algorithmic performance, but also on the opportunities of co-designing future (NISQ) systems whose properties are favourable for specific classes of applications. In contrast to existing work, our paper places stronger focus on providing self-contained instructions on how to understand and model imperfections, and how to judge their influence on key qualities or requirements of software and software architectures. This is, for instance, required to support a well-informed discussion on finding proper levels of abstraction needed to decouple peculiarities of QPUs as good as possible, yet should not stand in the way of utilising the computational power provided by QPUs. Likewise, knowledge of imperfections at a reasonable level of detail can help researchers to avoid placing inflated expectations on the capabilities of quantum approaches. 
The code in the reproduction package is based on the open source framework Qiskit <cit.>, and does not depend on any real quantum hardware or proprietary compilers, which makes it accessible to a wide audience. It not only enables researchers to easily re-create our results, but has, instead, especially been designed to enable researchers and software engineers to extend it with own algorithms (and test/benchmarking cases), and study them under the influence of various types of noise and gate sets, without having to manually implement the required physics-centric evaluation mechanisms. The remainder of this paper is structured as follows: After reviewing related work in quantum software engineering and quantum noise in Sec. <ref>, we discuss important characteristics of current QPUs, as well as particularly relevant open hardware challenges, in Sec. <ref>. We provide a gentle introduction to modelling noise and imperfections tailored towards software engineers in Section <ref>, and illustrate these considerations by discussing their impact on several seminal quantum algorithms in Sec. <ref>, followed by a discussion of the implications for software engineering in Sec. <ref>. We conclude in Sec. <ref>. § RELATED WORK Since quantum software engineering is in its initial stages (yet, Piattini  <cit.> go as far as to proclaim a new golden age of software engineering), the available literature still is sparse, and noise and imperfections are ignored in (or deemed irrelevant for) many expositions that concentrate on possible future higher-level abstractions to quantum software engineering and quantum programming. For instance, Perez-Castillo  <cit.> discuss how to extend the unified modeling language to quantum circuits. Similarly, Gemeinhardt  <cit.> suggest model-driven quantum software engineering as an abstraction that extends established SWE methods. Differences between quantum and classical engineering in terms of bug patterns are studied by Campos and Souto <cit.>, as well as Zhao <cit.>. Zhao <cit.> provides a detailed review of the available literature. Piattini  <cit.> suggest principles for the future development of quantum software engineering, and highlight hybrid algorithms and the desirable independence of specific quantum software frameworks. Leymann  <cit.> focus on often ignored aspects of imperfections in quantum computing. Structured approaches for benchmarking software on quantum computers are considered by Becker  <cit.> and Tomesh  <cit.>; in particular, the approach by Resch  <cit.> especially highlights the importance of choosing appropriate noise models. Salas  <cit.> consider noise effects on Grover's algorithm and state error thresholds. The performance of NISQ-era variational quantum algorithms, particularly in the QAOA family, has been subject to intensive research; recent results include Refs. <cit.>. Other application fields like machine learning (see, , Refs.<cit.>) have received similar consideration from an algorithmic benchmarking and performance analysis point of view. Interestingly, it is known that noise need not necessarily be detrimental, but can also contribute improvements, as recent research (, Refs. <cit.>) demonstrates. The physics-centric literature on quantum noise is extensive, and reaches considerably further back; the seminal exposition by Gardiner and Zoller <cit.> contains many of the fundamental results. 
Noise and imperfections in all possible implementation platforms for quantum computers from a physical point of view have likewise been considered in substantial depth and breadth, for which Bharti  <cit.> provide a review. Characterising the capabilities of quantum computers is, in general, an active field of research: Considerations based on cross entropy <cit.> and quantum volume <cit.> consider properties of random circuits, and aim at a generically usable comparison metric that is applicable across implementation techniques, but does not allow for deriving concrete statements on algorithms or use-case scenarios. Application oriented benchmarks (, Refs. <cit.>), and other domain-specific (, <cit.>) or generic (, Refs. <cit.>) approaches, consider more concrete perspectives, but often use techniques that are unaccustomed for software engineers. We aim, in contrast, at a correct, yet tangible and algorithm-oriented approach that is accessible and useful for the software engineering community. § QUANTUM HARDWARE AND HW CHALLENGES One major challenge in quantum computing is to provide an isolation between the fragile quantum bits that carry quantum information, and are used to perform computations on, and the surrounding environment. Interactions between qubits and the environment lead to the loss of quantum information (decoherence), and therefore degrade the quality of computational processes. Likewise, operations on one or more of the qubits that perform the actual computation may be imperfect, and usually implement a transformation that is only close to the actual specification, including random variations. Both aspects do not occur in classical systems (or can be very well countered), and correspondingly, software engineers (outside, probably, highly specialised domains like safety-critical engineering) need not be concerned with the corresponding phenomena. It is still unclear which basic physical concepts will provide the basis of future quantum computers. A multitude of possible approaches are currently developed and investigated, including systems based on trapped ions, neutral atoms, superconducting semiconductor-based implementations, or photonic systems. We chose two common architectures of commercial interest to highlight the essential, far-reaching differences in their physical implementation that, as we will argue in this paper based on numerical simulations, substantially impact many properties of systems that are relevant to software engineering. §.§ Physical Foundations §.§.§ Trapped Ions By using an electromagnetic field to hold ions together in a trap, they serve as building blocks to realise qubits by using stable (internal) electronic states of the ions together with so-called (external) collective quantised motion states of all ions assembled in the trap. Laser pulses are used to control and couple the internal and motion states, realise single- and multi-qubit gates, and cool (slow down) the ions to motional lowest-energy states (see. , Ref. <cit.> for details). A salient characteristic of the motional coupling that affects all quantum bits is that two-qubit operations can be executed between any two qubits, which means that the system provides a fully-meshed coupling graph. Likewise, very high gate fidelities and coherence times can be achieved, as summarised in <ref>. Despite laser cooling of the involved ions, the overall system operates at room temperature <cit.>. 
In contrast to these advantages, gate execution times are comparatively large; operations require microseconds of processing time. Additionally, it is not straightforward to scale trapped ion systems to more than, say, 100 qubits, while maintaining the motional coupling between ions. Noise in gate application arises from variations of intensities and phases of of the lasers involved, but also from external electromagnetic fields that cannot be completely shielded off. §.§.§ Superconducting Transmons Superconducting quantum computing exploits quantum mechanical properties of macroscopic structures that stem from Cooper electron pairs that form at very low temperatures in superconductors. The need for such low temperatures is a disadvantage compared to trapped ion systems. Also qubits are coupled to qubits in their direct neighbourhood, and achievable coherence times are orders of magnitudes lower. However, gate times in superconducting devices are in the nanosecond range, and larger systems (in terms of qubits count) can be built compared to trapped ion system. Finally, the manufacturing process can benefit from established industrial semiconductor know-how. §.§ Challenges Several limitations of current quantum computers extend across physical realisations and need to be considered when designing quantum software components, or when planning experiments to judge feasibility or scalability of proposed quantum architectures. Some of the limitations are specific to NISQ systems, others concern intrinsic limitations of quantum computers and algorithms that need to be taken into account for any consideration relating to quantum software engineering or architecture. Noise (, effects of imperfect quantum information representation and manipulation), limited connectivity between qubits, and gate timing characteristics offer substantial potential for future engineering improvements; they can be seen as hardware parameters to a certain extent from a SWE point of view. We study the respective potentials in detail the next section. §.§.§ Noise Quantum states are fragile, and operations on such states require involved physical manipulation techniques that are hard to implement perfectly—any real-world implementation slightly deviates from a theoretically desired perfect operation. Likewise, information in quantum states is perturbed by interaction with an outside environment, which is unavoidable because of the need to interact with and manipulate the states to perform computations. The (in)stability of quantum states and quantum operations is characterised by established measures that we discuss below; representative measures for three different commercially accessible platforms are shown in Table <ref>. The coherence times T_1 and T_2 indicate how resilient the information stored in qubits is against perturbations (longer times are better). T_1 gives the average time it takes a qubit to relax from |1⟩ to state |0⟩ (bit flip). The stability of the relative phase in a superposition state |+⟩ = 1/√(2) (|0⟩+|1⟩) is quantified by T_2, providing the average time after which |+⟩ has evolved into an equal-probability classical mixture of |+⟩ and |-⟩ = 1/√(2)(|0⟩-|1⟩) (phase flip). The quantities e_1 = 1- F_1 and e_2 = 1 - F_2 describe error rates for one- and two qubit gates, and relate to the average gate fidelities F_1/2, which measures gate quality (an exact definition follows later). 
Similarly, TG_1 and TG_2 denote average gate times of one and two qubit gates, whereas n specifies the number of available qubits, and C is coupling density (, the average fraction of degree of connections between qubits; 100% for a fully meshed graph that represents physical all-to-all connectivity). The systems characterised in Table <ref>, albeit they only represent a fraction of the current variation in implementation technologies, [It would have been desirable to include additional vendors and approaches in our simulations. Yet at the time of writing, public availability of the corresponding low-level data is scarce, and many vendors are reluctant to publish specific values. While we rely on vendor-reported error rates for the available data, it needs to be kept in mind that details of how these numbers were obtained are not always clearly specified. Since (commercial) vendors might be interested in a favourable representation of their products, any simulations based on these numbers should be used as indicators, not as absolute and scientifically verified performance measures.] exhibit widely varying characteristics that are not straightforward to translate into established quality, performance, or scalability indicators, as they are typically considered in software engineering. Therefore, empirical characterisation and a generic understanding of the impact of imperfections on software qualities, as we address it in this paper, seems indispensable. We obtained the low level metrics for system from so called FakeBackends which are embedded in Qiskit and are based on snapshots of their systems. For the special case of the Z and Rz gate, we set the respective gate time and error in our noise model to zero, since these gates are implemented virtually by all represented vendors. For circuit depth estimation, these gates are not considered either. For the low level metrics were taken from <cit.>. Since the data for T_1 coherence has a wide range we decided on a value of 50s in our simulations for . Vendor provides error data obtained via randomised benchmarking as an online resource <cit.>. and , to the best of our knowledge, do not state exactly whether their Rz gates are considered in calculation of average error rates for one qubit gates. Thus, the average error rates might be slight overestimations when compared to . This is negligible since the two qubit gates introduce errors with higher impact. <ref> shows error rates for quantum gates on different contemporary hardware approaches. All architectures are affected by noise, which limits the achievable depth of quantum circuits, and thus the computational power. Yet, there is no common noise pattern across systems, which makes most statements about performance and behaviour of quantum algorithms impossible without taking very specific hardware details into consideration, in stark contrast to classical machines. As mentioned in <ref>, for quantum error correction to work the requirement for thousands of qubits <cit.> arises which will most likely not be possible on a larger scale in the near future. §.§.§ Connectivity Logical quantum algorithms usually make arbitrary (pairs of) qubits interact when multi-qubit operations are applied. Most physical implementations of quantum systems place restrictions on the possible interactions between pairs of qubits. 
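To make the connectivity restrictions tangible, the following minimal sketch (plain Qiskit, not the reproduction package; the linear chain below is a hypothetical topology, not any vendor's actual coupling map) transpiles the same logical circuit once against a fully meshed and once against a linearly connected device and compares the resulting depths:

```python
# Sketch: effect of restricted connectivity on circuit depth (illustrative only).
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

n = 5
qc = QuantumCircuit(n)
qc.h(0)
for target in range(1, n):      # CNOTs from qubit 0 to every other qubit
    qc.cx(0, target)

basis = ["rz", "sx", "x", "cx"]
full = transpile(qc, coupling_map=CouplingMap.from_full(n),
                 basis_gates=basis, optimization_level=1)
line = transpile(qc, coupling_map=CouplingMap.from_line(n),
                 basis_gates=basis, optimization_level=1)

print("depth on full mesh:   ", full.depth())
print("depth on linear chain:", line.depth())   # larger: swap gates were inserted
```

On the linear chain, the transpiler has to insert swap operations for the distant CNOTs; this is exactly the depth overhead discussed next.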
One important step in translating a logical into a physical quantum circuit is to (a) map interacting logical qubits to interconnected physical qubits, and (b) ensure that operations performed on unconnected qubits (if placement alone cannot guarantee this) are enabled by moving qubits into proximity prior to gate execution. While qubits cannot (for most implementation technologies) be moved physically, applying allows us to change to logical state between two connected qubits. This allows, at the expense of increasing circuit depths, to bring two unconnected qubits into connected positions, and then apply a joint gate operation. For architectures that do also not natively support logical swap gates, a replacement can be provided by three X gates, at the expense of an even larger increase in circuit depth. <ref> compares topologies for some of the major available quantum architectures. We illustrate their influence on algorithms in Section <ref>. §.§.§ Gate Sets The set of elementary quantum gates varies considerably with implementation technology. Universal quantum computation can be achieved with many different choices. While the theoretical capabilities of each set are identical, the practical behaviour of gates under the influence of noise may vary distinctly. Executing identical algorithms on different hardware does therefore not only influence computation times (as is familiar from classical computing), but is also affected by different influence of noise. <ref> illustrates elementary gate sets for the subject architectures. §.§ Subject Algorithms We have chosen three canonical, yet substantially different algorithms to study the impact of noise and imperfections: Grover search, quantum Fourier transform, and variational quantum circuits. It is possible to prove speedups over their classical counterparts for the first two algorithms, albeit these only materialise for perfect, error-corrected quantum systems. Their scalability in terms of circuit depth growth with increasing input size is distinctly different. The third class, variational quantum circuits, is speculated to exhibit speedups over classical approaches under some credible assumptions, and are particularly well suited for NISQ hardware and empirical experiments, as they allow for very shallow circuits. Yet, practically relevant speedups have still failed to materialise on a wider front in current systems. §.§.§ Grover Search Grover's algorithms allows for finding specific elements in an unstructured search space. Simply put, the algorithm iteratively repeats two sub-circuits: An oracle to mark the desired element in a search space, and a rotation in a two dimensional plane. For inputs of n qubits, the required number of iterations scales with 𝒪 (√(2^n)) <cit.>, which provides a quadratic speedup compared to the best classical search algorithms—yet, this speedup is relative to exponential growth. Since the algorithm matches a wide class of application problems, it can seem tantalizing to seek free quantum improvements by deploying the algorithm as drop-in search replacement in existing scenarios. However, there are some pitfalls to consider: Grover search does usually not, despite common perception, query an actual physical database encoded in quantum states, [While it would be possible in principle to apply Grover search on top of quantum random access memory (QRAM), this would result in a quadratic speedup for a search task on an exponentially growing search space, which is usually irrelevant industrial settings. 
Other data loading alternatives exhibit similar difficulties. Approximate encoding techniques <cit.>, together with shallow variants of Grover <cit.>, or improvements in the amplitude amplification process <cit.> might lead to fruition, but underline that judging non-functional software characteristics is impossible without accounting for technical and physical details that can be ignored in classical approaches. Given that QRAM is invariably harder to manufacture than classical RAM, a scenario where the former can fully replace the latter seems hardly credible.] but evaluates an efficiently computable function f that acts as predicate to identify one or more optimal elements in a search space. This implies costs (especially in terms of circuit depth) to implement this target function using quantum operators, which may be non-trivial <cit.>. Many practical applications are either interested in average case complexity, or enjoy some structure in their search space that can be (also heuristically) used to speed up processing, which places considerable limitations on practical utility. Replacing classical primitives with Grover search is a commonly used pattern (in database research <cit.>) in efforts to utilise quantum computing, and is sometimes backed up by evaluations on small-scale NISQ machine. We study the limited utility of this approach in Sec. <ref>. By determining the probability of measuring the desired element of the search space as output, we can associate a success probability with runs of Grover search. Our implementation is inspired by  <cit.>. §.§.§ Quantum Fourier Transform The quantum Fourier transform is as a building block for many quantum algorithms, most notably Shor's factoring algorithm. It is a computational analogue of the (discrete) classical Fourier transformation, albeit there are pronounced differences in obtaining the results, as the probability of reading a Fourier coefficient is related to its magnitude, and applications that require access to the full transformation do not benefit from quantum advantage. Yet, QFT requires exponentially fewer operations than FFT. §.§.§ Variational Quantum Circuits The algorithmic family of variational quantum algorithms comprises circuits that contain gates whose properties are controlled by a tunable parameter. After feeding an input state through a circuit and measuring the output, the parameter settings are adjusted in a training process—not unlike classical machine learning—, and the process is repeated, until some desired target function is approximated with sufficient quality. The approach is versatile and well suited for experimentation on NISQ machines, particularly because of controllable circuit depth. An example for a variational circuit (as we employ it in the below experiments) is shown in Fig. <ref>. For data encoding, which comprises the left part of our circuit, we follow the example of Ref. <cit.>, where the given encoding proves to be well suited for function approximation. The parameterized part of our quantum circuit is one of several building blocks for variational quantum circuits, the authors of  <cit.> investigate in their work. While the variational part of our circuit could be repeated several times, we stick to one single parameterized layer in our experiments. For our simulations we train the variational circuit to mimic the behaviour of the function f(x)=x^2, extending an example of <cit.> for different noise levels. 
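The concrete circuit of Fig. <ref> is not reproduced here; the following sketch (plain Qiskit, with illustrative gate choices of our own rather than the paper's exact encoding) shows the general structure of such a circuit: a data-encoding rotation followed by a single trainable layer, with a measured expectation value serving as the model output.

```python
# Schematic stand-in for a variational circuit: encoding + one trainable layer.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter, ParameterVector
from qiskit.quantum_info import SparsePauliOp, Statevector

x = Parameter("x")                      # data input
theta = ParameterVector("theta", 4)     # trainable parameters (initialised to 0)

qc = QuantumCircuit(2)
qc.ry(x, 0)                             # data encoding
qc.ry(x, 1)
qc.ry(theta[0], 0)                      # one parameterised (trainable) layer
qc.ry(theta[1], 1)
qc.cx(0, 1)
qc.rz(theta[2], 0)
qc.rz(theta[3], 1)

# model output: expectation value of Z on the second qubit, in [-1, 1]
# (Qiskit orders Pauli strings with qubit 0 as the rightmost character)
bound = qc.assign_parameters({x: 0.3, **{p: 0.0 for p in theta}})
output = Statevector(bound).expectation_value(SparsePauliOp("ZI")).real
print(output)
```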
We will now quickly summarize the procedure: We call the parameters of the circuit θ_i ∈θ, all of which start at θ_i=0 as initial parameter values. For simplicity we choose to measure only the qubit |q_1⟩ in the Z basis and subsequently extract the expectation value of the observable ⟨ M_j ⟩_θ∈ [-1,1] for the training input x_j and the current parameters. For sample based approaches the approximation of ⟨ M_j ⟩_θ requires a reasonable choice of circuit estimations, that is, the sample number. Based on the quadratic loss function for iteration k, L(x_j,θ^(k)) = 1/2(⟨ M_j ⟩_θ^(k) - f(x_j))^2, we update θ in every iteration using gradient descent θ^(k+1) = θ^(k) - η∂ L/∂θ. We calculate the derivative of the loss function using the parameter shift rule <cit.> ∂ L/∂θ_i = 1/2(⟨ M_j ⟩_θ^(k) - f(x_j)) ( ⟨ M_j ⟩_θ^(k)+e⃗_i π/2 - ⟨ M ⟩_θ^(k)-e⃗_i π/2), where e⃗_i is the a unit vector for component i. After carrying out the partial gradient calculation for every parameter, the accumulated gradient ∂ L/∂θ is used in the parameter update. Our simulations fix the number of training iterations at 100, and take uniformly spaced samples x_j ∈ [-1,1], which are permuted with a (fixed) random seed. Results are therefore comparable between architectures and noise variants. § MODELLING NOISE AND IMPERFECTIONS In the following, we outline the theoretical concepts necessary to describe and understand how noise (, the influence of uncontrollable external factors) impacts quantum calculations. We aim at an exposition that is accessible to software engineers without deeper involvement in quantum physics, yet sufficiently accurate to paint a realistic picture that allows for drawing reliable conclusions. §.§ Mixed States and Density Operators The notion of quantum states in software engineering usually refers to pure states |ϕ⟩∈C^2^n, that is, states represented by a unit vector in a 2^n dimensional complex vector (Hilbert) space. Sometimes, however, it is not possible to obtain full knowledge of the state of a system. Consider, for example, the case where we take an (educated) guess whether some external influence that is not under our control (in other words, noise) flipped qubit x_i or not. In this scenario, the system is in state |x_i⟩ with probability p or in state | x_i⟩ with probability 1-p. Generalising this concept delivers a probability distribution ([ |ϕ_1⟩ |ϕ_2⟩ …; p_1 p_2 … ]) of different possible system states. If a system follows such a distribution it is said to be in a mixed state. Mixed states contain two probabilistic aspects: (a) Stochastic outcomes resulting from measurements have their origin in the very properties of quantum theory. (b) Purely classical uncertainty about the state that arises from a lack of knowledge of external confounding factors (noise). When influences beyond our active control modify a quantum state, we need to express the (classical) uncertainty arising from the scenario with classical probabilities p_i. One convenient way to express mixed states is the density matrix formalism <cit.>, which describes a collection of quantum states |ϕ_i⟩ mixed up with classical probabilities p_i as ρ = ∑_i p_i |ϕ_i⟩⟨ϕ_i|, where ⟨ϕ| denotes, for finite-dimensional systems, the conjugate transpose of |ϕ⟩, turning the object |ϕ⟩⟨ϕ| into a matrix. Application of a unitary operator U, which represents a computational step, to a mixed state ρ is described by ρ→ U ρ U^†. 
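As a small numerical illustration of the density-matrix formalism (using Qiskit's quantum_info module; the states chosen are arbitrary examples), the following sketch contrasts a classical 50/50 mixture of |0⟩ and |1⟩ with the pure superposition |+⟩, and applies a unitary to both:

```python
import numpy as np
from qiskit.quantum_info import DensityMatrix, Operator

# classical 50/50 mixture of |0> and |1>: rho = 0.5|0><0| + 0.5|1><1|
rho_mixed = DensityMatrix(np.diag([0.5, 0.5]))
# pure superposition |+><+|
rho_plus = DensityMatrix.from_label("+")

print(rho_mixed.purity(), rho_plus.purity())   # 0.5 (mixed) vs. 1.0 (pure)

# unitary evolution rho -> U rho U^dagger, with U = Hadamard
H = Operator(np.array([[1, 1], [1, -1]]) / np.sqrt(2))
print(rho_mixed.evolve(H))   # unchanged: the maximally mixed state I/2
print(rho_plus.evolve(H))    # becomes the pure state |0><0|
```

The mixed state is left untouched by the Hadamard, while the pure superposition is rotated to |0⟩; the purity values distinguish the two cases.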
In the following, transformations of the density matrix will be used to describe the evolution of quantum states subject to noise. §.§ Dynamics of Noisy Quantum Programs The evolution of a closed quantum system is described by unitary operations. A noisy system, on the contrary, is open to an external environment (the source of noise). The trick to model open noisy systems is to include the environment, such that one ends up with a bigger but closed quantum system. Let ρ be the state of an open quantum system of interest, which we will call the principal system. Now, we combine ρ with the state of its environment ρ_env. The new system ρ⊗ρ_env is closed and its evolution U (ρ⊗ρ_env) U^† can be described by a unitary operator U. Tracing out the environment reveals the evolution of ρ under noise: ℰ(ρ) = Tr_env(U (ρ⊗ρ_env) U^†). The partial trace is a tool in the density matrix formalism to discard certain parts of a quantum mechanical system; for more information we refer to <cit.>. Note that the quantum operator ℰ(ρ) is not necessarily unitary anymore. Let ℬ_e = {|e_k⟩}_k be a basis of the environment. Now, if the environment is measured in ℬ_e after the time evolution, then the outcome determines the state of the principal system. We end up with a random distribution of states for the principal system depending on the measurement. The effect the environment had on ρ when the outcome k occurred can be described by an operator E_k <cit.>, leading to a mixed state description ρ↦∑_i E_i ρ E_i^†, with ∑_i E_i^† E_i = 𝟙. §.§ Fidelity In this work we mainly focus on the effect of noise when performing computations, that is, gate errors. Consider a quantum operation ℰ(ρ) describing the noise impact on ρ. Under the influence of noise, a pure state ρ = |ψ⟩⟨ψ| evolves to ℰ(ρ) = ∑_i p_i |ψ_i⟩⟨ψ_i|. We can now calculate ⟨ψ|ℰ(ρ)|ψ⟩ = ∑_i p_i ⟨ψ|ψ_i⟩⟨ψ_i|ψ⟩ = ∑_i p_i |⟨ψ|ψ_i⟩|^2, which measures the overlap between |ψ⟩ and ℰ(ρ). This can be seen as a measure of how much information is preserved under noise. In a perfect noiseless environment ℰ(ρ) = |ψ⟩⟨ψ| = ρ, preserving all the information, and correspondingly ⟨ψ|ℰ(ρ)|ψ⟩ = |⟨ψ|ψ⟩|^2 = 1. On the other hand, the more ℰ(ρ) turns |ψ⟩ in the direction of an orthogonal state |ψ^⊥⟩, the more information gets lost, and ⟨ψ|ℰ(ρ)|ψ⟩ approaches 0 as ℰ(ρ) goes to |ψ^⊥⟩⟨ψ^⊥|. The measure F = ⟨ψ|ℰ(ρ)|ψ⟩ is commonly denoted as the fidelity;[Different definitions of fidelity are given in the literature; as the general characteristics are identical, we only consider one variant in this paper.] it is useful to judge how much the result of a noisy quantum computer deviates from the result of a perfect machine. The deviation of imperfect quantum gates (or other components) from a perfect implementation is not directly captured by fidelity: Gates do not operate on a fixed input state. Their degree of deviation from a perfect gate is state-dependent, so determining gate fidelity for a single state is insufficient to characterise quality. Instead, the average fidelity describes the mean over individual gate fidelities for all quantum states.[How to compute the average of a desired quantity over all possible quantum states of a system is an interesting problem on its own that we cannot discuss in detail; see, for instance, Ref. <cit.> for more information.] The values shown in <ref> represent average fidelities. §.§ Noise Models Having introduced the general modelling principles for noisy quantum systems, we can now describe specific types of noise that represent typical physical imperfections.
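As a hands-on illustration of the fidelity measure, the following minimal sketch (plain Qiskit; the error probability is an arbitrary example value) applies a simple example channel, a probabilistic bit flip as formally introduced in the next subsection, to |0⟩ via its Kraus operators and evaluates ⟨ψ|ℰ(ρ)|ψ⟩:

```python
import numpy as np
from qiskit.quantum_info import DensityMatrix, Kraus, state_fidelity

p = 0.1
X = np.array([[0, 1], [1, 0]])
bit_flip = Kraus([np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X])   # Kraus operators E_k

psi = DensityMatrix.from_label("0")    # pure reference state |0><0|
noisy = psi.evolve(bit_flip)           # E(rho) = (1-p) rho + p X rho X

# for a pure reference state, state_fidelity reduces to <psi|E(rho)|psi>
print(state_fidelity(psi, noisy))      # 0.9
```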
§.§.§ Bit Flips A probabilistic qubit flip <cit.> is given by ρ↦ (1-p) 𝟙ρ𝟙^† + p X ρ X^†. The operation applies the Pauli X gate (bit flip) to the one-qubit system ρ with probability p, and else leaves the state as is. Note how the above equation is one way to choose the operators E_k in (<ref>). Similarly, the phase flip error (Pauli Z gate) and the bit-phase flip error (Pauli Y gate) can be constructed by replacing X by Z or Y in Eq. (<ref>). §.§.§ Depolarisation One commonly used error in simulation is the completely depolarising operator on one qubit, which randomly applies the Pauli operators X, Y, Z <cit.>, ρ ↦ (1-p) 𝟙ρ𝟙^† + p/4(𝟙ρ𝟙^† + Xρ X^† + Yρ Y^† + Zρ Z^†), with a certain probability, and else leaves the qubit as is. A quick calculation reveals that (<ref>) equals ρ↦ (1 - p) ρ + p 𝟙/2, where 𝟙/2 is the density matrix representing the state of a system being in every basis state with equal probability. Hence, the system either stays intact, or all information gets destroyed with probability p. For an n-qubit system we get <cit.>: ρ↦(1-p)ρ + p 𝟙/2^n. §.§.§ Thermal Relaxation The thermal-relaxation error models the decoherence of a quantum system over time. The derivation of the associated quantum operator is significantly less straightforward, so we refer to Refs. <cit.> for a derivation. §.§.§ Hardware-Matched Composite Noise We close by incorporating hardware metrics into a noise model (limited to stochastic gate noise, and ignoring measurements, state preparation, idle noise and coherent error models). We construct three noise models from the data in <ref>.[Our approach is a simplified version of the integrated error models for FakeBackends provided by Qiskit <cit.>.] The idea is to use a composite error consisting of thermal relaxation and depolarisation for every gate, such that the average gate fidelities of the real hardware match the model <cit.>. For our simplified version we only distinguish between one- and two-qubit gates, but do not introduce per-gate errors, or errors depending on individual qubit quality. While this renders the model less accurate than, for instance, FakeBackends, it allows us to apply it to hardware for which detailed quality data are not publicly available. The construction goes as follows: We define an error operator ℰ = ℰ_D ∘ℰ_R, combining the depolarising error ℰ_D (<ref>) and the thermal relaxation error ℰ_R. The fidelity of ℰ_D is given by <cit.> F_D = 1 - p(1-2^-n), and the fidelity of the thermal relaxation channel ℰ_R can be calculated from its parameters, namely T_1, T_2 and the gate time T_G. We tune our model to match target fidelities F_targ found in the literature. Using (<ref>), the composition of the depolarising and the thermal relaxation channel is (ℰ_D∘ℰ_R)(ρ) = (1-p) ℰ_R(ρ) + p 𝟙/2^n. The fidelity denotes the overlap with a pure reference state |ψ⟩, F(ℰ_D∘ℰ_R) = ⟨ψ|(1-p) ℰ_R(ρ) + p 𝟙/2^n|ψ⟩ = (1-p) ⟨ψ|ℰ_R(ρ)|ψ⟩ + p ⟨ψ|𝟙/2^n|ψ⟩, where we use the linearity of the inner product in the second argument. The first term equals (1-p)F(ℰ_R), while the second term is p F(ℰ_D|_p=1) = p 2^-n, which corresponds to the fidelity of the depolarising channel evaluated at p=1. We can express the target fidelity as a function of p and solve F_targ = F(ℰ_D ∘ℰ_R) = (1-p)F_R + p 2^-n ⇔ p = (F_R - F_targ)/(F_R - 2^-n). This ensures the composition matches the vendor target fidelity F_targ given in <ref>.
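A possible implementation of this calibration step, in plain Qiskit/Aer rather than the code of our reproduction package, could look as follows; T_1, T_2, the gate time and the target fidelity are placeholder values rather than real vendor data, and evaluating F_R numerically via a channel conversion is merely one convenient choice:

```python
# Sketch: thermal relaxation composed with a depolarising error whose strength p is
# tuned so that the combined average gate fidelity matches F_targ (placeholder numbers).
from qiskit.quantum_info import average_gate_fidelity
from qiskit_aer.noise import NoiseModel, depolarizing_error, thermal_relaxation_error

t1, t2, t_gate = 100e-6, 80e-6, 50e-9     # seconds (illustrative, not vendor data)
f_target = 0.999                          # target average fidelity of a 1-qubit gate
n = 1                                     # number of qubits the error acts on

relax = thermal_relaxation_error(t1, t2, t_gate)
f_relax = average_gate_fidelity(relax.to_quantumchannel())     # F_R, obtained numerically

p = (f_relax - f_target) / (f_relax - 2 ** -n)                 # p = (F_R - F_targ)/(F_R - 2^-n)

combined = relax.compose(depolarizing_error(p, n))
print(average_gate_fidelity(combined.to_quantumchannel()))     # approximately f_target

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(combined, ["sx", "x"])
```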
§.§ Measurement and Sampling It is textbook knowledge that measuring quantum states results in probabilistic outcomes; a natural question that needs to be addressed to characterise algorithmic performance is how many samples are required to achieve acceptable trust in outcomes. Given an error margin ϵ that we are willing to accept, and a desired confidence δ for the sampled probabilities to fall within this margin of error, a lower bound on the required number of samples s can be determined by invoking the variant s ≥1/2ϵ^2log(2/δ) of the Höffding inequality (see, , <cit.>), to arrive at meaningful statements. Note that the numerical experiments considered in Sec. <ref> simulate the complete density matrix, from which exact measurement statistics can be deterministically extracted. Therefore, no sampling noise as it would arise for real hardware is contained in the plots. Accessing the density matrix on real hardware is possible, but requires an (experimentally involved) quantum state tomography that measures a complete set of observables whose expectation values determine the density operator (a number of measurements exponential in the system size is required, albeit less costly approximations are possible <cit.>). Consequently, sampling is unavoidable to characterise quantum algorithms on real systems, and the above considerations guide software engineers on what temporal overheads to expect. § NUMERICAL SIMULATIONS We now commence with illustrating the concrete effects of the various modes of imperfection on the subject algorithms, and show how they crucially affect many algorithmic properties that are directly relevant for software engineering. We deliberately base our considerations on two seminal, very well understood algorithms despite their known non-usefulness on NISQ hardware, as it allows us to focus on the influence of noise instead of having to consider peculiarities of the subject algorithms. This is important to approach the topic from a tangible, concrete software engineering perspective that links the influence of noise with what computer scientists are well acquainted with: The performance analysis of algorithms. <ref> summarises the simulation procedure as implemented in the reproduction package (available on the ). First, a quantum circuit is generated, together with a noise model characterised by type of noise and strength, as specified by the input parameters. Then, the logical circuit is translated into a physical representation for one of the supported hardware platforms, which comprises a user-customisable topology and gate set, and does not necessarily correspond to a real physical platform, albeit we focus on the platforms characterised in <ref>. This allows users to consider tailored designs that match the requirements of problems of interest, and can guide (co-)development of quantum hardware. The reproduction package extends standard means provided by Qiskit with gate sets for and , as well as methods for mapping base circuits onto these.[Our results are obtained from the Qiskit density matrix backend, which provides complete and accurate results, and is not restricted with respect to simulating noise. We use a standard gradient descent optimiser to iteratively improve parameters for variational algorithms.] §.§ Noise and Scalability We start by considering scalability with respect to input size and noise strength. 
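The following stripped-down sketch conveys the flavour of such a scalability experiment (plain Qiskit/Aer rather than our reproduction package; the Grover construction marks the all-ones state, and the depolarising error rates are illustrative, not fitted to any vendor):

```python
# Success probability of Grover search under uniform depolarising noise, per input size.
from math import floor, pi, sqrt
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

def grover_circuit(n: int) -> QuantumCircuit:
    """Grover search marking the all-ones state |1...1>."""
    qc = QuantumCircuit(n)
    qc.h(range(n))
    for _ in range(max(1, floor(pi / 4 * sqrt(2 ** n)))):
        # oracle: phase flip on |1...1>
        qc.h(n - 1); qc.mcx(list(range(n - 1)), n - 1); qc.h(n - 1)
        # diffuser: reflection about the uniform superposition
        qc.h(range(n)); qc.x(range(n))
        qc.h(n - 1); qc.mcx(list(range(n - 1)), n - 1); qc.h(n - 1)
        qc.x(range(n)); qc.h(range(n))
    qc.measure_all()
    return qc

noise = NoiseModel()
noise.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["h", "x", "sx"])  # rz noise-free (virtual)
noise.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])
sim = AerSimulator(noise_model=noise)

shots = 4096
for n in range(2, 7):
    tqc = transpile(grover_circuit(n), basis_gates=["rz", "sx", "x", "h", "cx"])
    counts = sim.run(tqc, shots=shots).result().get_counts()
    print(n, counts.get("1" * n, 0) / shots)   # success probability of the marked state
```

Even with these optimistic per-gate error rates, the measured success probability falls off quickly as the circuit depth grows with the input size.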
While it is well known that Grover Search and QFT can not be realistically deployed on NISQ machines, it is instructive to see how detrimental effects of noise on their performance are. <ref> simulates success probabilities using the inherent native machine noise for both algorithms. For all hardware architectures, the success probability quickly drops; Grover search does not produce valid solutions for more than six qubits on any architecture. This clearly indicates that experimental evaluations that are intended to show the practical functionality Grover-based approaches carry little merit; the principal functioning of Grover's algorithm is very well understood, but it is hardly possible to extrapolate any meaningful statements from such measurements. It is also instructive to consider how the slight differences in fidelity (recall <ref>) lead to comparatively large differences in success probabilities for the different vendor platforms. As <ref> shows, success probability and state fidelity are essentially identical. While it is possible to define a successful target state in the computational basis for Grover and QFT, this does not necessarily hold for other approaches, and does also not extend to approximation cases where closeness to an ideal state is sought. Fidelity (as a continuous measure of closeness) is a useful replacement for success probability. The inset in the top part of <ref> illustrates the increase in circuit depth with increasing input size; <ref> further below provides the same information for QFT (for now, only consider the elements for native connectivity). Circuit depths for Grover increase exponentially, and exceed values of 1000 for more than 6 qubits, resulting in zero success probability. The increase for QFT is more relaxed, with depths of around 150 for all architectures at 11 qubits of input. The differences in success probability highlight circuit depth as key performance (and feasibility) indicator, which should therefore be at the core interest of software engineers. §.§ Noise Variants The previous examples are based on noise (inspired by the characteristics of real hardware), which combines effects of different physical processes, as we have outlined above. We now evaluate individual contributions of elementary noise types that may provide guidance in designing future quantum systems. We apply these to a shallow variational quantum circuit, which is shown in <ref>. Parameters are trained using the procedure described in Sec. <ref>.[Note that the choice of hyper-parameters (, step size), as well as the classical optimiser influence algorithmic performance. While different hyper-parameters might be beneficial for different noise levels or noise methods, we did not consider such an optimisation in the scope of this paper.] The training loss for Pauli-X, Y, and Z, as well as a depolarising channel, is shown in <ref> for different levels of noise over 100 training iterations. The loss converges to zero once parameters have been satisfactorily learned; the simulations show that increasing noise strength impedes this process differently depending on the noise type. While bit flips (Pauli-X) are particularly obstructive, platform like trapped ion systems are particularly resilient against this type of noise <cit.>, which might be a relevant criterion for choosing an underlying quantum platform for a software system. 
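For reference, the elementary channels compared here can be assembled as interchangeable noise models roughly as follows (a sketch in plain Qiskit/Aer; the affected gate names and the error strength are illustrative choices, not the exact configuration of our experiments):

```python
# Elementary single-qubit noise channels packaged as interchangeable noise models.
from qiskit_aer.noise import NoiseModel, pauli_error, depolarizing_error

def make_noise_model(kind: str, p: float) -> NoiseModel:
    if kind == "depolarizing":
        err = depolarizing_error(p, 1)
    else:  # "X" (bit flip), "Y" (bit-phase flip) or "Z" (phase flip)
        err = pauli_error([(kind, p), ("I", 1 - p)])
    nm = NoiseModel()
    nm.add_all_qubit_quantum_error(err, ["ry", "rz", "sx", "x"])
    return nm

models = {kind: make_noise_model(kind, 0.01) for kind in ("X", "Y", "Z", "depolarizing")}
```

Each of these models can be passed to the simulator backend used for training, so that the same variational circuit is optimised under different noise types and strengths.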
The right hand side of <ref> illustrates how predictions obtained from the trained models degrade with increasing amounts of noise, providing tangible means of judging the effects of different types and strengths of noise. Both influence result quality in different ways, and software engineers will need to decide (by choice of hardware) which variants are best tolerable for particular use-cases considering domain knowledge and requirements. §.§ Connectivity and Gate-Sets Finally, let us elaborate on effects of connectivity structure and gate sets under the influence of noise. For given architectures, the combination is fixed, but co-designing systems that optimise either for specific algorithms is seen as one possible path towards quantum advantage <cit.>. Software engineers should be aware of possible future design opportunities. The top part of <ref> fixes the gate sets for each vendor, but shows circuit depth scaling for both, the native connectivity structure and a fully connected architecture (hypothetical for and , standard for ). Owing to the need to insert swap gates for and , depth grows super-linearly with native connectivity, but increases linearly with a full mesh, substantially reducing circuit depth.[Manufacturing a fully meshed connectivity graph is extremely challenging for semiconductor-based approaches. However, as Refs. <cit.> show, even small additions to existing connectivity structures can result in major improvements in circuit depth.] For , circuit depth increases quicker than for the other architectures. This can be attributed (based on manual inspection of the generated circuits) to weaknesses of the circuit translator that maps logical to physical circuits, [Native compilers might improve results, yet do not satisfy our goal of providing open and reproducible means of obtaining simulation results.] which stresses the importance for software engineers to place greater emphasis on low-level details like compiler performance that is only of marginal interest for many classical SE tasks. The bottom part of <ref> illustrates some additional effects: A fully coupled connectivity structure combined with fixed per-gate error rates for all vendors isolates the effects of vendor specific base gates, and in particular, of compiling to them. Here, the performance of base gates is mostly due to our sub-optimal transpiler, as the resulting larger circuit depth gives more opportunity to pick up noise. Software engineers must be aware of a possibly complex interplay of factors when evaluating algorithmic quantum performance. § IMPLICATIONS FOR SOFTWARE ENGINEERING We have illustrated how noise and imperfections impact NISQ performance, and that a certain amount of knowledge of the underlying mechanisms is necessary for proper interpretation. From the (quantum) software engineering point of view, imperfections influence if and how non-functional requirements can be satisfied. In particular, they affect scalability, performance, testability, and cost. The relation to the first two qualities has already been intensively discussed above. Testing outcomes of quantum algorithms needs to deal with two aspects of uncertainty: Measurements leading to stochastic outcome distributions, and imperfections in gates and components. While probabilistic behaviour is well established in classical computing <cit.>, physical imperfections have found little consideration in SW testing to the best of our knowledge. 
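One ingredient of test design under stochastic outcomes is choosing the number of shots; the Hoeffding bound quoted in the sampling section above translates directly into a small helper (a sketch; the tolerance and confidence values are examples):

```python
# Shots needed so that an estimated outcome probability lies within +/- eps of its
# true value with confidence 1 - delta, via s >= log(2/delta) / (2 * eps^2).
from math import ceil, log

def required_shots(eps: float, delta: float) -> int:
    return ceil(log(2 / delta) / (2 * eps ** 2))

# e.g. a unit test asserting a success probability to within two percentage points,
# wrong in at most 1% of test runs:
print(required_shots(0.02, 0.01))   # 6624 shots
```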
To appropriately design noise-aware tests and judge test results, software engineers need to be able to understand quantum noise at a sufficient level of detail. The impact of noise and imperfections on quantum software could be ignored if perfect, large-scale QPUs with error correction were available. No physical reasons prevent designing the required systems, but many engineering challenges make specifying concrete roadmaps towards this goal challenging. Even if such systems can eventually be built, practical industrial applications will not just be judged by performance considerations, but by their overall cost-benefit trade-off, which is among the core concerns of any engineering discipline. Consequently, since imperfect error correction and error mitigation schemes <cit.> will likely result in less costly machines, it seems reasonable to assume that such machines will co-exist with perfect quantum computers, given they can solve certain tasks advantageously over classical computers. For instance, Liu <cit.> prove exponential speedups for certain types of quantum machine learning on fault-tolerant machines, which are believed to be extensible to NISQ machines using error mitigation (Hubregtsen <cit.> study training embedding kernels on NISQ machines). Some properties of quantum states and circuits require explicit consideration in designing new and extending existing test methodology: Not just the stochastic nature of quantum measurements, but also the impact of imperfections makes defining desirable test results hard, as it is necessary to distinguish these measurement-induced variations from variations induced by noise and imperfections. Guidelines that eliminate the need for individual software engineers to be aware of statistical peculiarities could be established. § CONCLUSION Using a reproducible and extensible empirical simulation approach, we illustrated how noise and imperfections affect the properties of quantum algorithms on existing and hypothetical NISQ hardware. A solid understanding of such effects is useful not only for researchers and engineers working on hardware implementations, but also for software engineers. Yet, it seems unreasonable to equip every software engineer or SWE researcher with detailed physical knowledge on noise. We instead provide a suitably detailed introduction to the topic, accompanied by an easy-to-use replication package that allows software engineers to explore the influence of noise with little effort. We deem this a crucial aspect in the endeavour of realising future quantum applications of practical benefit. § ACKNOWLEDGEMENTS This work is supported by the German Federal Ministry of Education and Research within the funding program Quantentechnologien – von den Grundlagen zum Markt, contract number 13NI6092.
http://arxiv.org/abs/2306.03142v1
20230605180007
Critical Matter
[ "Leo Radzihovsky" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "hep-th" ]
http://arxiv.org/abs/2306.11603v1
20230620153024
Semi-infinite construction of one-dimensional lattice vertex superalgebras
[ "Timur Kenzhaev" ]
math-ph
[ "math-ph", "math.MP" ]
http://arxiv.org/abs/2306.09068v1
20230615115547
Behaviorally Typed State Machines in TypeScript for Heterogeneous Swarms
[ "Roland Kuhn", "Alan Darmasaputra" ]
cs.DC
[ "cs.DC" ]
A heterogeneous swarm system is a distributed system where participants come and go, communication topology may change at any time, data replication is asynchronous and partial, and local agents behave differently between nodes. These systems are hard to design and reason about, mainly because we desire a particular class of behaviors to emerge from the interplay of heterogeneous individual agents. Nevertheless, mission-critical operations like manufacturing process orchestration in factories use such systems due to their uncompromising availability and resilience of computing services. This paper presents a set of TypeScript libraries to model peer-to-peer workflows as state machines, execute them using the Actyx middleware, and check the shape of these machines for conformance to a swarm protocol. The swarm protocol describes an idealized global view of the cooperation of machines of different roles. It directly corresponds to a diagram a product manager would sketch on a whiteboard; this allows for verifying that the coded state machines correctly implement the product specification. A well-formed swarm protocol also guarantees that conforming machines will achieve eventual consensus on the overall state progression even in the absence of further coordination. This tool is for developers of business logic for heterogeneous swarm systems, helping them verify that their protocols and implementations are correct. Tool repo: <https://github.com/Actyx/machines> § INTRODUCTION Picture an automated factory shop floor: autonomous vehicles are buzzing to move materials, parts, and finished goods between machines and smart warehouses. Participants of various kinds come and go dynamically and collaborate locally without centralized planning and oversight. All communication is local and topology changes over time; groups form and dissolve around collaborative workflows; interruptions of participants or communication links are frequent, brief, and local—the rest of the swarm continues to function; participants may fail-stop or be newly introduced. This setting is a heterogeneous swarm, similar to local-first cooperation <cit.>. For parties who employ such systems, like programmers or product designers, the goal is to ensure that desired outcomes are eventually achieved; e.g. the factory shall produce a given set of items. To prescribe the precise order and message types for communication between all participants is too rigid an approach and curtails the needed concurrency. Instead, we describe the structure of ephemeral collaboration within a smaller group of nodes so that such workflows proceed as designed; more formally, to ensure eventual consensus <cit.> on the overall workflow progression. The tool presented in this paper aims to assist the process of designing, programming, and verifying such ephemeral collaboration. For that, we apply the theory of swarm protocols <cit.> using the Actyx <cit.> middleware; we improve the usability of the tool presented in that paper by moving from an external analysis of the TypeScript <cit.> code to a type-driven internal DSL, supporting IDE content assist as well as the behavioral checking within the usual unit testing cycle.
[Our running example] A machine has finished its assigned production step and requests the fleet of logistics robots to pick up the workpieces. We use an auction mechanism, illustrated below, to select a logistics robot without requiring full oversight. [Figure omitted: the auction sequence (request broadcast, robot bids, selection) between the machine and the logistics robots.] First, the machine broadcasts a request (including the specifics like the location and the item being picked up). Then, available and capable robots place bids (accompanied by scoring data such as estimated time of arrival and energy consumption). Finally, the machine selects one of the bids. The example demonstrates all essential features: sequential transitions, cycles, and choice without unique selector (notice that both the machine and all robots can concurrently act in the auction state). In a real implementation, the workflow will be extended with timeouts, full logistics request execution tracking, or capturing on the protocol level which robot won the auction. These extensions are achievable with the tool's current feature set and will merely increase the size of the state machine. §.§ Brief overview of the theory Swarm protocols <cit.> are used to formalize the communication model of the Actyx middleware: new events are appended to a log owned by the local node. These logs are eventually replicated to other Actyx nodes. A coordination-free total order of events is derived from a Lamport clock <cit.> paired with a unique identifier for the emitting node; on this basis available logs are merged, sorted, and presented for local queries. The paper then describes local agents as state machines that consume the locally available events to arrive at a state that may enable commands; events that are not expected in the current state are discarded. Invoking a command is the process by which events are created and appended to the log, eventually leading to state updates throughout the swarm. The second part of the paper introduces swarm protocols as a description of how a set of machines works together to implement a workflow (like the one in Example <ref>). Such a protocol is also described as a state machine—transitions labelled with command, role, and event log—assuming synchronous execution on all nodes to determine its intended semantics. Each role can be played by any positive number of machines, where the machine's state transition structure is prescribed to be equivalent to the projection of the protocol for the machine's role. Since commands are enabled for specific roles and event types are filtered for each role by a subscription, the resulting machines differ in shape across the swarm. This asymmetry implies that machines of different roles may interpret different parts of the replicated and merged event log, coming to divergent conclusions on which branches the overall workflow has taken. In other words, it is in general not the case that the logs produced by the non-coordinated execution of a set of correct machines will result in eventual consensus <cit.> among the swarm. This notion of consensus here means that there is a sequence of transitions of the swarm protocol that, when projected to the machine level, matches the local transition sequence as soon as sufficient log prefixes are replicated. The main result of the paper is that for a swarm protocol satisfying a set of well-formedness conditions, every execution of correspondingly correct machines produces event logs that achieve eventual consensus. § OVERVIEW OF THE TOOL
The tools presented in this paper are two TypeScript libraries available via :  <cit.> contains a DSL for describing a finite state machine as well as a facility for instantiating, evaluating, and interacting with such a machine, and  <cit.> provides functions for verifying a swarm protocol's well-formed­ness as well as validating a finite state machine against the swarm role it is supposed to play. §.§ Declaring machines Declaring a state machine proceeds by first creating event factories for every event type relevant to the swarm protocol. These factories are used to construct events from a data payload and a type name, yielding a plain JavaScript object tagged with a !type! property. The factory object also represents the event type at runtime, in particular for validating whether a given received event is of this type. [name=main] const requested = Event.design('requested') .withPayload< id: string; from: string; to: string >() const bid = Event.design('bid') .withPayload< robot: string; delay: number >() const selected = Event.design('selected') .withPayload< winner: string >() Next, event factories are bundled into a !SwarmProtocol!, which provides the context in which a state machine is declared for each participating role. [name=main] const transportOrderEvents = [requested, bid, selected] as const const transportOrder = SwarmProtocol.make('transportOrder', transportOrderEvents) const OrderForRobot = transportOrder.makeMachine('robot') const TransportOrderForMachine = ... Here, !OrderForRobot! is a builder for the state machine representing the “robot” role in our swarm protocol. We proceed by first declaring its states, including their respective commands. [name=main] export const Initial = OrderForRobot.designState('Initial')(*@@*) .withPayload< robot: string >().finish() export const Auction = OrderForRobot.designState('Auction') .withPayload< id: string; from: string; to: string; robot: string; scores: Score[] >() .command((*@@*)'bid', [bid], (ctx, delay: number) => [ robot: ctx.self.robot, delay ](*@@*)) .finish() export const DoIt = OrderForRobot.designState('DoIt') .withPayload< robot: string, winner: string >.finish()(*@@*) Note how the command handler defined on line <ref> constructs the emitted !bid! event payload from both the current state and the !delay! argument: while the !ctx! argument is automatically passed in by the machine runner, the !delay! needs to be provided by the code invoking the command. The final remaining piece now is the declaration of the state machine's transitions. Each state can react to any number of event type sequences, keeping in mind that the first type in the list must be unique within that state since it selects the transition (i.e. we model deterministic state machines). [name=main] Initial.react([requested], Auction,(*@@*) (ctx, r) => ( ...ctx.self, ...r.payload, scores: [], ))(*@@*) Auction.react([bid], Auction, (ctx, b) => ctx.self.scores.push(b.payload); return ctx.self) Auction.react([selected], DoIt, (ctx, s) => (robot: ctx.self.robot, winner: s.payload.winner)) Line <ref> and similar mark the structure of the machine, e.g. that a !requested! event may take it from the !Initial! into the !Auction! state, while line <ref> and similar describe the computation of the target state payload, with !ctx! again being the current state passed in by the machine runner. Take note that the state machine requires !requested! and !selected! events to progress from state !Initial! 
to state !DoIt!, but there are no corresponding commands emitting these event types. As per the state diagram in example <ref> these events shall be emitted by the “machine” role, so they will show up in commands declared on !TransportOrderForMachine! once that is implemented; this is the heterogeneity that our tool natively supports. One noteworthy aspect of the API demonstrated above is that it clearly separates the persistent data model (i.e. the event types) from the ephemeral in-memory data model (i.e. the state payload). This allows the user to place related runtime information in the ephemeral state, as shown with the !robot! property that identifies the current machine within its role; an alternative model would remove this property and require the robot name to be passed into the !bid! command handler. §.§ Executing machines The code below shows how the application code instantiates a state machine. A call is made to !createMachineRunner! with a connection to Actyx, event tags to uniquely identify this workflow instance (i.e. protocol session), the initial state, and its corresponding payload. The value !'4711'! on line  <ref> is the identifier of this workflow. The value !'agv1'! assigned to the !robot! property is the identifier of the robot; take note that the last argument of !createMachineRunner! matches the type of the payload of !Initial! defined previously. [name=main] const manifest = appId: 'acm', displayName: 'acm', version: '1' const conn = await Actyx.of(manifest) // connect to local Actyx const tags = transportOrder.tagWithEntityId('4711')(*@@*) const machine = createMachineRunner(conn, tags, Initial, robot: 'agv1' ) The application program interacts with the swarm through the state machine by observing its state and invoking the offered commands. The returned state machine instance !machine! in the code above is a JavaScript !AsyncIterable! which allows interaction via the !for await (...)! loop as shown in line  <ref> below. State values such as !const state! in line  <ref> are opaque objects describing the union of all the machine's possible states. [name=main] let IamWinner = false for await (const state of machine) (*@@*) if (state.is(Auction)) (*@@*) const open = state.cast() const payload = open.payload if (!payload.scores.find((s) => s.robot === payload.robot))(*@@*) await open.commands?.bid(1)(*@@*) else if (state.is(DoIt)) (*@@*) const assigned = state.cast().payload IamWinner = assigned.winner === assigned.robot(*@@*) if (!IamWinner) break // now we have the order and can start the mission To observe or interact, the object needs to be downcast into a desired state using !.is(...)! and !.cast(...)! as shown in lines  <ref> and  <ref>, providing type-safe access to payload and commands. F.e. the command invocation on line  <ref> results in a !bid! event, whose payload is calculated by the function shown on line  <ref> and the argument provided by the caller; the event is then sent to Actyx for publication. Event publishing is asynchronous, hence we !await! the successful result of invoking the !bid()! command—if the local Actyx node weren't running then this would result in a runtime exception. Published events are then returned via the standing event subscription for the machine and consumed via the reaction handlers, possibly leading to state transitions. We decided to design the system so that unexpected events (i.e. 
not matching any transition) are discarded and passed to an optional hook; this hook allows the application to compensate for previous actions that have been invalidated by eventual consensus. Commands are dynamically disabled after invocation as well as while waiting for further events to complete a multi-event state transition, hence the !?.! call syntax. The async loop iterates when there is an incoming state transition, but only after the previous iteration, if any, has finished executing. The !state.is(Auction)! and !state.is(DoIt)! branches depict how the robot (i.e. the application) decides what to do when it observes the state after said transitions. The aforementioned command call is the robot placing a bid for the transport request, essentially interacting with the swarm. §.§ Analyzing swarm protocols and machines The machines declared above contain all needed information to apply our theory <cit.> (cf. Section <ref>): the machine builder DSL stores the states, commands, and event transitions corresponding to the syntactic elements of machine types. This allows the machine check library to provide two essential verification steps, ideally used within the application code test suite. The first checks if the swarm protocol given the event subscription implemented in the code achieves eventual consensus without further coordination. The second checks whether the TypeScript state machine code correctly implements its assigned role in the swarm protocol. [name=main] const protocol = (*@@*) initial: 'Initial', transitions: [ source: 'initial', target: 'auction' label: cmd: 'request', logType:['requested'], role:'machine', , source: 'auction', target: 'auction' label: cmd: 'bid', logType:['bid'], role: 'robot', , source: 'auction', target: 'doIt' label: cmd: 'select', logType:['selected'], role: 'machine', ](*@@*) const robotShape = OrderForRobot.createJSONForAnalysis(Initial)(*@@*) const machineShape = ... const subs = robot: robotShape.subscriptions, machine: machineShape.subscriptions expect(checkSwarmProtocol(protocol, subs)) .toEqual( type: 'OK' ) expect(checkProjection(protocol, subs, 'robot', robotShape)) .toEqual( type: 'OK' ) On lines <ref>–<ref> above we declare the workflow structure corresponding to Example <ref>; one array item corresponds to one transition arrow with source, target, and a label detailing role, command name, and emitted event log type. Line <ref> demonstrates how to extract the machine type from the machine implementation, yielding both the state machine graph and the inferred event subscriptions. We then use !checkSwarmProtocol! to verify the well-formedness of the protocol under the implemented subscription, which guarantees eventual consensus. If that succeeds, !checkProjection! computes the machine projection from the swarm protocol and compares it to the extracted state machine graph from TypeScript code; this structural comparison verifies that the implemented transition graph will process sequences of event types in the same way as the projected one, and that it enables the same set of commands in each visited state. Any discrepancy will be listed in an !'ERROR'! result variant, giving the programmer a clear indication of what is wrong. § VALIDATION OF THE APPROACH The tools presented above are currently being used and evaluated in the context of the EU HorizonEurope project “TaRDIS”. 
As part of the TaRDIS deliverables we will assess how developer productivity, confidence, and defect rate are improved over previously used tools, including centralized approaches using for example MQTT <cit.> and relational databases, or the Actyx Pond library <cit.> which uses the same underlying event dissemination technology as this tool. This assessment will be carried out using a case study and a set of experiments. The case study has already started at a machine and plant building company and involves several of that company's developers. After having implemented a working proof of concept using the Actyx Pond library (which requires manual checking of protocol correctness) they reimplement their peer-to-peer workflows using this tool, allowing comparative measurements of implementation time, effort, and defect rate. Further, they will implement new features (i.e. protocol changes) and compare the time and effort of deploying those in production with previous experience on modifying similar systems. In addition to the case study we plan to conduct experiments with groups of students at the institutes participating in TaRDIS, asking them to implement swarm participants conforming to given protocols using this tool and popular alternatives, as well as extending swarm protocols according to a feature specification. This will allow us to measure time, effort, and defect rate as well as surveying non-functional aspects like developer confidence, satisfaction, and subjective efficacy. The implementation work has been ongoing for several months already, we can preliminarily say that the more structured approach (compared to hand-rolled partial state updates in response to events) has led to productivity gains. The subjective confidence of the developers in the correctness of the written source code has increased dramatically. The latter is not unexpected since the application code previously needed to manually decide when to apply an event that arrived out-of-order and when to ignore or compensate it. Another result of the ongoing validation is that while our approach enables the programmer to recognize conflicts and if necessary compensate them, a more convenient and safer API would be desirable for handling all invalidated events of a discarded branch at once; currently each invalidated event is handled individually. § RELATED WORK Our work improves upon the established Actyx Pond library, whose “fish” abstraction only permits symmetric business logic on all participants and requires the user to handle all events at all times. Compared to the currently existing TypeScript libraries for formulating state machines (e.g. XState <cit.>), in addition to having the essential features (i.e state and transitions declaration and change observation), our tool focuses on enabling the construction of distributed state machines with built-in mechanisms for eventual-consistency. In addition to supporting behavioral type checking, our machine declaration DSL provides a very high level of type safety, e.g. capturing declared commands per state and thus avoiding invalid actions; high quality type definitions also enable high fidelity content assistance by the IDE, showing correct and detailed completions and error messages. In this brief paper we omit a comparison to state machine replication (like Paxos or Raft) because it is well-known <cit.> that strongly consistent systems become unavailable under network partitions whereas we aim for perfect availability. 
§ CONCLUSION We present two TypeScript libraries that aid developers in implementing heterogeneous swarm systems, in particular in adhering to the protocols designed between participants and in ensuring that those protocols achieve eventual consensus in the absence of further coordination. The tools are currently used to implement the orchestration of manufacturing processes on the factory shop floor, but they are not specific to this application domain: they are tailored to heterogeneous swarm systems in general. Ascertaining the emergence of the desired overall behavior from an uncoordinated swarm system is an unsolved problem. We carve out the smaller problem of analyzing asymmetric cooperation on non-adversarial workflows to validate correctness properties, projecting them down onto local agent code and helping the developer correctly implement and confidently modify such code. Emphasis has been placed on making full use of the capabilities of TypeScript to provide good error messages as well as precise context-sensitive completions in an IDE. The representation of swarm protocols and state machines is geared towards integration with graphical editors, for coders and managers alike, and users are free to implement their own behavioral type checking schemes. This work is partially funded by the European Union (TaRDIS, 101093006). We thank José Duarte and the anonymous reviewers.
http://arxiv.org/abs/2306.07013v1
20230612103136
Combining Reinforcement Learning and Barrier Functions for Adaptive Risk Management in Portfolio Optimization
[ "Zhenglong Li", "Hejun Huang", "Vincent Tam" ]
q-fin.PM
[ "q-fin.PM", "cs.CE", "q-fin.RM" ]
Zhenglong Li (corresponding author, email: [email protected]) and Vincent Tam, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; Hejun Huang, Department of Aerospace Engineering, The University of Michigan, Ann Arbor, USA. Reinforcement learning (RL) based investment strategies have been widely adopted in portfolio management (PM) in recent years. Nevertheless, most RL-based approaches emphasize pursuing returns while ignoring the risks of the underlying trading strategies, which may lead to great losses, especially under high market volatility. Therefore, a risk-manageable PM investment framework integrating both RL and barrier functions (BF) is proposed to carefully balance the need for high returns against acceptable risk exposure in PM applications. To the best of our knowledge, this work represents the first attempt to combine BF and RL for financial applications. While the involved RL approach may aggressively search for more profitable trading strategies, the BF-based risk controller continuously monitors the market states and dynamically adjusts the investment portfolio as a controllable measure for avoiding potential losses, particularly in downtrend markets. Additionally, two adaptive mechanisms are provided to dynamically adjust the impact of the risk controller so that the proposed framework can flexibly adapt to uptrend and downtrend markets. The empirical results clearly reveal these advantages over most well-known RL-based approaches on real-world data sets. More importantly, the proposed framework sheds light on many possible directions for future investigation. § INTRODUCTION In financial markets, investing in only a single asset brings huge uncertainties and risks once trading decisions deviate from the asset's changes. To diversify investment risks, investors are advised to allocate their capital to a set of assets of different natures during the trading period. However, as a fundamental financial problem, given a portfolio of financial products such as stocks, futures, options, and bonds, optimizing the ratios of assets in the portfolio to maximize returns at a low risk level is a challenge for all investors. According to the efficient market hypothesis <cit.>, and because the investment market is an incomplete-information game, numerous arbitrage opportunities exist in the financial market, but they are immediately filled in. Thus, accurately capturing the changes of assets by analyzing historical data is key to constructing profitable trading strategies for a portfolio. Inspired by Modern Portfolio Theory <cit.>, more advanced portfolio theories such as Capital Growth Theory <cit.> and the Black-Litterman Model <cit.> have been presented to adapt to actual financial markets with practical constraints. Yet in highly volatile financial markets, the traditional theories may not generate effective trading strategies due to their many strict assumptions. Over the past decade, machine learning and deep learning techniques have been introduced to manage portfolios by predicting price movements <cit.> or directly optimizing the weights of assets <cit.>. By discovering underlying patterns from historical market data in both microeconomics and macroeconomics, these intelligent methods have achieved higher earnings than traditional methods, in which trading signals are generated by simple combinations of handcrafted technical indicators.
Furthermore, in terms of the mechanism that trading agents execute orders after interacting with financial markets, more efforts have been made recently on applying Reinforcement Learning (RL) to optimize online portfolios by observing the current states of the trading environment in real time <cit.>. However, most existing RL-based portfolio optimization methods may hardly learn an effective and stable trading strategy due to the data efficiency. More specifically, since the highly volatile financial market leads to the market style frequently changing, the trained RL agents may not achieve success on the real-time environment as the distribution of the current data may shift. This will surely increase the uncertainty of portfolios and also bring high-risk exposures. Besides, most previous RL-based methods aim to maximize the long-term profit yet less take into account the short-term risk management of a portfolio, whereas in fact that fund managers are more concerned about investment risk exposures than returns in volatile financial markets. Despite having potential returns, the risky investment may lead to a high maximum drawdown in a short period, which is unacceptable to capital holders. In addition, to balance the returns and risks, some combined performance indicators like sharpe ratio and sortino ratio integrating returns and risks are used as the optimization target, but they cannot explicitly manage portfolio risks in a single transaction. To constrain the system dynamics within safe regions on automatic driving and robotics, <cit.> introduce Barrier Function (BF) based constraint controllers to adjust decisions generated by model-free RL algorithms where any risky action will be compensated for maintaining safe states while RL agents keep exploring policies with high rewards. Yet <cit.> uses linear programming in simple cases, potentially suffering from complex constraints. <cit.> employed sum-of-square programming to restrict the exploration of RL agents in polynomial systems, albeit at the expense of increased time costs. Furthermore, those RL-BF approaches do not satisfy the formulation of risk management in portfolio optimization. Moreover, the previous works strictly control the RL actions all the time for which they lack the flexibility to adapt to different scenarios. To both explore profitable strategies and reduce risk exposures throughout the trading period, a Risk-manageable Portfolio Optimization (RiPO) framework integrating both RL-based trading agents and BF-based risk controllers is proposed in the paper to achieve high long-term profits under acceptable short-term risks. First, by formulating a portfolio management problem as a Partially Observable Markov Decision Process (POMDP), a model-free RL framework is given to explore profitable trading strategies. Second, a BF-based risk controller is constructed by the second-order cone programming in terms of risk constraints, monitoring the potential investment risks brought by aggressive RL trading strategies and then adjusting the portfolios for avoiding huge losses. In addition, considering the risk aversion of investors and different market states, two flexible mechanisms named Adaptive Risk Strategy (ARS) and Dynamic Contribution Mechanism (DCM) are proposed to adjust the strength of risk constraints and the impact of risk controllers to the overall trading strategies for adapting to different market styles. 
In uptrend markets, the proposed framework relaxes risk constraints to pursue higher excess returns under acceptable risk levels. Conversely, the risk exposure will be strictly constrained to avoid potentially huge losses in downtrend markets. This will enhance the flexibility of the proposed framework to invest assets in actual highly volatile financial markets. The main contributions of the proposed RiPO framework are summarized as follows: * The RiPO framework is the first attempt to integrate RL and BF-based constraint programming for financial applications. The risky trading decisions generated by RL agents can be continuously monitored and adjusted for explicitly managing the risk exposures while keeping the exploration ability of RL approaches to search for profitable strategies. * Compared with the previous RL-BF methods only tested in simple cases, the risk controller of the proposed framework combines the second-order cone programming and BF-based constraints to formulate more complex applications in actual financial markets. By modeling the relationship between investment risks and acceptable risk ranges, the potential risks are effectively reduced particularly in downtrend markets. * Instead of completely dominating RL agents by controllers the whole time, two adaptive mechanisms in the RiPO are described to flexibly adjust the impact of risk controllers in terms of investor preference and market states, which earns higher returns by loosing risk constraints in uptrend markets but strictly manages risks in downtrend markets for reducing losses. Due to the nature of financial markets, it should be pointed out that portfolio risk management is not an absolute control that manages risks under any expected level in any case. In fact, the proposed framework is expected to avoid risky investments as possible so that the maximum drawdown and overall losses can be reduced. § PRELIMINARIES §.§ Portfolio Optimization Online portfolio management is a multi-period trading strategy that the capital is periodically reallocated to the selected assets. In this work, there are two assumptions listed below. The portfolio will be only considered from long positions in this work. The turnover rate of assets in a portfolio satisfies the requirements of each order execution. Assumption <ref> implies that investors cannot short assets unless they hold the long positions, while Assumption <ref> encourages the evaluation of the proposed framework more close to reality. Based on these considerations, two primary objectives in portfolio optimization task are given, return maximization and risk minimization, respectively. Some basic financial terms are introduced as follows: (Portfolio Value) The value of a portfolio at time t can be denoted by C_t = ∑_i=1^N w_t,i p^c_t,i, where N is the number of assets in a portfolio, w_t,i is the weight of an i^th asset, and p^c_t,i is the close price of i^th asset at time t. Therefore, the portfolio is constrained based on Assumption <ref> and <ref> as ∀ w_t,i∈𝐖_t: w_t,i≥ 0 ,∑_i=1^N w_t,i=1, where 𝐖_t∈𝐖 is the weight vector 𝐖 at time t. Definition <ref> implies that the risk would be varied in terms of purposes, the corresponding covariance-weight risk from the Markowitz model <cit.> and the volatility of strategies provided a view of the short-term risk and long-term risk. (Short-term Risk) The portfolio risk at time t can be presented as below σ_p,t = σ_β + σ_α,t σ_α,t = √(𝐖^T_tΣ_k 𝐖_t) = ‖Σ_k 𝐖_t‖_2. 
where σ_α,t is the trading strategy risk, σ_β is the market risk and 𝐖_t∈ℛ^N × 1 is the matrix of weights. The covariance matrix Σ_k∈ℛ^N× N between any two assets can be calculated by the rate of daily returns of assets in the past k days. (Long-term Risk) The strategy volatility is used to measure the portfolio risk in the whole trading period, which is the sample variance of daily return rate of the trading strategy. (Sharpe Ratio) The Sharpe Ratio (SR) is a usual performance indicator for evaluating a portfolio with the consideration of returns R, risk-free rate r_f and portfolio risk σ, which is given as SR = R-r_f/σ. Portfolio optimization has been studied in few decades. The technical analysis methods can be concluded into four categories including Follow-the-Winner, Follow-the-Loser, Pattern Matching Approaches, and Meta-Learning Algorithms <cit.>. They try to capture the price momentum by using handcrafted financial indicators. Recently, more investors are attracted by DL/RL technique. Except for the regular price data, <cit.> introduce news data to collect extra information for the portfolio management. In terms of model structures, <cit.> present specific modules to deal with assets information independently and also capture the correlations among assets. In addition, <cit.> adjusts portfolios and optimizes trading time points to achieve the online trading in minute levels. Nevertheless, most of the studies on portfolio optimization cannot explicitly constrain the investment risk exposure when using RL-based approaches to explore profitable strategies. §.§ Barrier Function Originally inspired from Lyapunov functions, barrier function is introduced to identify safe regions and drive controllers working inside the defined safe boundaries in control theory <cit.>. Assume that a system dynamic can be denoted as s_t+1 = f(s_t)+g(s_t)a_t+d(s_t), where s_t∈ S is the system state at t, a∈ A is the action at t, f:S→ S is the nominal unactuated dynamics, g:S→ A is the nominal actuated dynamics and d:S→ S is the unknown dynamics. A safe set C={s∈ S: h(s,a)≥ 0}, can be described by the superlevel set of a barrier function h:S→𝐑 in this dynamical system, where h is a continuously differentiable function and also satisfies that ∂ f/∂ s≠ 0 when h(s)=0. According to Nagumo's Theorem <cit.>, the safe set C will be forward invariant if there exists ∀ s∈ C, Δ h(s_t,a_t)/Δ t≥ 0 , where Δ t represents a time interval, and Δ h(s_t,a_t) = h(s_t+1)-h(s_t) when considering a discrete-time barrier function. Further, considering the relaxation for safe constraints with a locally Lipschitz class 𝒦 function K such that sup_a_t∈ A [h(s_t+1)-h(s_t)+K(h(s_t))] ≥ 0, If there exists a feasible action a_t satisfying the above BF-based constraint, then the system can be expected stay at the safe state at time t+1. This can fill the gap that the explored actions generated by RL agents may not concern the status of each state due to the long-term reward expectation. § PROBLEM FORMULATION Since financial markets are influenced by many factors like unpredictable black swan events and system risks, it is difficult to collect all relevant information for a perfect investment decision. Therefore, instead of directly capturing the actual market states that are the hidden states of financial markets, the trading strategies can only rely on part of observable market data. Typically, the meta observable market data are the historical prices and volumes of each asset in a portfolio. 
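Before formalizing the decision process, the quantities introduced so far can be made concrete. The following sketch is illustrative only, not part of the paper's released code: it uses synthetic prices in place of real market data, assumes the numpy package, and borrows the 21-day covariance window and 1.6575% annual risk-free rate quoted later in the experimental settings. It evaluates the short-term strategy risk and the Sharpe ratio for a long-only, equally weighted portfolio.

    import numpy as np

    rng = np.random.default_rng(0)
    N, k = 10, 21                                      # number of assets, covariance look-back in days
    close = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, (k + 1, N)), axis=0)  # synthetic close prices
    daily_ret = close[1:] / close[:-1] - 1             # rate of daily returns per asset
    W = np.full(N, 1.0 / N)                            # long-only weights, summing to one

    Sigma_k = np.cov(daily_ret, rowvar=False)          # N x N covariance over the past k days
    sigma_alpha = np.sqrt(W @ Sigma_k @ W)             # strategy risk sqrt(W^T Sigma_k W)

    r_f = 0.016575 / 252                               # daily risk-free rate (annual rate / 252)
    port_ret = daily_ret @ W                           # realized portfolio daily returns
    sharpe = (port_ret.mean() - r_f) / port_ret.std(ddof=1)
    print(f"sigma_alpha = {sigma_alpha:.4f}, daily Sharpe = {sharpe:.2f}")

Note that sqrt(W^T Σ_k W) equals the Euclidean norm of Σ_k^1/2 W, which is the form in which the strategy risk enters the BF-based risk controller introduced later.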
§.§ Partially Observable Markov Decision Process For simplifying the optimization process, it is assumed that the next actual market state s_t+1 solely depends on the current actual market state s_t as: p(s_t+1| s_t,s_t-1,…,s_1)=p(s_t+1|s_t),s∈ S, where p is the conditional probability and S is a finite set of actual states. Besides, the set of meta observable state of i^th asset at timestamp t can be denoted as o_t,i^meta={ p^o_t,i,p^h_t,i,p^l_t,i,p^c_t,i,vol_t,i}, where p^o_t,i,p^h_t,i,p^l_t,i,p^c_t,i are the open/high/low/close price, and vol_t,i is the trading volume. Furthermore, except for the meta observable states, some extra technical indicators derived from the o_t,i^meta will be introduced to be a part of observable market states to help analyze underlying patterns of market trends. Define o^tech_t,i={ k_t,i,1,k_t,i,2,…,k_t,i,j}, where o^tech_t,i is a set of technical indicators, and k_t,i,j is the j^th technical indicator of i^th asset at timestamp t. Beyond that, the current account status can be observed and be considered to make reasonable trading signals, which can be denoted as o_t^a = logC_t/C_init, where C_t is the current capital and C_init is the initial capital. In general, the portfolio management process can be modeled as a POMDP that can be defined as a tuple (S,A,T,R,Ω,O,γ), where S denotes a finite set of actual market states, A is a finite set of actions, T(s_t+1|s_t, a_t) denotes a set of conditional transition probabilities between s_t+1 and s_t under the action a_t, R(s_t+1|s_t,a_t) presents the reward function, Ω indicates a finite set of observable states, O(o_t+1|s_t+1,a_t) is a finite set of conditional observation probabilities between o_t+1 and s_t+1 under the action a_t, and γ∈[0,1) is the discount factor. The objective of portfolio management is to learn the decision policy π:S→ A that can maximize the expected total rewards at all timestamps. It can be defined as J(π^*) = max_π∈Π 𝐄_τ∼π[∑_t=1^∞γ^t-1R_t], where π^* is the optimal policy, τ∼π is a trajectory under the policy π and Π is the possible policy space. To further approximate the solutions of POMDP problems, the history of previous observations up to the current timestamp can be recognized as a pseudo-state to estimate the actual state through a mapping function ϕ:ℋ→ϕ(ℋ), where ϕ(ℋ)=ϕ(H)|H∈ℋ, H_t∈ℋ is the observation history up to timestamp t, ℋ is the space of all possible observable histories <cit.>. Furthermore, the POMDP problem can be reformulated as a tuple (Ŝ,A,T̂,R̂,γ). Specifically, Ŝ=ϕ(ℋ) is the estimated actual states, T̂(ŝ_t+1|ŝ_t,a_t) is the estimated transition function in which ŝ_t+1,ŝ_t∈Ŝ and a_t∈ A, R̂(ŝ_t+1|ŝ_t,a_t) is the estimated reward function, and the decision policy π: ϕ(ℋ)→ A with π∈Π. Accordingly, the bellman equation should be reformulated as V^π(ŝ_t)= R(ŝ_t, π(ŝ_t))+ γ∑_ŝ_t+1P(ŝ_t+1|ŝ_t,π(ŝ_t))V^π(ŝ_t+1), where ŝ_t=ϕ(H_t), V^π(ŝ_t) is the expected reward in ŝ_t under the policy π. §.§ Observation and Action Another featured property of portfolio optimization is that the trading signals given by the agent will not significantly influence the trend of asset prices unless the trading volumes are large enough and the paper will not discuss such extreme cases. Thus, the market observation function can be reformulated as O(o_t+1^meta,o_t+1^tech|s_t+1,a_t)≈ O(o_t+1^meta,o_t+1^tech|s_t+1), As discussed in <cit.>, the conditional transition probability between account status o_t+1^a and a_t can be expressed as O(o_t+1^a|a_t). 
Then the observation transition probability can be reformulated as O(o_t+1|s_t+1,a_t ), where o_t+1=(o_t+1^meta,o_t+1^tech,o_t+1^a) and o_t+1∈Ω. Related technical indicators are listed in Appendix. Since the financial markets in different countries have different regulations. To simplify the trading behaviors, only the long position is considered in this paper. The normalized weight of i^th asset in a portfolio is defined as a_t,i, where a_t,i∈[0, 1], ∑_i=1^Na_t,i=1, and N is the number of assets in the portfolio. To close the realistic trading environment, two practical factors including transaction cost ς and slippage ξ are considered in each transaction. The reward at timestamp t can be defined as r_t=[-ς +∑_i=1^N a_t,i(p_t^c-p_t-1^c+ξ/p_t-1^c)]η, where ξ∼𝒰(-ξ_lower, ξ_upper), ξ_lower and ξ_upper are the lower and upper boundaries of slippage, and η is the scaling factor in the reward function. § METHODOLOGY §.§ Overall Framework An overview of the RiPO is depicted in Fig. <ref>. The final investment strategy in the RiPO framework comes from the RL-based trading agent and risk management module. More specifically, the risk management module includes a BF-based risk controller, a Dynamic Contribution Mechanism (DCM), and an Adaptive Risk Strategy (ARS). Initially, learned from the current market states and the optimized policy, the RL-based trading agent suggests the weights of assets in a portfolio for the next trading period. However, some suggestions may ignore short-term risks as the RL-based trading agents are expected to earn long-term profits. To balance the expected long-term returns and short-term risks, the risk controller evaluates the risk exposure of original RL-based trading strategies and dynamically adjusts the portfolio for managing the near future risk within an acceptable range. Since the financial market always changes, the risk controller should adapt to different market states for higher returns and lower risks. Thus, there are two adaptive mechanisms named DCM and ARS enhancing the flexibility of risk controller to monitor the RL-based trading agents from the perspective of the impact of risk controllers and the strength of risk constraints. The detailed steps of the RiPO framework are described in Algorithm <ref>. §.§ Barrier Function-based Risk Management with Reinforcement Learning The RL-based portfolio optimization approach has great exploration capabilities on discovering profitable strategies, but it may give unreasonable actions when the current data distribution shifts due to financial market changing. Conversely, the programming-based methods can strictly satisfy the required constraints yet lack the abilities to explore the underlying patterns from raw data. Given by that, with the integration of barrier function to constrain the portfolio risk exposure, a model-based risk controller is formulated to cooperate with RL-based trading agents for modeling online portfolio optimization problem concerning both long-term returns and short-term risks. First, the system dynamics can be written as [Ṗ_t+1^c σ_p,t+1 ]=[ Δ P_t+1; σ_β ] +[ 0; Σ_k,t+1^1/2 ]a_t, where a_t=a_t^RL+a_t^Ctrl represents the upcoming adjusted weight of assets at time t, Σ_k,t+1 is the covariance matrix, σ_p,t, σ_β are the portfolio risk and market risk as denoted in Definition <ref>. In terms of the risk-aware investment intuition, the objective of risk controllers is reducing the loss of expected profits while satisfying the risk constraint. 
Thus, the risk controller can be modeled as a_t^Ctrl=min_a_t^Ctrl ∑_i=1^N-a_t,i^CtrlΔ p_t+1,i s.t. h(σ_p,t+1)-h(σ_p,t)+α(h(σ_p,t))≥0, 0≤ a_t,i^RL+a_t,i^Ctrl≤1, ∀ i∈1, 2, .., N, ∑_i=1^N(a_t,i^RL+a_t,i^Ctrl)=1, where a_t^Ctrl, a_t^RL∈𝐑^N, Δ p_t+1 is the estimated price change from t to t+1 in terms of the moving average idea in this paper. For the risk constraint σ_p,t∈[0,σ_s,t], there exists an acceptable set C such that C = {σ_p,t:h(σ_p,t)≥0 }, where σ_s,t is the upper boundary of acceptable risk at t. Then, the portfolio risk can be managed within an acceptable region if it satisfies sup_a_t∈ A [h(σ_p,t+1, a_t)-h(σ_p,t)+K(h(σ_p,t))] ≥ 0. Define h=p^T s+q, (p ∈𝐑^n, q∈𝐑), let c^*=h'(σ_s,t+1,σ_β)-h(σ_p,t)+α(h(σ_p,t)). Furthermore, considering the portfolio risk constraint 0 ≤σ_p, t≤σ_s, t, the barrier function h can be redefined as h(σ_p, t)=σ_s, t-σ_p, t. The BF-based constraint can be reformulated as below σ_α,t+1 =√(a_t^T Σ_k,t+1a_t) =‖Σ_k,t+1^1/2 a_t‖_ 2 = ‖Σ_k,t+1^1/2 a_t^Ctrl+Σ_k,t+1^1/2 a_t^RL‖_ 2 ≤ c^*, where Σ_k,t+1 is calculated by the historical price series and the estimation of Δ p_t+1, and Σ_k,t+1^1/2a_t^RL is a constant. Thus, the risk controller performs as a second-order cone program. More detailed derivation is given in the Appendix. After collecting the compensating adjustment a_t^Ctrl of portfolios from the BF-based controller, the final trading decision a_t would be made for the next trading period. The rewards r_t and its action a_t are stored into the memory of RL algorithms for further training, which can promote RL training efficiency for reaching optimal policy. §.§ Dynamic Contribution Mechanism Intuitively, the risk preferences of investors are not fixed the whole trading period in actual financial markets. Particularly, they are willing to take higher investment risks for gaining higher returns in uptrend markets. Therefore, more aggressive and risky investment strategies are allowed at the moment. Conversely, when a downtrend market appears, most investors prefer to strictly constrain the portfolio risk to avoid huge losses even though there will miss some profitable opportunities. Originally inspired by <cit.> using non-linear transformation to balance exploration and exploitation at different optimization stages, a dynamic contribution mechanism is introduced to adaptively regulate the impact of risk controller at each transaction by considering the strategy risk exposures and investor preferences, which balances the exploration in RL-based agents and exploitation in risk management. To be more specific, according to the near performance of trading strategies, a scaling factor λ_t∈[0,1] is given by a non-linear transformation to update the contributions of risk controllers to the final trading signals. The trading strategy with greater losses will be subject to tighter risk constraints to avoid aggressive investment decisions from RL-based agents. λ_t ={ ( m+G)^(1-G), R_s-r_f <0, m, Otherwise, . where G=min(|R_s-r_f|/v, 1), m∈[0,1] denotes the minimal impact of risk controllers to the overall trading strategy. The higher m represents more strict risk management in which the λ_t will be larger at the same loss of strategies. Note that, the risk requirements are strictly constrained at the whole trading period when m=1 such that λ=1. v∈(0,1] represents the risk appetite of investors. In terms of qualitative analysis, the lower v (v→ 0) has less tolerance on investment risk. It means that the small losses in a short-term period will trigger strict risk control. 
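Both the controller of the previous subsection and the scaling factor λ_t defined above translate directly into code. The sketch below is an illustrative formulation, not the authors' released implementation: it assumes the cvxpy modelling package for the second-order cone program, and it treats the estimated price changes and the barrier-function constant c^* (which collects σ_s, σ_β, η and the previous-step risk, as derived in the Appendix) as precomputed inputs.

    import numpy as np
    import cvxpy as cp

    def risk_control_step(a_rl, Sigma_sqrt, dp_hat, c_star):
        """One solve of the BF-based risk controller (second-order cone program)."""
        n = len(a_rl)
        a_ctrl = cp.Variable(n)                      # compensating adjustment a_t^Ctrl
        a_total = a_rl + a_ctrl                      # adjusted weights a_t^RL + a_t^Ctrl
        objective = cp.Minimize(-(dp_hat @ a_ctrl))  # minimize the loss of expected profit
        constraints = [
            cp.norm(Sigma_sqrt @ a_total, 2) <= c_star,  # BF-based risk constraint
            a_total >= 0,                                # long-only positions
            a_total <= 1,
            cp.sum(a_total) == 1,                        # fully invested portfolio
        ]
        cp.Problem(objective, constraints).solve()
        return a_ctrl.value

    def dcm_lambda(R_s, r_f, m, v):
        """Scaling factor lambda_t of the Dynamic Contribution Mechanism."""
        if R_s - r_f < 0:
            G = min(abs(R_s - r_f) / v, 1.0)
            return (m + G) ** (1.0 - G)
        return m

In the full framework the returned adjustment is not applied as-is but only after scaling by λ_t, and the acceptable-risk boundary entering c^* is relaxed step by step whenever the program is infeasible.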
R_s is the recen performance of trading strategies. Specifically, R_s is given by the moving average of daily returns of trading strategies in this paper. Furthermore, the final trading decision at time t is revised as a_t=a_t^RL+λ_t a_t^Ctrl. where a^RL_t satisfies ∑_i=1^N a^RL_t,i=1 and a^Ctrl_t satisfies ∑_i=1^N a^Ctrl_t,i=0. Thus, the sum of a_t adds up to 1. §.§ Adaptive Risk Strategy As the strict risk management would lead to trading agents missing some potentially profitable opportunities in uptrend markets, the strength of risk constraint in the BF-based controller should be dynamic in terms of the risk preferences of investors and current financial market states in which investors expect lower investment risks in downtrend markets while allowing relatively high risks in uptrend markets for earning higher profits. Therefore, a simply yet efficiently adaptive risk strategy is introduced to enhance the adaptability of the proposed framework to the actual financial market. Considering the balance between expected returns and acceptable risks, the adaptive risk upper boundary σ_s,t+1 for the BF-based risk constraint is shown below. σ_s, t+1 = { σ_s, min, R̅_t+1∈(-∞,(1-μ) r_f), M R̅_t+1+b, R̅_t+1∈[(1-μ) r_f,(1+μ) r_f], σ_s, max, R̅_t+1∈((1+μ) r_f,+∞), . where M=σ_s, max-σ_s, min/2 μ r_f, b=(1+μ) σ_s, min-(1-μ) σ_s, max/2 μ, R̅_t+1 is the expected returns, σ_s,min and σ_s,max are the minimum and maximum values of σ_s,t+1, μ is the user-defined factor representing investor’s aversion to future risk. The smaller μ would lead risk constraints to be more sensitive to the fluctuation of trading performance in which a more strict risk requirement is assigned to trading agents in downtrend markets. The linear transformation derivation of σ_s is described in the Appendix. Yet there may be no optimal solution to satisfy the strict risk constraint in terms of the current market situation. Thus, the risk constraint will be iteratively relaxed by a certain step size until the risk controller collects feasible solutions or meeting stop criteria. § EXPERIMENTS To carefully examine the performance of the proposed RiPO framework, the stock datasets with different market styles are selected to evaluate the RiPO and compared methods. There are three concerned questions in Section <ref>. §.§ Experimental Settings Datasets: To evaluate the performance of methods in the real financial market, the daily OLHCV data of constitute stocks of S&P500 index in the U.S. market is collected from https://finance.yahoo.comYahoo Finance. The top 10 stocks are selected to construct a portfolio in terms of the company capital of stocks. The top 10 stocks, accounting for over 26% of the S&P500’s market capital, reflect U.S. market trends and provide high liquidity, satisfying the turnover rate assumption. Considering that most of the portfolio optimization methods may fail at the changes of market styles due to the data distribution shifts, all compared methods will be tested on two market style settings. As defined in Table <ref>, the three subsets of the MS-1 represent an uptrend financial market, aiming to compare the exploration ability of methods to search profitable strategies. On the other hand, the training set of the MS-2 depicts a relatively stationary market, but the validation data and test data are in the highly volatile and downtrend market due to the COVID-19 pandemic, which evaluates the risk management ability of methods when meeting unexpected crises. 
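To make the data pipeline concrete, the sketch below uses the yfinance package (an assumption; the paper only names Yahoo Finance as the source) to pull daily OHLCV data for a basket of large-cap S&P 500 constituents. The tickers and date range are illustrative placeholders: the actual top-10 constituent list and the MS-1/MS-2 train, validation, and test ranges follow the paper's market-style table, which is not reproduced here.

    import yfinance as yf

    # Placeholder tickers: the paper selects the top-10 S&P 500 constituents by
    # market capital but does not list them in the text.
    tickers = ["AAPL", "MSFT", "AMZN", "GOOGL", "BRK-B", "JNJ", "JPM", "V", "PG", "XOM"]

    # Placeholder date range; split into train/validation/test per the MS-1 or MS-2 setting.
    raw = yf.download(tickers, start="2010-01-01", end="2021-12-31", auto_adjust=False)

    close = raw["Close"]                     # daily close prices, one column per ticker
    volume = raw["Volume"]                   # daily traded volumes
    daily_returns = close.pct_change().dropna()
    print(daily_returns.tail())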
Comparative Methods: In terms of investment principles like follow-the-winner, pattern matching, and DL/RL, seven methods are selected to compare with the proposed framework in this paper. They are Constant Rebalanced Portfolio (CRP, Neutral) <cit.>, Exponential Gradient (EG, Follow-the-Winner) <cit.>, Online Moving Average Reversion (OLMAR, Follow-the-Loser) <cit.>, Passive Aggressive Mean Reversion (PAMR, Follow-the-Loser) <cit.>, Correlation-driven Nonparametric Learning Strategy (CORN, Pattern Matching) <cit.>, Ensemble of Identical Independent Evaluators (EIIE, DL/RL) <cit.>, portfolio policy network (PPN, DL/RL) <cit.>, Relation-Aware Transformer (RAT, DL/RL) <cit.>, and original Twin Delayed DDPG (TD3, DL/RL) <cit.>. Metrics: To evaluate the returns and risks of compared methods, there are three common metrics in both academia and industry applied to measure the performance of trading strategies: * Annual Return: AR=((1+TR)^252/T-1), where TR is the returns over the trading period and T is the number of trading days. * Maximum Drawdown: MDD= max_t_1 <t_2C_t_1-C_t_2/C_t_1, where C_t_1 and C_t_2 are the portfolio value at time t_1,t_2. * Sharpe Ratio: SR=AR-r_f/σ_v, where σ_v is the strategy volatility in terms of daily returns, and r_f is the risk-free rate. Particularly, the SR is assigned to 0 when the annual return is negative. Implementation Details: TD3 is one of the popular RL algorithms and is selected to train RL-based trading agents in the RiPO framework. The implementation of the TD3 algorithm are referred by <cit.> while the other baseline algorithms use default settings according to their papers. The detailed RiPO settings of experiments are given in the Appendix. Besides, for the fair comparison and avoiding future data leakage, the validation set are used to fine-tune the hyper-parameters while the test set is applied to compare the performance of methods. Furthermore, all experiments are repeatedly run for 10 times to avoid the randomness and bias of experiments caused by seeds. The average value and standard deviation of all methods are compared. Additionally, the Wilcoxon rank-sum test <cit.> is used to compare the statistical significance of the proposed framework against the compared approaches with a significance level at 0.05. §.§ Performance Comparison and Analysis Q1: How does the RiPO framework perform in terms of profitability and risk exposure in uptrend and downtrend markets? As shown in Table <ref>, there are seven approaches achieving positive returns in the MS-1 dataset, among which the returns of the RiPO framework are at least 5% higher than other methods while performing relatively low MDD at around 22%. Although some baseline algorithms like PPN and RAT have lower MDD, their profitability is limited and some profitable strategies may be missed. The best SR achieved at 0.72 by the RiPO method demonstrates the capability to balance the profits and risk exposures of trading strategies. It reveals that the RiPO framework would dynamically relax the risk constraint within an acceptable range to pursue higher profits in uptrend markets. When testing in the MS-2 dataset, none of these baseline methods are profitable during the trading period. They may lose around 13% to 37% of the portfolio value for each year. Meanwhile, those methods have higher short-term risks in which investors may suffer a huge loss in a short period. 
Compared with baseline methods, the RiPO framework only loses 6.58% for each year while the MDD can be significantly reduced to 25.77% against the MDD of other methods that are over 48%. It demonstrates the outstanding ability of the RiPO to manage the risk in downtrend markets for avoiding great losses. Especially when encountering unexpected crises, the trading agents may not suggest reasonable trading signals where the risky investment should be strictly constrained. Besides, the significant test results (better/equal/worse) are 4/5/0 and 9/0/0 in the MS-1 and MS-2, respectively. It is evident that the RiPO achieves remarkable performance in the MS-2 when compared to other baselines. More importantly, the RiPO framework integrating risk controllers and TD3 as trading agents outperforms the original TD3 approach both in the MS-1 and MS-2 markets in terms of AR, MDD, and SR. This clearly demonstrates the effectiveness of the proposed risk controller to manage the potential investment risks due to the aggressive trading strategies generated by RL-based agents. Q2: Can the RiPO framework effectively reduce downside risk? The downside risk is one of the most concerns for investors to manage portfolios. Fig. <ref> and Fig. <ref> show the portfolio value and short-term risk of compared methods in the test period. As highlighted in the red rectangular boxes, although the S&P500 (black dash line) has a significant decline, the risky investments of the RiPO framework (red line) are strictly constrained to avoid huge losses while the portfolio value of other frameworks suffers a great loss. Besides, the short-term risks of the RiPO are carefully managed at a very low level than that of other methods during the whole downtrend period (see the red rectangular in Fig. <ref>), which further proves the outstanding capability of the RiPO to manage investment risks. Q3: How do hyper-parameters that reflect the risk appetite of investor affect the RiPO framework? As described in the previous sections, there are three key hyper-parameters of the adaptive mechanisms DCM and ARS reflecting investors’ preferences to balance the strength of risk management and the exploration of trading strategies. As shown in Table <ref>, the higher m performs better AR (from -9.43% to -6.58%) and lower MDD (from 29.47% to 25.56%) by enhancing the impact of risk controller to the RL-based agents in downtrend markets. However, there may miss some profitable opportunities when m is set to a high value (i.e., m=1). The AR decreases to -6.69% as most risky investments are restricted. Similarly, the lower v represents the higher risk aversion, which reduces the MDD from 32.63% to 25.77% and avoids half of the loss when v=0.005 by tightening risk exposures. Moreover, using the appropriate scaling factor μ at 3 in the MS-2 encourages the RiPO framework to balance the expected returns and short-term risks by dynamically adjusting the strength of risk constraints. Q4: How do the adaptive mechanisms DCM and ARS impact the RiPO framework? To enhance the flexibility of risk management in different financial markets, the DCM and ARS are introduced to adjust the impact of risk controllers and the strength of risk constraints. Table <ref> shows that the ARS significantly reduces the losses from 23.32% to 6.69% when without using the DCM and from 18.76% to 6.58% when integrating the DCM. Meanwhile, the risk exposure decreases to half of the setting without the ARS. 
On the other hand, the RiPO framework involving the DCM can capture more potential profitable opportunities to avoid greater losses in which the short-term risks are managed at lower or similar levels by considering the tradeoff between long-term returns and short-term risks. The experimental results clearly reveal that the two adaptive mechanisms encourage the RiPO model to dynamically adjust risk constraints for both managing risk exposures and exploring profitable strategies. § CONCLUSION In the paper, a novel risk-manageable portfolio optimization framework named RiPO is proposed explicitly to manage the short-term risks while expecting the long-term profits in different market styles. With the cooperation of RL approaches and barrier function based risk controllers, the RiPO performs strong exploration ability to optimize trading strategies under acceptable risk constraints. Besides, two dynamic modules are given to construct a flexible risk controller for adapting financial markets and investors' risk appetite. The experimental results indicate that the RiPO can gain higher profits in uptrend markets and manage downside risks in downtrend markets. In the future, the flexibility of risk controller can be further enhanced for adapting to different financial markets and handling more real market constraints. § TECHNICAL INDICATORS Except for the open/high/low/close price and volume data, more advanced financial indicators are provided as market states in the RiPO framework to discover underlying trend patterns. All involved technical indicators are listed as below with the format {Indicator name}-{Observation window size}. AD: Chaikin A/D Line ADOSC: Chaikin A/D Oscillator ADX: Average Directional Movement Index ADXR: Average Directional Movement Index Rating APO: Absolute Price Oscillator AROON: Aroon AROONOSC: Aroon Oscillator ATR-6/14: Average True Range-6/14 BBANDS: Bollinger Bands BOP: Balance Of Power Change-open/high/low/close/volume CCI-5/10/20/88: Commodity Channel Index-5/10/20/88 CMO-open/close-14: Chande Momentum Oscillator-open/close-14 DEMA-6/12/26: Double Exponential Moving Average-6/12/26 DX: Directional Movement Index EMA-6/12/26: Exponential Moving Average-6/12/26 KAMA: Kaufman Adaptive Moving Average MA-close-5: Moving Average-close-5 MACD: Moving Average Convergence/Divergence MEDPRICE: Median Price MiNUSDI: Minus Directional Indicator MiNUSDM: Minus Directional Movement MOM: Momentum NATR: Normalized Average True Range OBV: On Balance Volume PLUSDI: Plus Directional Indicator PLUSDM: Plus Directional Movement PPO: Percentage Price Oscillator ROC-6/20: Rate of change-6/20 ROCP-6/20: Rate of change Percentage-6/20 ROC-volume-6/20: Rate of change-volume-6/20 ROCP-volume-6/20: Rate of change Percentage-volume-6/20 RSI: Relative Strength Index SAR: Parabolic SAR TEMA-6/12/26: Triple Exponential Moving Average-6/12/26 TRANGE: True Range TYPPRICE: Typical Price TSF: Time Series Forecast ULTOSC: Ultimate Oscillator WILLR: Williams' %R § DERIVATION OF THE RISK CONTROLLER The system dynamics and the risk controller are described in Section 4 of the paper, of which the barrier function-based risk constraint can be expressed as h(σ_p, t+1)-h(σ_p, t)+α(h(σ_p, t)) ≥ 0. To manage the investment risk within a given acceptable range, the portfolio risk σ_p, t at time t is constrained as the inequality: 0 ≤σ_p, t≤σ_s, t, where σ_s, t is the upper boundary of the acceptable risk. Correspondingly, the barrier function h can be defined as h(σ_p, t)=σ_s, t-σ_p, t. 
For the acceptable risk σ_p,t∈[0,σ_s,t], there exists an acceptable set C such that C={σ_p, t: h(σ_p, t) ≥ 0}. Considering an extended class 𝒦_∞ function α(h(σ_p, t))=η h(σ_p, t), η∈[0,1], and the portfolio risk consists of the strategy risk σ_α, t and market risk σ_β, t, then the risk constraint can be further rewritten as σ_s, t+1-σ_α, t+1-σ_β, t+1+(η-1)(σ_s, t-σ_α, t-σ_β, t) ≥ 0. Furthermore, with the consideration of using an adaptive risk strategy in the RiPO framework, the expected short-term risk σ_p, t+1 at time t+1 can be managed within the acceptable range. If there exists an adjusted action a_t^Ctrl from the risk controller, such that σ_α,t+1 =√(a_t^T Σ_k,t+1a_t) =‖Σ_k,t+1^1/2 a_t‖_ 2 = ‖Σ_k,t+1^1/2 a_t^Ctrl+Σ_k,t+1^1/2 a_t^RL‖_ 2 ≤σ_s, t+1-σ_β, t+1+(η-1)(σ_s, t-σ_α, t-σ_β, t) ≤σ_s, t+1-σ_β, t+1+(η-1)(σ_s, t- Σ_k, t^1/2 a_t-1_2-σ_β, t). Let A=Σ_k, t+1^1/2, b=Σ_k, t+1^1/2 a_t^R L, c^*=σ_s, t+1-σ_β, t+1+(η-1)(σ_s, t-Σ_k, t^1/2 a_t-1_2-σ_β, t), and d=0. Referred by the definition of second-order cone programming (SOCP) of the form A x+b_2≤ d^T x+c^*, the barrier function-based risk controller can be described as a SOCP problem with the linear objective function. § DERIVATION OF THE LINEAR TRANSFORMATION IN ADAPTIVE RISK STRATEGY The Adaptive Risk Strategy (ARS) module is introduced to adjust the upper boundary σ_s,t+1 of acceptable risk ranges. Too small σ_s,t+1 leads to the risk controller being unsolvable while too large σ_s,t+1 may not efficiently manage investment risks. Therefore, considering the current financial market states and the risk appetite of investors, there are three cases to adjust the acceptable risks σ_s,t+1. When the expected return R̅_t+1 is lower than the user-defined lower boundary (1-μ)r_f, the σ_s,t+1 will not decline and be fixed at σ_s, min. On the other hand, the σ_s,t+1 will be fixed at σ_s, max for avoiding too large values when R̅_t+1 exceeds the user-defined upper boundary (1+μ)r_f. Lastly, when the R̅_t+1∈[(1-μ) r_f,(1+μ) r_f], a linear transformation is designed for the adjustment of σ_s,t+1. Define the linear transformation function σ_s,t+1 = MR̅_t+1 + b, (M, b ∈𝐑). There are two points ((1-μ) r_f, σ_s, min) and ((1+μ) r_f, σ_s, max) on the linear function. Then, the following two equations can be obtained. {[ σ_s, min=M(1-μ) r_f+b; σ_s, max=M(1+μ) r_f+b ]. where σ_s,min and σ_s,max are the minimum and maximum values of σ_s,t+1, μ is the user-defined factor representing investor’s aversion to future risk, and r_f is the risk-free rate. After solving the above two equations, it has {[ M=σ_s, max-σ_s, min/2 μ r_f; b=(1+μ) σ_s, min-(1-μ) σ_s, max/2 μ ]. Lastly, when R̅_t+1∈[(1-μ) r_f,(1+μ) r_f], the linear transformation function can be described as below. σ_s=σ_s, max-σ_s, min/2 μ r_fR̅_t+1+(1+μ) σ_s, min-(1-μ) σ_s, max/2 μ. § EXPERIMENTAL SETTINGS In the RiPO framework, the TD3 method is integrated into the RiPO for training the RL-based trading agents. More specifically, the policy network is a two-layer architecture with 400 and 300 hidden units, respectively. The learning rate is 10^-5, and the memory size is 10^6. The RL model is updated every 400 steps with the 50 batch size. The training episode of RL algorithms is 500. Besides, the policy and target networks are updated every 2 steps per training step, and the standard deviation of Gaussian noise added to the target policy is 0.2. Additionally, the η in an extended class 𝒦_∞ function α is set to 0.3. 
To balance the returns and risks, the risk appetite of investors v and the minimal impact of risk controllers m in the dynamic contribution mechanism default to v=0.5 and m=0 in the MS-1 dataset, and v=0.005 and m=0.8 in the MS-2 dataset. In the adaptive risk strategy, the scaling factor μ is set to 2 in the MS-1 dataset and 1 in the MS-2 datasets. The minimum and maximum values of σ_s, t+1 are set to σ_s,min=0.01 and σ_s,max=0.02 in the MS-1 dataset, and σ_s,min=0.01 and σ_s,max=0.015 in the MS-2 dataset. Moreover, in terms of different market styles, the window sizes for observing recent strategy performance to estimate the expected returns are 3 days and 5 days for MS-1 and MS-2 datasets, respectively. Furthermore, to simulate the trading in real markets, the risk-free rate is 1.6575% referred by the U.S. 5-year bond yield. Initially, the capital is 1000000, and the market risk is assumed to σ_β=0.001. Also, the transaction cost is set to 0.1%, while the slippage defaults to 0.1%. Moreover, the observation window size of covariance calculation is 21 days (i.e., a month) based on the usual practice of investors. For the calculation of the yearly performance indicators, the number of trading days for a year defaults to 252. All algorithms are implemented by Python on a desktop computer installed with the 2 NVIDIA RTX 3090 GPUs, and all experiments are run for 10 times to avoid the bias of results caused by random seeds. § IMPLEMENTATION CODE OF THE RIPO FRAMEWORK The implementation code of the RiPO framework and the S&P500 market data are attached in the supplemental materials. Please contact the authors for the implementation.
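Since the released implementation is only referenced above, the following skeleton is an illustrative sketch of how the components described in the paper could be wired together for a single rebalancing step; it is not the authors' code. The TD3 agent and the controller solver are represented by callables (for instance the SOCP sketch given in the methodology section), the default hyper-parameters echo the MS-2 settings quoted above, and the estimate of expected returns is simplified to a moving average of recent strategy returns, as the paper describes.

    import numpy as np

    def ars_sigma_s(R_bar, r_f, mu, s_min, s_max):
        """Adaptive Risk Strategy: upper boundary of the acceptable risk."""
        lo, hi = (1 - mu) * r_f, (1 + mu) * r_f
        if R_bar < lo:
            return s_min
        if R_bar > hi:
            return s_max
        M = (s_max - s_min) / (2 * mu * r_f)
        b = ((1 + mu) * s_min - (1 - mu) * s_max) / (2 * mu)
        return M * R_bar + b

    def rebalance(agent, solve_controller, dcm_lambda, obs, recent_returns,
                  r_f=0.016575 / 252, m=0.8, v=0.005, mu=1.0,
                  s_min=0.01, s_max=0.015):
        """One RiPO trading step (sketch): RL proposal -> ARS -> controller -> DCM blend."""
        a_rl = agent.act(obs)                              # RL-proposed weights (stub interface)
        R_bar = recent_returns.mean()                      # moving-average return estimate
        sigma_s = ars_sigma_s(R_bar, r_f, mu, s_min, s_max)
        a_ctrl = solve_controller(a_rl, sigma_s)           # e.g. SOCP solve, relaxed if infeasible
        lam = dcm_lambda(R_bar, r_f, m, v)                 # impact of the risk controller
        return a_rl + lam * a_ctrl                         # final portfolio weights a_t

The executed weights would then feed the paper's reward function (with transaction cost, slippage and scaling factor η) and be stored, together with the action, in the replay memory for further TD3 updates.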
http://arxiv.org/abs/2306.05564v1
20230608211646
SMEFT probes in future precision DIS experiments
[ "Chiara Bissolotti", "Radja Boughezal", "Kaan Simsek" ]
hep-ph
[ "hep-ph" ]
We analyze the potential of future high-energy deep-inelastic scattering (DIS) experiments to probe new physics within the framework of the Standard Model Effective Field Theory (SMEFT). We perform a detailed study of SMEFT probes at a future Large Hadron-electron Collider (LHeC) and a Future Circular lepton-hadron Collider (FCC-eh) machine, and extend previous simulations of the potential of a Electron-Ion Collider (EIC) to include Z-boson vertex corrections. Precision Z-pole constraints on vertex corrections suffer from numerous degeneracies in the Wilson-coefficient parameter space. We find that both the LHeC and the FCC-eh can help remove these degeneracies present in the existing global fits of precision Z-pole observables and LHC data. The FCC-eh and LHeC will in many cases improve upon the existing precision electroweak bounds on the SMEFT parameter space. This highlights the important role of precision DIS measurements for new physics studies. SMEFT probes in future precision DIS experiments Chiara Bissolotti^1, Radja Boughezal^1, Kaan Simsek^1,2 ^1 Argonne National Laboratory, Lemont, IL, USA ^2 Northwestern University, Evanston, IL, USA First Draft: December 5, 2022 This Draft: April 24, 2023 ================================================================================================================================================================================================== § INTRODUCTION The accomplishments of the Standard Model (SM) are many. With the discovery of the Higgs boson in 2012, the predicted particle spectrum in the SM is complete. However, the SM suffers from several shortcomings. The dark matter observed in the universe is not contained in the SM, nor are the mechanisms responsible for the baryon-antibaryon asymmetry and neutrino masses. Moreover, the SM contains numerous aesthetic issues, such as the electroweak hierarchy problem and the extreme hierarchy between fermion Yukawa couplings. A more complete and compelling theory is desirable. However, there has so far been neither conclusive evidence for new particles beyond the SM (BSM) nor any definitive deviation from SM predictions. In an attempt to address these lingering issues in our understanding of Nature, many experiments have been launched or are under design. In this work, we consider the BSM potential of several proposed future electron-proton/deuteron deep-inelastic scattering (DIS) experiments: the Large Hadron-electron Collider (LHeC) <cit.>, the Future Circular lepton-hadron Collider (FCC-eh) <cit.> and the Electron-Ion Collider (EIC) <cit.>. The LHeC is a proposed upgrade of the Large Hadron Collider (LHC). It would operate alongside the LHC in order to utilize the LHC proton and ion beams. The earliest realistic operational period is estimated to be 2032, which coincides with the LHC Run 5 period. The integrated luminosity of the LHeC is projected to be of the order of 100 ^-1. It will operate at center-of-mass (CM) energies reaching 1.5 TeV. It is designed to provide novel measurements in QCD, investigate DIS physics at low Bjorken-x values, improve upon existing electroweak (EW) physics measurements, and probe BSM physics. The FCC-eh would occur at a new accelerator complex at CERN, and would feature center-of-mass energies approaching 3.5 TeV and integrated luminosities in the inverse attobarns <cit.>. Like the LHeC it will feature a broad physics program spanning QCD and electroweak measurements to new physics searches. 
The EIC is a United States Department of Energy project that will be constructed at Brookhaven National Laboratory (BNL). The EIC will be the first high-energy DIS machine that collides polarized electrons with polarized protons. It is anticipated to commence operating within a decade. The EIC is designed to collide a polarized electron beam of energy 5 to 18 GeV with polarized proton beams of energies 41 to 275 GeV, with polarized light ions of energies up to 166 GeV, and with unpolarized heavy ions of energies up to 110 GeV. It will run at CM energies between fixed-target-scattering and high-energy colliders, namely 70 to 140 GeV. It will improve the extraction of parity-violating (PV) DIS asymmetries in EW neutral-current (NC) cross section with reduced uncertainties from luminosity and detector acceptance/efficiency. Our goal in this work is to study the BSM potential of the LHeC, FCC-eh and the EIC with a detailed accounting of anticipated uncertainties. We consider the neutral-current (NC) DIS cross section as our observable at the LHeC, following previous studies of electroweak physics at the LHeC <cit.> and FCC-eh <cit.>. At the EIC we focus on PV asymmetries, following earlier studies of BSM physics at the EIC <cit.>. Since there has been no conclusive sign of new particles beyond the SM yet, we perform our analysis within the framework of the Standard Model Effective Field Theory (SMEFT) (see <cit.> for a review of the SMEFT). In the SMEFT, one builds higher-dimensional operators using the existing SM particle spectrum. All new physics is assumed to be heavier than the SM states, as well as the accessible collider energies. The leading order basis of the SMEFT for on-shell fields has been completely classified up to dimension-12 <cit.>. In this work, we restrict ourselves to dimension-6 (there is a lepton-number violating operator at dimension-5, which is irrelevant to our study). Previous work has shown that DIS measurements at the EIC and in low-energy fixed target experiments can resolve blind spots in the semi-leptonic four fermion Wilson coefficient space that remain after Drell-Yan measurements at the LHC <cit.>, and that EIC measurements of single-spin asymmetries can competitively probe Wilson coefficients of dipole operators <cit.>. We consider here the full spectrum of Wilson coefficients that can alter the DIS process at leading-order in the SMEFT loop expansion. These include both semi-leptonic four fermion Wilson coefficients and Z-boson vertex correction factors, for a total of 17 Wilson coefficients at leading order in the SMEFT loop expansion. It is traditionally assumed that the vertex corrections are best measured with Z-pole precision EW observables at LEP and SLC. However, due to the limited kinematic information available from Z-pole data there are numerous degeneracies between the Wilson coefficients. This was illustrated for example in <cit.>, where in addition contributions from existing LHC data were also considered. The obtained constraints on the Wilson coefficients can become an order of magnitude weaker when the full spectrum of Wilson coefficients is activated, as compared to turning on only a single coefficient. We show here that the future DIS measurements can help resolve these degeneracies. This and other key aspects of our study are summarized below. * We find that the LHeC and FCC-eh can significantly extend the search reach for semi-leptonic four fermion operators. 
While the EIC can probe the SMEFT operators of interest to a few TeV, the LHeC and FCC-eh can exceed 10 TeV. We thoroughly study different beam energy, polarization options, luminosity assumptions and lepton species choices at all three colliders. We find that no single choice probes the entire SMEFT parameter space, and that a full spectrum of run scenarios is needed to fully explore the physics possibilities beyond the SM. * We find that the option of a positron beam in future DIS experiments can significantly extend their reach in certain sectors of the Wilson coefficient parameter space, due to the structure of the underlying matrix elements. * It is often assumed that the most stringent constraints on universal shifts of the Z-boson vertex couplings to fermions are obtained from fits to the precision Z-pole observables. While this is true when only a single Wilson coefficient is turned on, when several are activated simultaneously numerous degeneracies arise, as demonstrated in <cit.>. We show that the LHeC and FCC-eh can improve upon the existing bounds on the Z-boson couplings by resolving these degeneracies. This manuscript is organized as follows. In Section <ref>, we briefly review the relevant formalism of the SMEFT and DIS and also define our observables of interest. In Section <ref>, we describe our LHeC, FCC-eh and EIC pseudodata sets, as well as anticipated uncertainties. We also detail our numerical procedure for fitting the SMEFT parameters. In Section <ref>, we present the results of the fits. We conclude in Section <ref>. Details regarding the construction of the error matrix and the generation of pseudodata are given in the Appendix. § REVIEW OF THE FORMALISM §.§ Review of the SMEFT formalism The SMEFT is a model-independent extension of the SM Lagrangian in which one builds operators of dimension higher than four, O_k^(n), using the existing spectrum of the SM and assuming the SM gauge symmetries. The Wilson coefficients associated with these operators are denoted as C_k^(n). These effective couplings are defined at a UV cut-off scale, Λ, which is assumed to be heavier than all the SM states and all accessible collider energies. The Lagrangian takes the form ℒ_ SMEFT = ℒ_ SM + ∑_n > 4 1/Λ^n-4 ∑_k C_k^(n) O_k^(n). In this work, we restrict ourselves to operators of dimension-6. There is only a single lepton-number violating operator at dimension 5, which is irrelevant to our analysis of lepton-number conserving observables. We note that we linearize our observables in the Wilson coefficients. We will see later that the results obtained justify this assumption. There are 17 operators that affect NC DIS matrix elements at leading order in coupling constants <cit.>, which are summarized in Table <ref>. Here, φ is the SU(2) Higgs doublet, ℓ and q are the left-handed lepton and quark doublets, and e, u, and d are the right-handed electron and up- and down-quark singlets, respectively. The τ^I are the Pauli matrices and the double-arrow covariant derivative is defined such that φ^† i ↔D_μφ = φ^† i D_μφ + h.c. and φ^† i ↔D_μτ^I φ = φ^† i D_μτ^I φ + h.c. We suppress flavor indices and assume flavor universality in our analysis for simplicity. We remark that operators containing scalar and dipole fermionic bilinears are discarded in our analysis. Such vertex factors produce cross section contributions proportional to fermion masses, which are small and are neglected here.
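To make the power counting behind this truncation and linearization concrete, the short Python sketch below (added here purely for illustration) compares the size of the linear dimension-6 correction, its neglected square, and a typical dimension-8 term as a function of the momentum transfer. The inputs — Λ = 1 TeV, an O(1) Wilson coefficient, and the sample Q values — are assumptions chosen only to illustrate the scaling, not fit results.

# Power counting for the SMEFT expansion: a dimension-n operator is suppressed
# by (Q/Lambda)^(n-4).  All numbers below are illustrative assumptions.
LAMBDA = 1000.0        # UV cut-off scale in GeV (1 TeV)
C6 = 1.0               # an O(1) dimension-6 Wilson coefficient

for Q in (30.0, 100.0, 500.0, 1000.0):          # momentum transfer in GeV
    dim6 = C6 * Q**2 / LAMBDA**2                # linear dimension-6 correction
    dim6_sq = dim6**2                           # neglected quadratic dimension-6 term
    dim8 = Q**4 / LAMBDA**4                     # typical dimension-8 suppression
    print(f"Q = {Q:6.0f} GeV: dim-6 ~ {dim6:.1e}, (dim-6)^2 ~ {dim6_sq:.1e}, dim-8 ~ {dim8:.1e}")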
We note that SMEFT loop corrections are expected to be sub-dominant to the next-to-leading order (NLO) QCD corrections. Since we include the NLO QCD corrections in our study and find that they do not have a large effect on the obtained results, we assume that the higher-order terms in the SMEFT loop expansion can be safely neglected. §.§ Review of the DIS formalism In our analysis we study NC DIS in the process ℓ + H →ℓ' + X, where ℓ is an electron or a positron, H can be a proton or a deuteron, and ℓ' and X are the final-state lepton and hadronic systems, respectively. Charged current DIS involves missing energy and consequently the reconstruction of hadronic final states in order to determine the kinematic variables. It therefore receives different and typically larger systematic uncertainties. For this reason we do not consider this mode in our study. We include next-to-leading order QCD corrections to both the SM and the SMEFT corrections. At leading order in the perturbative QCD expansion, this process can be mediated by single photon or Z-boson exchange or by the SMEFT contact interaction of leptons with quarks. The LO Feynman diagrams are presented in Figure <ref>. The NLO QCD corrections to the SM process are well known <cit.>. These corrections modify only the quark lines, as illustrated in Fig. <ref>. Therefore the corrections are identical for both SM and SMEFT cross sections. It is convenient to express the DIS cross sections in terms of structure functions. The NC DIS cross-section expressions for collisions of a lepton ℓ with an unpolarized or polarized hadron are given in terms of the NC structure functions F_1,3,L^ NC and g_1,5,L^ NC by 2σ^ℓ_ NCx̣Q̣^̣2̣ = 2 πα^2 x Q^4 [1+(1-y)^2] 2x F_1^ NC + (ℓ) [1-(1-y)^2]x F_3^ NC + (1-y) 2 F_L^ NC and 2Δσ^ℓ_ NCx̣Q̣^̣2̣ = 8 πα^2 x Q^4 [1+(1-y)^2] xg_5^ NC - (ℓ) [1-(1-y)^2] xg_1^ NC + (1-y) g_L^ NC. where is the particle signum function that returns +1 for particles and -1 for antiparticles. Q is the usual DIS momentum transfer, x is the Bjorken variable, and y is the inelasticity parameter. These are defined as usual for the DIS process. We define the reduced cross sections as 2σ_r, NC^ℓx̣Q̣^̣2̣ = 2 πα^2 x Q^4 [1 + (1-y)^2] ^-12σ^ℓ_ NCx̣Q̣^̣2̣, 2Δσ_r, NC^ℓx̣Q̣^̣2̣ = 4 πα^2 x Q^4 [1 + (1-y)^2] ^-12Δσ^ℓ_ NCx̣Q̣^̣2̣. From this point onward, when we mention cross sections, we mean the reduced ones and denote them simply by (Δ)σ_ NC. In Fig. <ref>, we show the NC DIS cross section with NLO QCD corrections for e^-p collisions at √(s) = 1.3 with right-handed (RH) electrons of polarization P_ℓ = +80% and the corresponding k factors as a function of Q for various x values. We observe that the NLO QCD corrections to the NC DIS cross section are 30% at most. They exhibit high sensitivity to Q and low sensitivity to x for Q ≲ 30, and low sensitivity to Q and high sensitivity to x for higher values of Q. We have compared the Wilson coefficient constraints obtained using NLO QCD structure functions with those obtained using LO QCD, and have found very similar results. We therefore believe that the neglect of QCD corrections at the NNLO level and beyond are justified in our analysis. §.§ Observables of interest The observable of interest at the LHeC and FCC-eh is the NC DIS cross section, σ_ NC, of unpolarized protons/deuterons with electrons or positrons of various polarizations. We choose this observable in order to compare our simulated pseudodata with previous studies in the literature <cit.>. 
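As an illustration of how the reduced cross section above is assembled from the structure functions, the sketch below implements the unpolarized expression directly. The structure-function values and the kinematic point are placeholders; in the analysis itself F_1, F_3 and F_L are computed at NLO in QCD from PDFs.

def reduced_sigma_nc(x, y, F1, F3, FL, lepton_sign=+1):
    """Reduced NC DIS cross section built from the structure functions:
    sigma_r = 2*x*F1 + sgn(l)*(Y-/Y+)*x*F3 + 2*(1-y)/Y+ * FL,
    with Y_pm = 1 +/- (1-y)^2 and sgn(l) = +1 for electrons, -1 for positrons."""
    Yp = 1.0 + (1.0 - y)**2
    Ym = 1.0 - (1.0 - y)**2
    return 2.0*x*F1 + lepton_sign*(Ym/Yp)*x*F3 + (2.0*(1.0 - y)/Yp)*FL

# Placeholder evaluation at one kinematic point for an LHeC-like energy.
s = (1.3e3)**2                 # GeV^2, sqrt(s) ~ 1.3 TeV
x, Q2 = 0.1, 1.0e3             # Bjorken x and momentum transfer squared (GeV^2)
y = Q2 / (x * s)               # inelasticity from Q^2 = x*y*s (massless kinematics)
print(reduced_sigma_nc(x, y, F1=2.0, F3=0.5, FL=0.1))   # F1, F3, FL are placeholders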
For the EIC we consider PV asymmetries in cross sections of polarized electrons with either polarized or unpolarized protons/deuterons. Previous studies have shown that this asymmetry at the EIC generically provides somewhat more sensitivity to BSM effects than asymmetries with polarized protons or lepton-charge asymmetries <cit.>, so we focus on this case here. We define the unpolarized PV asymmetry by A_ PV = σ_ NC^+ - σ_ NC^- σ_ NC^+ + σ_ NC^- . and the polarized one by Δ A_ PV = Δσ_ NC^0 σ_ NC^0 . Here, σ_ NC^± is the unpolarized NC DIS e^-H (H=p,D) cross section evaluated with λ_ℓ = ± P_ℓ, σ_ NC^0 is the same as σ_ NC^± but with λ_ℓ = 0, and Δσ_ NC^0 is the same as σ_ NC^0 but with a polarized hadron. P_ℓ is the assumed value for the lepton beam polarization at the EIC. We linearize the SMEFT expressions in this study. Thus, the SMEFT observables have the generic form 𝒪 = 𝒪^ SM + ∑_k C_k δ𝒪_k + 𝒪 (C_k^2) where k runs over the active Wilson coefficients, 𝒪 = σ_ NC or A_ PV is the observable, and δ𝒪_k is the SMEFT correction to the observable proportional to the Wilson coefficient C_k. § DESCRIPTION OF THE ANALYSIS §.§ Description of the pseudodata For our analysis we use the most recent publicly available LHeC pseudodata sets <cit.>, as well as the EIC data set found to be most sensitive to SMEFT Wilson coefficients in <cit.>. For the FCC-eh we generate pseudodata sets following the procedure established in <cit.>, with the FCC-eh run parameters found in <cit.>. We refer to the pseudodata sets as data sets from this point onward. In Table <ref>, we summarize the configurations of these data sets in terms of beam energies, lepton beam polarizations, and total integrated luminosities, together with our labeling scheme and also the observable of interest. Note that in this work, we do not consider a possible 10-fold-high-luminosity scenario of the EIC. We also consider joint LHeC and FCC-eh fits that combine all run scenarios for each experiment listed below. We restrict ourselves to bins that satisfy x≤0.5, Q≥10, and 0.1≤ y≤ 0.9. We introduce these additional cuts to avoid large uncertainties from nonperturbative QCD and nuclear dynamics that occur at low Q and high x, where we expect SMEFT effects to be reduced. We call the bins that pass our cuts the good bins. The kinematic coverage of the LHeC, FCC-eh and EIC data sets is displayed in Fig. <ref>. The darker regions in these plots indicate the good regions. We next discuss the anticipated error budgets for these data sets. For the LHeC and FCC-eh, we use the error estimates provided in previous analyses <cit.>. The uncertainty components consist of uncorrelated statistical uncertainties (δσ_ stat), uncorrelated efficiency errors (δσ_ ueff), and correlated systematic errors (δσ_ sys). The correlated systematic uncertainties include contributions from lepton energy scale and polar angle measurements (δσ_ len and δσ_ lpol), the hadronic energy scale (δσ_ hen), radiative corrections (δσ_ rad), photoproduction backgrounds (δσ_ gam), a global efficiency factor (δσ_ geff), and luminosity (δσ_ lum). We assume the luminosity error to be 1% relative to the cross section. We introduce the systematics in a fully correlated manner. 
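Before turning to the EIC error budget, the following minimal sketch collects the pieces introduced above: the "good bin" cuts, the linearized SMEFT prediction for an observable, and the two PV asymmetries. The arrays in the usage example are placeholders rather than pseudodata.

import numpy as np

def good_bins(x, Q, y):
    """Mask implementing the cuts x <= 0.5, Q >= 10 GeV and 0.1 <= y <= 0.9."""
    return (x <= 0.5) & (Q >= 10.0) & (y >= 0.1) & (y <= 0.9)

def smeft_prediction(O_sm, dO, C):
    """Linearized SMEFT observable O = O_SM + sum_k C_k * dO_k.
    O_sm: (n_bins,), dO: (n_coeff, n_bins), C: (n_coeff,)."""
    return O_sm + np.asarray(C) @ np.asarray(dO)

def A_PV(sigma_plus, sigma_minus):
    """Unpolarized PV asymmetry from cross sections at lepton helicities +P_l and -P_l."""
    return (sigma_plus - sigma_minus) / (sigma_plus + sigma_minus)

def Delta_A_PV(delta_sigma_0, sigma_0):
    """Polarized PV asymmetry from the polarized-hadron and unpolarized cross sections."""
    return delta_sigma_0 / sigma_0

# Placeholder usage:
x = np.array([0.05, 0.2, 0.7]); Q = np.array([15.0, 30.0, 50.0]); y = np.array([0.3, 0.5, 0.95])
print(good_bins(x, Q, y))                       # -> [ True  True False]
print(A_PV(1.02, 0.98))                         # -> ~0.02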
As for the EIC asymmetries, we have statistical uncertainties given by event counts, corrected for beam polarization and lepton beam luminosities: δ A_ PV, stat = 1 P_ℓ√(N), δΔ A_ PV, stat = P_ℓ P_H δ A_ PV, stat, where P_ℓ is the assumed lepton beam polarization at the EIC and P_H is the assumed proton/deuteron polarization. The assumed systematic errors δ A_ PV, sys are mainly due to particle background and other imperfections in measurements. They are assumed to be uncorrelated and 1% relative to the asymmetry. We assume uncertainties in lepton (hadron) beam polarization, δ(Δ)A_ pol, to be fully correlated and 1% (2%) relative in asymmetry. More discussion on the anticipated experimental uncertainties at the EIC is given in <cit.>. Additionally, for all data sets we take into account PDF errors fully correlated between bins, δσ_ pdf and δ A_ PV, pdf, respectively. We summarize the expected uncertainties for both the EIC, FCC-eh and LHeC in Table <ref> below. In Appendix <ref> we discuss how these systematic uncertainties are incorporated into the error matrix for our analysis. We also give details of our pseudodata generation and describe our statistical procedure for deriving Wilson coefficients bounds in Appendix <ref>. In Figs. <ref> and <ref> we present the aforementioned uncertainty components at the LHeC, FCC-eh and EIC for representative data sets. On the horizontal axis, we order the bins of the indicated data sets. On the vertical axes are the central values of the observables and the uncertainty components that go into the diagonal entries of the error matrix. The central values of cross sections and asymmetries are denoted by the black lines, statistical uncertainties by red, systematics by magenta, and PDF errors by orange. The blue line for the LHeC indicates uncorrelated efficiency errors, whereas the cyan lines for P4 and Δ P4 denote beam polarization errors. The bins are sorted first by low to high Q and then by low to high x. This explains the observed sawtooth behavior. From Figs. <ref> and <ref> we can make the following points. * Statistical uncertainties are smaller for the LHeC and FCC-eh runs for the majority of the bins compared to the EIC. They constitute only a negligible part of the total uncertainties for these machines. Measurements at the LHeC and FCC-eh will be systematics limited. We note that the PDF error constitutes a non-negligible fraction of the error budget, indicating the need to eventually consider a joint fit of PDFs and Wilson coefficients <cit.>. * The statistical uncertainties are the leading error for the unpolarized PV asymmetries at the EIC, and are almost an order of magnitude larger than other sources of errors. For the polarized PV asymmetries, statistical uncertainties are smaller than the PDF errors. * Other uncorrelated uncertainties originating from efficiency errors compete with the systematics and PDF errors at the LHeC. Systematic uncertainties dominate for most of the FCC-eh bins. * Correlated uncertainties at the LHeC account for the largest source of errors for the majority of the bins used in our analysis. At the EIC, for the PV asymmetries, the only correlated uncertainty comes from beam polarization, and it is a small part of the total uncertainty. § SMEFT FIT RESULTS We discuss here our numerical results. For our input parameters we use an electroweak scheme with G_F, α, and M_Z as our inputs. 
The numerical values for the parameters used in our analysis are as follows: G_F = 1.1663787 × 10^-5 ^-2 α^-1 = 137.036 M_Z = 91.1876  . We assume a lepton beam polarization at the EIC of P_ℓ = 80%, and a hadron beam polarization of P_H=70%. The assumed polarizations for the various LHeC and FCC-eh runs are given in Table <ref>. For our UV scale we take Λ = 1. We use NNPDF3.1 NLO PDFs <cit.> for the unpolarized cross sections and NNPDF 1.1 NLO polarized PDFs <cit.>. The 2-loop running strong coupling constant is numerically evaluated according to the renormalization group equation μ_R^2 α̣_̣ṣμ̣_̣Ṛ^̣2̣ = β(α_s) = -(b_0 α_s^2 + b_1 α_s^3) where b_0 = 33-2N_f 12π and b_1 = 153 - 19N_f 24π^2 with the initial condition α_s(M_Z^2) = 0.1185. We set μ_R^2 = Q^2 and take N_f = 5 since we impose the cut Q>10 GeV on our data. §.§ Bounds on semi-leptonic four-fermion operators We begin by activating only the seven semi-leptonic four-fermion operators. Previous studies have shown that the Drell-Yan process at the LHC, the natural channel to probe these operators due to its energy reach and excellent measurement precision, has difficulty probing certain linear combinations of Wilson coefficient in this subspace <cit.>. Future DIS experiments can help resolve these degeneracies <cit.>. Restricting ourselves to this subspace of Wilson coefficients allows us to compare the potential of DIS measurements at the EIC, FCC-eh and LHeC to improve upon Drell-Yan measurements at the LHC. The marginalized 95% confidence level (CL) bounds on the semi-leptonic four-fermion Wilson coefficients projected from the full seven-parameter (7d) fit and the corresponding effective UV scales are presented in Table <ref>. We also present the bounds obtained by activating only single operators for comparison. We consider several different fit scenarios in this table: fits of the separate EIC data sets P4, ΔP4, the combined EIC fit of D4, ΔD4, P4, and ΔP4, the individual LHeC runs, a joint LHeC fit, the individual FCC-eh fits, and a joint FCC-eh fit. We can make the following points from this table. * There are significant differences between the marginalized and non-marginalized bounds in fits to individual data sets. When we activate the entire sector of four-fermion operators we observe strong correlations among Wilson coefficients, leading to degeneracies. However, when we combine the different run scenarios at a given machine the flat directions in the respective individual fits are removed. The effective scales probed in the fully marginalized joint fits range from 500 GeV to 1 TeV at the EIC, from 2.5 to 14 TeV at the LHeC, and from 2.0 to 18 TeV at the FCC-eh, depending on the Wilson coefficient being considered. * The polarized PV asymmetries at the EIC play an important role in the fully marginalized joint fit, even though they typically lead to weaker constraints on individual coefficients. * No single LHeC or FCC-eh data set can provide strong probes of all the four-fermion semi-leptonic Wilson coefficients. This is not surprising since the different runs each utilize distinct lepton helicities and species, and since these seven Wilson coefficients characterize the strength of lepton-quark contact interactions for different helicity states. The full spectra of proposed run scenarios at both the LHeC and FCC-eh are needed to fully explore the allowed parameter space. These possibilities are represented in the figure by the joint LHeC and FCC-eh bounds. 
* The joint LHeC data set imposes significantly stronger bounds on semi-leptonic four-fermion Wilson coefficients than the EIC. This is also not surprising, given its higher momentum transfers where SMEFT-induced deviations are expected to be larger. For the majority of operators the joint FCC-eh fit imposes stronger constraints than the joint LHeC fit. * e^-p collisions with RH electrons (LHeC3, LHeC6 and FCCeh2) provide the optimal configurations to constrain and . * The highest-luminosity e^-p collisions with LH electrons (LHeC2, LHeC5 and FCCeh1) are the optimal configuration to constrain and . * Polarized e^+p collisions (LHeC4) yield the optimal configuration to constrain and . This is an interesting result that shows the physics gain resulting from a positron beam at a future LHeC. It arises from the structure of the underlying matrix elements. We discuss this point in more detail later. * e^+p collisions with unpolarized positrons (LHeC7 and FCCeh3) serve as the optimal configuration to constrain . The effective UV scales presented in Table <ref> are defined as Λ/√(C_k) for each Wilson coefficient C_k. We note that the convergence of the EFT expansion is controlled by the ratio C_k Q^2/Λ^2, where Q denotes the DIS momentum transfer. The effective scale constraints obtained above indicate that this ratio is significantly less than unity for all runs considered. This supports our truncation of the expansion at dimension-6, as well as our linearization of the dimension-6 SMEFT effects. In Fig. <ref>, we present representative confidence ellipses projected from the 7d fit of the four-fermion Wilson coefficients. In order to emphasize the changes in higher-dimensional fits as more Wilson coefficients are activated, we also include ellipses where only two Wilson coefficients are activated at a time. We show the results for the strongest LHeC, FCC-eh and EIC data sets, as well as the joint fits, for the shown pairs of Wilson coefficients. We present zoomed-in ellipses of the joint FCC-eh and LHeC fits for clarity. Flat directions not present in the 2d fits emerge when all several Wilson coefficients are activated, significantly weakening the bounds obtained from individual run scenarios. However, they are ameliorated in the joint fits, and in particular the LHeC and FCC-eh joint fits show very similar constraints in both the marginalized and non-marginalized cases. The joint EIC constraint ellipse remains weaker in the joint fit, although it is much stronger than the individual P4 fit, indicating the need to consider multiple run scenarios at the EIC. Referring to Table <ref> we note that there are three parameters in the simulations that can be varied: luminosity, lepton beam polarization, and lepton species. We further investigate the physics impact of varying these parameters. For simplicity we will focus this study on the LHeC, although the conclusions hold for the FCC-eh as well. We can compare LHeC2 to LHeC5, and LHeC3 to LHeC6, to check the importance of integrated luminosity. We can also compare LHeC3 and LHeC4 to understand the consequences of having different lepton species. In Fig. <ref>, we present plots that compare the impact of increasing the integrated luminosity. Increasing the luminosity (going from LHeC2 to LHeC5 or LHeC3 to LHeC6) only slightly improves the estimated bounds. As shown in Fig. 
<ref> systematic uncertainties dominate over statistical uncertainties at both the LHeC and the FCC-eh, and must be brought under control to facilitate high-luminosity BSM analyses. Fig. <ref> demonstrates that significant improvements occur when we change the lepton species from electrons to positrons. This is despite LHeC4, with positrons, having three times less luminosity than LHeC3 with electrons, as we recall from Table <ref>. The reason is that the y-dependence of the various Wilson coefficient structures in the matrix elements changes when we switch from electrons to positrons. We note that replacing an electron with a positron amounts to the following interchanges in the matrix elements: by , by , and ∓ by . Referring to Eq. (7) in Ref. <cit.>, we see that these replacements remove the (1-y)^2 factors multiplying the Wilson coefficients. These factors reduce the cross sections for the electron case, since on average (1-y)^2 ∼ 1/4. Removing them leads to larger corrections from the SMEFT for the positron-induced cross sections. This result demonstrates the usefulness of positron runs in the future DIS program. Finally, it is known that the LHC has blind spots in the Wilson coefficient parameter space due to the observables measured in the Drell-Yan process, and that measurements of the DIS process can help remove these degeneracies <cit.>. It is possible that the DIS measurements themselves exhibit approximate degeneracies as well. We study that issue here, again focusing on the LHeC for illustrative purposes. Looking at the leading-order matrix elements, we see that there are two kinematic structures, a term proportional to (1-y)^2 and a term without y dependence. We must separately set these to zero, for both the up-quark and down-quark channels. Doing so leads to four conditions on the seven semi-leptonic four-fermion Wilson coefficients. We choose to keep (, , ) as our basis. This leads to the following conditions on the other four Wilson coefficients in order to have all SMEFT-induced corrections to e^-p scattering vanish: = P_ℓ - 1 P_ℓ + 1Q_u - g_+^e g_+^u η̂_γ Z Q_u - g_-^e g_-^u η̂_γ Z ( - ) = P_ℓ + 1 P_ℓ - 1Q_u - g_-^e g_+^u η̂_γ Z Q_u - g_+^e g_-^u η̂_γ Z = P_ℓ - 1 P_ℓ + 1Q_d - g_+^e g_+^d η̂_γ Z Q_d - g_-^e g_-^d η̂_γ Z ( + ) = P_ℓ + 1 P_ℓ - 1Q_d - g_-^e g_+^d η̂_γ Z Q_d - g_+^e g_-^d η̂_γ Z. We note that this also removes SMEFT corrections to e^+p scattering upon taking P_ℓ→ -P_ℓ. Here, Q_u/d is the up/down quark electric charge, g_±^f = g_V^f ± g_A^f, g_V/A^f are the usual SM vector/axial fermion couplings to the Z boson, and the energy-dependent η factor is defined by η̂_γ Z = G_F M_Z^2 2 √(2)παQ^2 Q^2 + M_Z^2. Before presenting results we first discuss several caveats associated with these solutions. First, due to the presence of the energy-dependent η factors, any flat direction can only be approximate. As noted in <cit.> these degeneracies become more apparent at high energies and momentum transfers. Since Q^2 reaches up to 1 TeV at the LHeC, and consequently Q^2 ≫ M_Z^2, we expect them to become important at this experiment. Second, since the solutions above depend on the lepton polarization, a clear path to removing any degeneracy is clear: run the LHeC and FCC-eh with multiple polarization scenarios. This again illustrates the importance of running with multiple run scenarios as outlined in Table <ref>. We also note that this example is a bottom-up construction of a flat direction only, and we make no attempt to connect this to an ultraviolet model. 
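A small numerical illustration of these conditions is given below: it evaluates the energy-dependent factor η̂_γZ and the kinematic prefactor appearing in the first (up-quark) condition for an electron beam of polarization P_ℓ. The value of sin²θ_W is not specified in the text, so the effective value used here is an assumed input, and the sketch only evaluates the prefactor; it makes no attempt to reproduce the specific coefficient combinations quoted below.

import math

GF, ALPHA, MZ = 1.1663787e-5, 1.0/137.036, 91.1876   # G_F (GeV^-2), alpha, M_Z (GeV)
SW2 = 0.2312                                         # assumed effective sin^2(theta_W)

def eta_gZ(Q2):
    """eta_hat_{gamma Z} = G_F M_Z^2 / (2 sqrt(2) pi alpha) * Q^2 / (Q^2 + M_Z^2)."""
    return GF*MZ**2/(2.0*math.sqrt(2.0)*math.pi*ALPHA) * Q2/(Q2 + MZ**2)

def g_pm(T3, Qf, sign):
    """g_{+/-}^f = g_V^f +/- g_A^f with g_V = T3 - 2 Q_f sin^2(theta_W), g_A = T3."""
    return (T3 - 2.0*Qf*SW2) + sign*T3

def up_prefactor(P_lep, Q2):
    """Kinematic prefactor of the first (up-quark, (1-y)^2) condition above."""
    Qu, eh = 2.0/3.0, eta_gZ(Q2)
    gpe, gme = g_pm(-0.5, -1.0, +1), g_pm(-0.5, -1.0, -1)
    gpu, gmu = g_pm(+0.5, 2.0/3.0, +1), g_pm(+0.5, 2.0/3.0, -1)
    return (P_lep - 1.0)/(P_lep + 1.0) * (Qu - gpe*gpu*eh)/(Qu - gme*gmu*eh)

for Q in (30.0, 100.0, 300.0, 1000.0):   # the prefactor flattens out once Q^2 >> M_Z^2
    print(f"Q = {Q:6.1f} GeV: eta = {eta_gZ(Q**2):.3f}, prefactor = {up_prefactor(-0.8, Q**2):.3f}")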
Setting P_ℓ = -80%, we can study the approximate flat directions that appear in the fits of LHeC2, LHeC4, and LHeC5 as a representative example. Letting Q^2/M_Z^2 →∞, Eqs.(<ref>) through (<ref>) give ≈ -13 ( - ) ≡^(1) ≈ -0.052 ≡^(1) ≈ -22 ( + ) ≡^(1) ≈ 0.12 ≡^(1) . We now impose these relations and perform fits in the 3-d parameter space of of , , and . In Fig. <ref>, we present effective UV scales derived from the marginalized 95% CL bounds on , , and . This figure shows that the reaches of the LHeC2, LHeC4, and LHeC5 runs become weak, as expected. The joint LHeC fit can, however, cover this region of parameter space, as can the EIC. This explicitly demonstrates the importance of running future DIS experiments with multiple parameter scenarios. §.§ Bounds on ffV vertex corrections We now activate all 17 Wilson coefficients listed in Table <ref>. These include both four-fermion interactions and operators which shift the ffV vertices. One typically expects corrections to the ffV vertices to be better constrained by precision Z-pole observables. Indeed, fits with only a single Wilson coefficient activated result in extremely strong bounds, reaching 10 TeV in some cases <cit.>. However, due to the limited number of measurements possible there are numerous degeneracies in this parameter space. This is nicely illustrated in <cit.>, where bounds on ffV vertex corrections are loosened by roughly an order of magnitude when switching from single-coefficient fits to results where the other Wilson coefficients are marginalized over. For example, the bound on the effective UV scale associated with the coefficient C_ϕ WB decreases from approximately 15 TeV to 1 TeV when all coefficients are turned on (see <cit.>, Fig. 3). Other possibilities for probing these couplings include top, Higgs and diboson data at the LHC, which are also considered in <cit.>, and on-shell Z-boson production at the LHC <cit.>. We consider here the potential of future DIS experiments to probe this sector of the SMEFT. We present in Table <ref> the marginalized 95% CL bounds on Wilson coefficients coming from the full 17d fit. We show results for joint EIC fit of D4, ΔD4, P4 and ΔP4, as well as the joint LHeC and FCC-eh constraints. In addition we show the results from the 34d fit of Z and W observables and of EW, diboson, Higgs, and top data, adapted from <cit.>. To convert the results of <cit.> to our notation we take the individual non-marginalized 95% CL bounds, symmetrize them, form pairwise covariance matrices using the given correlations, and then re-derive marginalized 2-parameter fits at 95% CL. The correlation matrix for our joint LHeC fit is shown in Fig. <ref>, and the joint FCC-eh fit is shown in Fig. <ref>. We caution that because of the different numbers of parameters fitted in <cit.>, this is not quite an apples-to-apples comparison between the two fits. * The LHeC bounds are stronger than those from the joint fit of electroweak precision data and LHC results for the majority of Wilson coefficients, indicating that it would add constraining power to the global fit. The FCC-eh are stronger than both the LHeC and EIC in most cases. * From the correlation matrix we observe that there is only weak correlation between the vertex corrections and the four-fermion operators in the joint LHeC and FCC-eh fits. * The bounds from the EIC reach 500 GeV for the effective scale at most, and are weaker than those obtained from the LHeC and in <cit.>. 
To study this further and to see what including the future precision DIS data in the existing global fit may lead to, we consider several representative 2-d projections of our results. In Figs. <ref> and <ref>, we present non-marginalized 95% CL ellipses in the parameter subspace spanned by (C_φ D, C_φ q^(3)), (C_φ u, C_φ e), (C_φ WB, C_φℓ^(1)), and (C_φ WB, C_φ u). We consider the joint fits from each DIS experiment, as well as the EWPO fits adapted from <cit.>. We can make the following points from these representative 2d projections. * The potential LHeC probes are in most cases stronger than those of the joint electroweak and LHC fit, and the FCC-eh bounds are stronger still. In particular the joint electroweak and LHC fit exhibits strong correlations between parameters that results in elongated ellipses in several of the 2d projections that we consider, as illustrated by the pairs (C_φ D, C_φ q^(3)) and (C_φ WB, C_φℓ^(1)). The combinations of future LHeC and FCC-eh runs do not show these correlations, and can remove these approximate degeneracies in the joint electroweak and LHC fit. * The EIC probes are far weaker than those obtained from the other fits, and do not contribute significantly to probing the ffV parameter space. § CONCLUSIONS In this work we have studied the BSM potential of the LHeC, FCC-eh and EIC within the SMEFT framework. Following previous studies in the literature the observables considered are the NC DIS cross section at the LHeC and FCC-eh, and parity-violating asymmetries at the EIC. We considered the full spectrum of SMEFT operators that can shift the DIS cross section, including both semi-leptonic four-fermion operators and ffV vertex corrections. This leads us to a 17-dimensional Wilson coefficient parameter space. We considered numerous experimental configurations for these machines, and various energy, polarization and lepton species in order to determine their impact on probes of SMEFT. We have found that the EIC can probe UV scales up to 3 TeV. This increases to 13 TeV for individual LHeC runs, 14 TeV with the joint LHeC run, and as high as 18 TeV in the joint FCC-eh fit. No single run scenario at any experiment is ideal for probing the full SMEFT parameter space, and for the purpose of BSM studies it will be important to vary polarization and lepton species. Most importantly, we have found that future precision DIS measurements can lift degeneracies present in the precision electroweak fit to Z-pole observables. Constraints from the LHeC and FCC-eh are estimated to be in most cases stronger than those coming from combined fits of Z-pole and LHC data. Our results further demonstrate the BSM potential of future DIS studies. Acknowledgments: We thank D. Britzger for suggesting to include an analysis of the FCC-eh capabilities. C. B. and R. B. are supported by the DOE contract DE-AC02-06CH11357. K. S. is supported by the DOE grant DE-FG02-91ER40684. This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology. § DETAILS OF THE EXPERIMENTAL ERROR MATRIX AND PSEUDODATA GENERATION §.§ Construction of the error matrix We discuss here the structure of the error matrix. 
The experimental error matrix is defined by E_ exp, bb' = (δ Q_ unc, b⊕δ Q_ cor, b)^2, b = b' ρ_bb' δ Q_ cor, b δ Q_ cor, b', b ≠ b' with b,b' = (N_ bin), where b and b' are the bin indices, N_ bin is the number of bins, Q = σ_ NC or (Δ)A_ PV is the observable, and δ Q_ unc, b and δ Q_ cor, b are the uncorrelated and correlated errors summed in quadrature for the b^ th bin. We define δ Q_1 ⊕δ Q_2 ⊕⋯ = √(δ Q_1^2 + δ Q_2^2 + ⋯) as a shorthand notation. For the correlated errors we assume full correlation between bins: ρ_bb' = 1. For the LHeC and FCC-eh data sets, we have δσ_ unc, b = δσ_ stat, b⊕δσ_ ueff, b , δσ_ cor, b = δσ_ sys, b , with δσ_ sys, b = δσ_ len, b⊕δσ_ lpol, b⊕δσ_ hen, b⊕δσ_ rad, b⊕δσ_ gam, b⊕δσ_ geff, b⊕δσ_ lum, b . The meaning of each individual systematic error in Eq. (<ref>) was discussed in Section <ref>. For the EIC data sets we have δ (Δ) A_ PV, unc, b = δ (Δ) A_ PV, stat, b⊕δ (Δ) A_ PV, sys, b δ (Δ) A_ PV, cor, b = δ (Δ) A_ PV, pol, b In addition to the experimental errors we must consider the PDF uncertainties. Potential uncertainties from uncalculated higher-order QCD corrections, typically estimated by varying renormalization and factorization scales, are smaller than the other sources of uncertainty and are neglected in our analysis. The PDF error matrix is defined by E_ pdf, bb' = 1 N_ pdf∑_m = 1^N_ pdf (Q_m, b - Q_0, b) (Q_m, b' - Q_0, b') where N_ pdf is the number of PDF members and Q_0(m), b is the SM prediction for the observable Q at the b^ th bin, evaluated with the central (m^ th) member of the relevant PDF set. The total error matrix is given by E = E_ exp + E_ pdf In our analysis, we also consider joint fits of various data sets. We assume that the PDF errors and all systematic uncertainties, except the photoproduction background, are correlated among runs. The joint error matrix is given by the individual error matrices of the runs on the block-diagonal entries, with error matrices of correlated uncertainties in the off-block diagonal entries given by J_nn' = J_ exp,nn' + J_ pdf,nn'. Here, n, n' are the run indices and J_ exp, nn', bb' = ρ_nn', bb' δ Q_ cor, n, b δ Q_ cor, n', b' J_ pdf, nn', bb' = 1 N_ pdf∑_m=1^N_ pdf (Q_n,m,b - Q_n,0,b) (σ_ NC, n',m,b' - Q_ n',0,b'). The index, b=1,…,N_ bin,n, N_ bin, n denotes the number of bins of the n^ th data set. δσ_ cor, n, b is given by Eq.(<ref>) after removing the photoproduction background error, δσ_ gam, b. Q_, n, 0(m), b is the observable evaluated with the central (m^ th) member of the PDF set in the b^ th bin of the n^ th run. The joint error matrix takes the form E = E_1 J_12 ⋯ J_17 E_2 ⋯ J_27 ⋱ ⋮ E_7 _ sym where E_n is the error matrix of the n^ th set given by Eq. (<ref>). §.§ Generation of the pseudodata Following the procedure of <cit.> we simulate numerous realizations of each LHeC, FCC-eh and EIC run, which we denote as pseudoexperiments. For each pseudoexperiment we define a χ^2 test function by χ_e^2 = ∑_b,b' = 1^ N_ bin (Q_b - Q_e,b) Ê^-1_bb' (Q_b' - Q_e,b') where Q_b is the SMEFT expression and Q_e, b is the simulated value for the observable Q in the b^ th bin. Here, Ê^-1 indicates the symmetrized inverse error matrix, Ê^-1 = 1/2 [E^-1 + (E^-1)^ T] For a given observable Q we define pseudoexperimental values according to Q_e,b = Q_b^ SM + r_e,b δ Q_ unc, b + ∑_j r_j,e' δ Q_ cor_j, b where Q_b^ SM is the SM prediction for the observable Q. r_e,b and r'_j,e are random variables picked from the unit normal distribution, namely r_e,b, r'_j,e∼𝒩(0, 1). 
δ Q_ unc,b is the total uncorrelated uncertainty, and δ Q_ cor_j, b is the j^ th correlated uncertainty. Note that each correlated error is introduced with a single random variable for each pseudoexperiment. §.§ Statistical treatment for Wilson coefficient bounds The χ^2 function for the joint LHeC and FCC-eh runs, which have 206 and 120 bins respectively, for a single pseudoexperiment has the form χ_e^2 (r⃗_e, r⃗'_e) = χ^2_ SM, e (r⃗_e, r⃗'_e) + ω⃗_e (r⃗_e, r⃗'_e) ·C⃗ + C⃗· M C⃗ where r⃗_e and r⃗'_e stand for all the random variables involved. The best-fit values of the Wilson coefficients, C̅⃗̅_e, are given by minimizing the χ^2_e function for each pseudoexperiment via .χ_e^2 C⃗|_C⃗ = C̅⃗̅_e = 0. The inverse covariance matrix of the fit is obtained from the second derivatives of the χ^2_e function as V^-1 = 1/2. ^2 χ_e^2 C⃗ C⃗|_C⃗ = C̅⃗̅_e = M. This is constant for all the pseudoexperiments, hence we have dropped the subscript e. The average of the best-fit values of Wilson coefficients across pseudoexperiments, C̅⃗̅, is given by C̅⃗̅ = ∑_e = 1^N_ exp V^-1^-1∑_e=1^N_ exp V^-1C̅⃗̅_̅e̅ = 1 N_ exp∑_e=1^N_ expC̅⃗̅_̅e̅ where N_ exp is the number of pseudoexperiments. Since the inverse covariance matrix is constant for all pseudoexperiments, we manage to avoid running a large number of pseudoexperiments, which saves a great deal of computational expense. We justify this approach by noting that the distribution of the best-fit values of each Wilson coefficient exhibits a Gaussian distribution around zero. We know that the average of the best-fit values of each Wilson coefficient is expected to be zero, which is in fact the case for large N_ exp. The marginalized bound for the Wilson coefficient C_k is [-Δ C_k, Δ C_k] where Δ C_k = √(Δχ^2(d,c) V^-1_kk). The confidence ellipse in the parameter subspace spanned by Wilson coefficients C_k and C_k' is described by C_k C_k' V^-1_kk'C_k C_k' = Δχ^2(d,c) where Δχ^2(d,c) is the quantile of the χ^2 distribution for d fitted parameters at confidence level c. Here, V^-1_kk is the inverse of the kk entry of the covariance matrix, and V^-1_kk' is the inverse of the covariance matrix after removing all the rows (columns) other than the kth (k'th) ones.
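The sketch below strings the appendix together on a toy example: it builds an error matrix with one fully correlated systematic, draws a single pseudoexperiment, minimizes the linear χ² analytically, and converts the resulting covariance matrix into marginalized bounds. All numbers are placeholders, the PDF piece of the error matrix is omitted for brevity, and the bound prescription follows our reading of the Δχ²(d,c) formula above.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Toy inputs -- all placeholders, not the pseudodata used in the analysis.
n_bins, n_coef = 40, 3
O_sm  = rng.uniform(0.5, 1.5, n_bins)               # SM predictions per bin
dO    = rng.normal(0.0, 0.05, (n_coef, n_bins))     # linear SMEFT responses per bin
d_unc = 0.02 * O_sm                                 # uncorrelated errors
d_cor = 0.01 * O_sm                                 # one fully correlated systematic

# Error matrix: diagonal uncorrelated part plus a rho = 1 correlated block.
E = np.diag(d_unc**2) + np.outer(d_cor, d_cor)
Einv = np.linalg.inv(E)
Einv = 0.5 * (Einv + Einv.T)                        # symmetrized inverse, as above

# One pseudoexperiment: O_e = O_SM + r*d_unc + r'*d_cor with unit-normal r, r'.
O_e = O_sm + rng.standard_normal(n_bins)*d_unc + rng.standard_normal()*d_cor

# Linear chi^2 minimization and the covariance of the Wilson coefficients.
M = dO @ Einv @ dO.T                                # inverse covariance matrix of the fit
C_best = np.linalg.solve(M, dO @ Einv @ (O_e - O_sm))
V = np.linalg.inv(M)

# Marginalized 95% CL half-widths, Delta C_k = sqrt(Delta_chi2(d, c) * V_kk).
half = np.sqrt(chi2.ppf(0.95, df=n_coef) * np.diag(V))
for k in range(n_coef):
    print(f"C_{k}: best fit {C_best[k]:+.3f}, 95% CL half-width {half[k]:.3f}")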
http://arxiv.org/abs/2306.07430v2
20230612211659
The Progenitors of Superluminous Type Ia Supernovae
[ "Margot Fitz Axen", "Peter Nugent" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.CO", "astro-ph.HE" ]
Margot Fitz Axen ([email protected]; ORCID 0000-0001-7220-5193), Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, Texas 78712-1205, USA. Peter Nugent (ORCID 0000-0002-3389-0586), Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Department of Astronomy, University of California, Berkeley, CA, 94720, USA. Recent observations of type Ia supernovae (SNe Ia) have discovered a subclass of `super-Chandrasekhar' SNe Ia (SC SNe Ia) whose high luminosities and low ejecta velocities suggest that they originate from the explosions of white dwarfs (WDs) with masses that exceed the Chandrasekhar mass limit. Different models have been proposed to explain the progenitors of these explosions, including a `magnetized WD' model and a `WD merger' model. To test the robustness of these models, we conduct a 1D numerical parameter survey of WD explosions using these models as initial conditions. We follow the explosions using the hydrodynamics code Castro and then use the radiation transport code SuperNu to create light curves and spectra for the models. We find that while both classes of models fall within the range of SC SNe Ia observations on the light curve width-luminosity relation, only the WD merger models reproduce the observed low ejecta velocities. The light curves of our merger models are more similar photometrically to observations than our magnetized models. Given this, we discuss possible explanations for the brightest SC SNe Ia observations that cannot be reproduced with our WD merger models. This study provides the basis for future SC SNe Ia observations and higher-dimensional numerical models. § INTRODUCTION Type Ia supernovae (SNe Ia) are used as cosmological `standard candles' due to the homogeneity of their light curves, which follow a characteristic relationship between their peak luminosity and width <cit.>. Observations of high-redshift SNe Ia proved the accelerating expansion of the universe and were used to make the first measurements of the cosmological constant <cit.>. However, in recent decades, many subtypes of SNe Ia have been discovered that do not follow the standard width-luminosity relation and must be excluded from cosmological surveys. These include 1991bg-like SNe <cit.>, 2002cx-like SNe <cit.>, 1991T-like SNe <cit.>, and 2003fg-like SNe <cit.>. These SNe subtypes differ spectroscopically and photometrically from Branch-normal SNe Ia <cit.>. One of the rarest subtypes is 2003fg-like SNe, which are often dubbed "super-Chandrasekhar" SNe Ia (hereafter SC SNe Ia). They have slowly declining light curves (Δ m_15 (B) < 1.3 mag) and are exceptionally luminous, with peak absolute B-band magnitudes of -21 < M_B < -19 mag <cit.>. Spectroscopically, they exhibit strong CII lines a few days after explosion and weaker SiII and FeIII lines than normal SNe Ia <cit.>. Their linewidths suggest unusually low ejecta velocities <cit.>. Modeling these SNe analogously to normal SNe suggests that many of them originate from WDs that exceed the Chandrasekhar mass (M_ch). All SNe Ia originate from the explosion of an unstable carbon-oxygen (CO) white dwarf (WD) accreting mass in a binary system <cit.>, but the exact nature of their progenitor remains unknown. They may exist in a `single degenerate' (SD) system in which their companion star is a normal star, or a `double degenerate' (DD) system in which their companion star is another WD <cit.>.
Both of these progenitor classes may be able to form super-Chandrasekhar mass systems that would explain the 2003fg-like observations. In the single-degenerate case, the WD properties must enable it to remain stable at masses greater than M_ch. This may be possible if the WD is highly magnetized, rotating, or both, as we describe below. Observations from the Sloan Digital Sky Survey (SDSS) shows that ≈ 10 % of WDs are magnetized, with surface magnetic fields in the range 10^4-10^9 G, <cit.> and suggests that their masses are higher than their non-magnetized counterparts <cit.>. The internal magnetic fields of WDs are not known, but they are expected to be higher than their surface magnetic fields, and may reach ≈ 10^14 G in their center <cit.>. With these magnetic fields, the WD may reach a maximum mass of ≈ 2.0 solar masses or greater <cit.>. Additionally, rotation due to rapid accretion can provide a similar opportunity for WDs to remain stable at masses greater than M_ch <cit.>. While these WDs have been proposed as potential SC SNe Ia progenitors, they have not been studied in numerical SNe Ia simulations. In the double degenerate case, a system of merging WDs that has a total mass greater than M_ch may also produce SC SNe Ia. The system may explode violently soon after contact <cit.> or quiescently after the secondary has been completely disrupted <cit.>. Studies have shown that these WD merger systems may be the progenitors of normal and subluminous SNe Ia; however, they have not been shown to be potential progenitors of superluminous SNe Ia to date. This paper presents a survey of super-Chandrasekhar mass explosion models and explores their observational relevance to 2003fg-like SNe. We explore models of both highly-magnetized WD models and WD merger models using calculations and simulation results from previous papers. In Section <ref> we describe our WD models and numerical methods. In Section <ref> we describe the results of our simulations, focusing on the differences between the different models in both the hydrodynamic properties of the explosions and the observable spectroscopic and photometric output. We discuss our results and summarize our conclusions in Section <ref>. § NUMERICAL METHODS We model the explosion in several stages. First, the initial model is built by starting with a WD progenitor model of either a WD merger system or highly magnetized WD. After the initial model is constructed, a detonation is initiated in the center of the star. The resulting thermonuclear hydrodynamics are then evolved in 1D. Each model is evolved until the ejecta reaches homologous expansion, after which we perform radiative transport calculations to produce light curves and spectra for each model. §.§ Hydrodynamics and Nuclear Processes The compressible Eulerian hydrodynamics code Castro is used to follow the progression of the explosions <cit.>. We enable Castro's built in monopole gravity and the Helmholtz equation of state <cit.>. A 13 isotope α-chain nuclear network is used to monitor the nuclear reactions <cit.>. The network includes (α, γ) and (γ, α) reactions for 13 isotopes: ^4He, ^12C, ^16O, ^20Ne, ^24Mg, ^28Si, ^32S, ^36Ar, ^40Ca, ^44Ti, ^48Cr, ^52Fe, and ^56Ni. Additionally, we perform a calculation with a 21 isotope reaction network which includes all of the isotopes in the 13 reaction network as well as ^1H, ^3He, ^14N, ^56Cr, ^54Fe, and ^56Fe. 
For this test, we use one of the lowest density merger models, MG051018, as the lower density models produce fewer heavy elements and are more likely to be impacted by the choice of reaction network. In this example, the larger network did not yield significantly different results, and it was determined that the 13 isotope network was sufficient for the scope of this study. Castro allows for adaptive mesh refinement (AMR), which enables the increase of resolution in areas of interest in the simulation. However, we find that performance in the code is best on GPUs if running without AMR and instead starting with a high number of grid cells. For all of our models, we use ≈ 10^5 grid cells over a range of 10^5 km, or about 1 km/cell. We perform a resolution test to verify that this resolution is sufficient using the MAG1010 magnetized model, and find that increasing the number of grid points does not impact the results of the calculation. §.§ WD Models The initial models were created using a different process for the WD mergers and the magnetized WDs. For the WD merger models, we construct an isothermal WD as the primary accretor and then add mass around it to model the accretion of the secondary WD. For the magnetized WD models, we construct a single WD with a modified EOS to account for the magnetic field. For both models, the composition of the WD(s) is 50 % carbon and 50 % oxygen. §.§.§ Initial Models: WD Merger Models To construct a 1D model for a WD merger we follow a two-step process to compute the structure of the primary star and then the secondary star being accreted around it. We choose as input the central density ρ_c of the primary star and integrate the equations of stellar structure from the center to the surface of the star, which we take to be ρ_s=10^-4 g/cm^3. We use a semi-relativistic equation of state which interpolates between the non-relativistic pressure P_deg, nr and relativistic pressure P_deg, r as <cit.> P_ deg^-2 = P_deg, nr^-2 + P_deg, r^-2. Here the two limits are P_deg, nr = K_ nr ρ^5/3 and P_deg, r = K_r ρ^4/3, where K_ nr and K_r are polytropic constants in the non-relativistic and relativistic limits respectively. For the structure of the star, we integrate the Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium <cit.>: dρ/dr = -G[ρ+P/c^2][m(r)+4π r^3P/c^2] / ([r^2-2Gm(r)r/c^2] (dP/dρ)), together with the mass continuity equation: dm/dr = 4π r^2 ρ. This integration gives a relationship between the mass and the radius of the primary that is approximately equal to that derived by Chandrasekhar <cit.>: R_P ≈ 9 × 10^3 (M_P/M_⊙)^-1/3 √(1-(M_P/M_ch)^4/3) km, although with negligible differences due to our choice of ρ_s=10^-4 g/cm^3. Once we have constructed the models for the primaries, we construct the model for the secondary WD by assuming that the merging disrupts the outer layers of the primary and creates an outer medium at approximately constant density ρ_ add. We remove the outer profile of the primary where ρ<ρ_ add and instead add mass onto the primary at ρ=ρ_ add until the total mass of the configuration is equal to M_ tot=M_P+M_SD, where M_SD is the mass of the secondary star. We keep a constant pressure equal to the pressure of the primary at ρ=ρ_ add throughout M_SD. Note that because ρ_ add is orders of magnitude lower than ρ_c, this process removes a negligible amount of mass (< 1 %) from the primary. The resulting mass-radius profile for one of our models is shown in Figure <ref> by the solid blue line.
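A minimal sketch of this primary-star construction is given below, assuming the standard cold electron-gas polytropic constants for μ_e = 2 and an illustrative central density; the actual models are evolved with the Helmholtz EOS inside Castro, so this is only meant to show the integration of the interpolated EOS with the TOV and mass continuity equations.

import numpy as np
from scipy.integrate import solve_ivp

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33                 # cgs
K_nr = 1.0036e13 / 2**(5/3)                               # cold electron gas, mu_e = 2 (assumed)
K_r  = 1.2435e15 / 2**(4/3)

def P_of_rho(rho):
    """Semi-relativistic interpolation P_deg^-2 = P_nr^-2 + P_r^-2."""
    rho = max(rho, 1e-6)                                  # guard against trial-step overshoot
    Pnr, Pr = K_nr*rho**(5/3), K_r*rho**(4/3)
    return (Pnr**-2 + Pr**-2)**-0.5

def dPdrho(rho):
    rho = max(rho, 1e-6)
    Pnr, Pr = K_nr*rho**(5/3), K_r*rho**(4/3)
    P = (Pnr**-2 + Pr**-2)**-0.5
    return P**3 * (Pnr**-3*(5/3)*K_nr*rho**(2/3) + Pr**-3*(4/3)*K_r*rho**(1/3))

def rhs(r, y):
    """TOV equation for drho/dr together with the mass continuity equation dm/dr."""
    rho, m = max(y[0], 1e-6), y[1]
    P = P_of_rho(rho)
    drho = -G*(rho + P/c**2)*(m + 4*np.pi*r**3*P/c**2) / ((r**2 - 2*G*m*r/c**2)*dPdrho(rho))
    return [drho, 4*np.pi*r**2*rho]

def build_primary(rho_c, rho_s=1e-4):
    r0 = 1e3                                              # small starting radius in cm
    y0 = [rho_c, 4.0/3.0*np.pi*r0**3*rho_c]
    surface = lambda r, y: y[0] - rho_s                   # stop once rho drops to rho_s
    surface.terminal = True
    sol = solve_ivp(rhs, (r0, 1e10), y0, events=surface, rtol=1e-8, max_step=1e6)
    return sol.t[-1], sol.y[1, -1]                        # radius (cm), mass (g)

R, M = build_primary(rho_c=2e7)                           # illustrative central density (g/cm^3)
print(f"M = {M/Msun:.3f} Msun, R = {R/1e5:.0f} km")
print("Chandrasekhar-like estimate:", 9e3*(M/Msun)**(-1/3)*np.sqrt(1.0 - (M/(1.4*Msun))**(4/3)), "km")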
This parameter survey varies the central density of the primary ρ_c (effectively varying the primary mass), the total mass of the configuration M_ tot and the density of the added secondary material ρ_ add. Varying the density of the accreted material is intended to explore the difference between very concentrated accreted material vs more diffuse. We only consider models in which M_P and M_SD are both in the range of 0.8-1.2 M_, at which both stars are expected to be CO WDs <cit.>. Additionally, we have the constraints M_ SD<M_P and M_ tot > M_ ch. Our full parameter space of merger models is shown in Table <ref>. §.§.§ Initial Models: Highly Magnetized WDs To construct the models for the magnetized WDs, we follow the model in <cit.>. The addition of a magnetic field modifies the EOS of the star. We use the magnetic field model used in <cit.> which has been used extensively to model magnetized neutron stars and WDs <cit.>: Bρ/ρ_0=B_s+B_01- exp-ηρ/ρ_0^γ. Here B_s is the surface magnetic field, B_0 is a fiducial magnetic field, and η and γ are dimensionless parameters that determine how the magnetic field changes from the core to the surface. For our calculations here, we set ρ_0=10^9 g cm^-3, η=0.8, and γ=0.9 for all calculations, following <cit.>. We set the surface magnetic field to B_s=10^7 G as this parameter has negligible effect on the profile at high WD masses <cit.>. The profile in Equation <ref> indicates the magnitude of the magnetic field at various density points throughout the star and hence radial coordinates. Similarly to the method we use to construct the primaries for the WD merger models, we integrate outward from the center of the star at ρ=ρ_c until ρ=ρ_s=10^-4 g cm^-3. We integrate Equations <ref> and <ref> with a modified EOS where P = P_ deg+P_B and ρ = ρ_ mat+ρ_B. Here P_B=B^2/(8 π) is the magnetic pressure and ρ_B = B^2/(8π c^2) is the magnetic density at the appropriate position in the star, with the magnetic field determined using Equation <ref>. Figure <ref> shows the resulting mass-radius relation for several different values of B_0 for comparison to the Chandrasekhar result. Figure <ref> shows the resulting mass-radius profile for one of our magnetized models (green line) along with an unmagnetized WD of the same central density for comparison. This parameter survey varies the central density ρ_c and the central magnetic field B_0. We explore several values for the central density using B_0= 10^14 G. At higher values for the central magnetic field, the WD is expected to be non-spherical, making it insufficient to be tested with our 1D models <cit.>. We also test a couple models at a fixed central density for lower values of B_0, at which the WD EOS is still significantly different than for the Chandrasekhar result. Our only constraint for our models is that M_*>M_ch. Our full parameter space of magnetic models tested is shown in Table <ref>. The stars in Figure <ref> are models that we test. §.§ Hydrodynamics and Radiation Transport After an initial model is constructed it is imported into Castro where the hydrodynamics calculations are performed. An ignition is triggered in the center of the star by heating a grid cell. Once nucleosynthesis is initiated, all artificial heating is turned off and the hydrodynamics are evolved through homology. After the SN ejecta reaches homologous expansion we use the SuperNu code <cit.> to create synthetic light curves and spectra for each model. 
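For illustration, the sketch below evaluates the density-parameterized field profile and compares the resulting magnetic pressure and density to the degeneracy pressure, using the same assumed cold electron-gas constants as in the previous sketch; it is not the modified EOS used in the actual models, which is handled inside the hydrodynamics code.

import numpy as np

rho0, eta, gam = 1.0e9, 0.8, 0.9          # profile parameters quoted in the text
Bs, B0 = 1.0e7, 1.0e14                    # surface and central field strengths in Gauss
c = 2.998e10                              # cm/s
K_nr, K_r = 1.0036e13/2**(5/3), 1.2435e15/2**(4/3)   # same assumed EOS constants as above

def B_of_rho(rho):
    """B(rho) = B_s + B_0 * [1 - exp(-eta*(rho/rho0)^gamma)]."""
    return Bs + B0*(1.0 - np.exp(-eta*(rho/rho0)**gam))

def P_deg(rho):
    Pnr, Pr = K_nr*rho**(5/3), K_r*rho**(4/3)
    return (Pnr**-2 + Pr**-2)**-0.5

for rho in (1e5, 1e7, 1e9, 1e10):         # g/cm^3
    B = B_of_rho(rho)
    P_B, rho_B = B**2/(8.0*np.pi), B**2/(8.0*np.pi*c**2)
    print(f"rho = {rho:.0e}: B = {B:.2e} G, P_B/P_deg = {P_B/P_deg(rho):.2e}, rho_B/rho = {rho_B/rho:.2e}")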
SuperNu is a multi-dimensional time dependant radiation transport code that uses Monte Carlo methods to propagate photons. The calculations are performed under the assumption of local thermodynamic equilibrium (LTE) to determine ionization and excitation fractions in the ejecta. Energy is generated through the radioactive decay chain: ^56Ni →^56Co →^56Fe. § RESULTS §.§ Nucleosynthetic Yields and Kinetic Energy Figure <ref> shows the ejecta composition for the MG101018 merger model (right) and the MAG5010 magnetized model (left) shown in Figure <ref> when the burning has reached homologous expansion to highlight the general properties of our models. The magnetized model is composed almost entirely of ^56 Ni (yellow line), except at the outer parts of the star. In contrast, though it has a higher total mass than the magnetized model, the merger model has a lower central density and only burns part of the primary to ^56 Ni. The secondary material is partially burned to intermediate mass elements, but much of it remains as unburned ^12 C and ^16 O. Figure <ref> shows the elemental composition of these two models by the royal blue diamonds (merger model) and lime-green triangles (magnetized model). The magnetized model has a higher fraction of ^56 Ni and the higher mass elements ^44 Ti, ^48 Cr, and ^52 Fe, while the merger model has a higher fraction of all of the lower mass elements. A natural conclusion of the secondary material of the merger model burning incompletely is that the mass and density configuration of the secondary have no impact on the amount of ^56 Ni or higher mass elements. Therefore, all of our merger models with the same primary produce the same amount of ^56 Ni. The MG05yyzz models make 0.68 M_, while the MG10yyzz models make 0.92 M_ of ^56 Ni. This is further highlighted in Figure <ref>, which also shows the MG105018, MG100518, MG101022, and MG051018 models. The merger models have different amounts of intermediate mass elements, but the four merger models with the same primary have the same amount of ^56 Ni and high mass elements. Therefore, the three with the same secondary mass (models MG10yy18) have the same mass fraction of ^56 Ni. Figure <ref> shows the kinetic energy as a function of time for the MG10yy18 merger models and the MAG5010 magnetized model. For all models, the total energy is initially dominated by the binding energy of the WD, which is converted to kinetic energy as the WD ejecta is blown outward. However, the magnetized model kinetic energy increases smoothly, while the merger models are stalled by the dense material on the outside. As a result, the merger models reach lower kinetic energies than the magnetized model (even though they have a higher total mass) and take longer to reach homologous expansion. The kinetic energy profiles also differ between the different density configurations for the merger models. There is initially an inverse correlation between the density of added material and the kinetic energy of the ejecta, because a higher density secondary does not allow the ejecta to expand as quickly. However, as shown in Figure <ref>, this allows the primary to burn its ^12 C and ^16 O to higher amounts of ^28 Si - ^40 Ca because it is done at higher densities. The total nuclear energy of a SNe Ia can be approximated from its elemental composition as E_n ≈ 1.55 (M_ Ni/M_) + 1.18 (M_ IME/M_) <cit.>. 
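As a quick numerical sketch of this estimate (and of the ejecta-velocity relation used in the next paragraph), the snippet below converts illustrative yields into a nuclear energy and a characteristic ejecta velocity. We assume the fit above returns energies in units of 10^51 erg; the IME mass, binding energy and total mass used here are placeholders rather than model outputs.

import numpy as np

Msun, E51 = 1.989e33, 1.0e51          # g, erg; E_n below is assumed to be in units of 1e51 erg

def E_nuc(M_ni, M_ime):
    """Approximate nuclear energy release, E_n ~ 1.55*M_Ni + 1.18*M_IME (masses in Msun)."""
    return 1.55*M_ni + 1.18*M_ime

def v_ejecta(E_n, E_bind, M_tot):
    """Characteristic ejecta velocity v = sqrt(2*E_KE/M_tot) with E_KE ~ E_n - E_b."""
    E_ke = (E_n - E_bind)*E51
    return np.sqrt(2.0*E_ke/(M_tot*Msun))          # cm/s

# The 0.92 Msun of 56Ni matches the MG10yyzz primaries quoted above; the IME mass,
# binding energy (in 1e51 erg) and total mass are placeholders.
E_n = E_nuc(M_ni=0.92, M_ime=0.3)
v = v_ejecta(E_n, E_bind=0.5, M_tot=2.0)
print(f"E_n ~ {E_n:.2f} x 1e51 erg, v_ej ~ {v/1.0e5:.0f} km/s")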
The kinetic energy of the ejecta is E_KE≈ E_n-E_b, where E_b is the binding energy; and the three different models have the same binding energy because they have the same primary. Therefore, the extra nuclear energy of the higher density configurations results in a higher kinetic energy at homologous expansion. Figure <ref> shows the ejecta velocity at homologous expansion as a function of total model mass of all of the models run, v_KE=√(2 E_KE/M_ tot), along with the Si II velocities of three SC SNeIa at maximum light overplotted as shaded regions <cit.>. We show the ejecta velocity because it is strongly correlated with the linewidth velocities often inferred from observations. For the magnetized models, there is a positive correlation between the total model mass and both E_n and E_b; therefore, the models are all roughly the same kinetic energy at homologous expansion and the higher mass models have a lower ejecta velocity. This trend follows similarly for merger models with the same primary. However, given the same total mass for merger models, the velocity of the ejecta is higher for a higher primary mass, because the nuclear energy is higher. Overall, the merger models better match the low Si velocities inferred from the spectra of SC SNe Ia, which are often < 10000 km s^-1 at maximum light <cit.>. §.§ Light Curves We now turn to the photometric properties of our models computed by SuperNu. Throughout the next section we primarily show magnetized model MAG5010 as our fiducial magnetized model and model MG101018 as our fiducial merger model, along with four other merger model variations. We show 1) model MG051018, to highlight the differences between two models differing only in primary mass, 2) model MG101022 to highlight the differences between two models differing only in secondary mass, and finally 3) models MG100518 and MG105018 to highlight the differences between two models differing between the three different density profiles. We discuss these models to demonstrate the properties of our model light curves; however, their properties are generally applicable to our other models as well. To obtain the K-corrected observational data as well as pseudo bolometric lightcurves for all of the SC SNe Ia, we generated a template following the procedure in <cit.>. This template was constructed using the SNFactory data from SN 2012dn <cit.>. [The Super-Chandrasekhar spectroscopic template is available https://c3.lbl.gov/nugent/nugent_templates.htmlhere.] Figure <ref> shows the bolometric light curves for these six models, along with pseudo-bolometric light curves for three SC SNe Ia (generated via the template as described above.) The magnetized models such as MAG5010 all have very similar light curves, with peak absolute bolometric magnitudes between -19.5 and -20 mag. Their light curves dim quickly after maximum light, declining by  0.7-0.8 mag 15 days post maximum. The merger models show more variation. The peak brightness of their light curves is primarily a function of the amount of ^56 Ni synthesized in the explosion <cit.>, and therefore their primary mass. The MG10yyzz models such as MG101018 fall between -19 and -19.3 mag, and the MG05yyzz models such as MG051018 all fall between -18.8 and -19 mag. Their light curve shapes, in contrast, are primarily a function of the secondary mass; models with a higher mass secondary have a broader light curve. For example, models MG101018 and MG101022 shown in Figure <ref> have Δ m15 values of 0.45 and 0.22 respectively. 
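A minimal sketch of how a decline-rate parameter such as Δm15 can be read off a sampled light curve follows. The interpolation scheme and the toy light-curve shape are our own illustrative choices, not the procedure applied to the SuperNu output above.

```python
import numpy as np

def delta_m15(t_days, mags):
    """Return (peak magnitude, decline over 15 days) from a sampled light curve.

    t_days, mags: 1-D arrays; brighter magnitudes are more negative.
    Uses simple linear interpolation onto a fine time grid around peak.
    """
    t_fine = np.linspace(t_days.min(), t_days.max(), 2000)
    m_fine = np.interp(t_fine, t_days, mags)
    i_peak = int(np.argmin(m_fine))              # brightest point
    t_peak, m_peak = t_fine[i_peak], m_fine[i_peak]
    m_15 = np.interp(t_peak + 15.0, t_fine, m_fine)
    return m_peak, m_15 - m_peak

# Toy light curve (purely illustrative parabolic shape):
t = np.linspace(-10.0, 40.0, 26)
m = -19.2 + 0.002 * t ** 2
peak, dm15 = delta_m15(t, m)
print(f"M_peak = {peak:.2f}, Delta m15 = {dm15:.2f}")
```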
Finally, changing the density configuration of the secondary has no statistically significant effect on the light curve properties; Figure <ref> shows that all of the MG10yy18 models are visually identical. To compare our model light curves to observations, we plot the Phillips relation in the SDSS r-band in Figure <ref>, which shows the absolute r-band magnitude versus Δ m15(r) for our models with SC SNe Ia overplotted. Though the Phillips relation is often plotted in the B-band, we chose to plot in the r-band because we found that the r-band had the smallest offsets in the peak magnitude and light curve width from the bolometric quantities and the lowest spread in the offsets between the different models. The observational data are taken from <cit.>. Due to the large variation in the light curve properties of observed SC SNe Ia, most of our models fall reasonably within the observational results on the Phillips relation. The magnetized models have brighter peak luminosities comparable to the brighter SC SNe Ia such as SN 2007if and SN LSQ14fmg. Their Δ m15 values are also relatively large but still fall within the observational data. The merger models have luminosities comparable to the dimmer SC SNe Ia such as SN 2012dn and SN 2009dc, and all of their Δ m15 values fall within the observational data. Notably, all of our models fall far closer to SC SNe Ia observations than to `normal' SNe Ia such as SN 2011fe (yellow-green circle). We next turn to the individual photometric bands of SC SNe Ia, which have properties that distinguish them from normal SNe Ia. In general, SC SNe Ia do not have a prominent secondary maximum in the i-band, which appears in the i-band of normal SNe Ia due to the recombination of iron-group elements in the ejecta <cit.>. Figure <ref> shows the i-band light curves of SC SNe Ia observations (left) and our models (right). Our models have i-band luminosities comparable to SC SNe Ia observations, but the shape of the curves is similar to that of normal SNe Ia such as SN 2011fe (yellow-green circles). The magnetized models such as MAG5010 have especially prominent i-band secondary maxima, rising nearly to the value at peak. The merger models have a less defined, but still noticeable, secondary i-band maximum. All of the models with the same secondary mass, such as the MGxxyy18 models shown in Figure <ref>, have similarly shaped i-band light curves. The model with a higher secondary mass, model MG101022, has a slightly flatter light curve, but still does not match observations. Additionally, the (r-i) color curves of SC SNe Ia do not look like those of normal SNe Ia <cit.>. Generally, they do not reach such large negative values. Figure <ref> compares observations of SC SNe Ia (left) to our models (right). Our magnetized models such as MAG5010 are especially bright in the r-band compared to the i-band, so their |(r-i)| values are large. The merger models have smaller values of |(r-i)|, which show minor variations as the secondary mass and density configuration are varied. Figure <ref> shows that models MG101022 and MG105018 have smaller and larger values of |(r-i)|, respectively, than model MG101018, indicating that models with a higher secondary mass and lower density configuration than their counterparts fall closer to observations. However, they still do not match observational results for most SC SNe Ia. §.§ Spectra Finally, we turn to the synthetic spectra produced for our models with SuperNu.
Figure <ref> shows the spectra from the models shown in previous figures at pre-maximum light, maximum light, and post-maximum light, along with the SC SNeIa SN2007if and SN2012dn and the normal SNeIa SN2011fe at similar times. We note that all of our models have much deeper spectral lines than all of these observations, which is probably because they do not capture all of the physics of the interactions that occur in SC SNe Ia and non-LTE effects which may act to reduce the flux. However, we will still compare our model line depths to each other as it is a useful diagnostic tool for distinguishing SC SNe Ia. The left panel of Figure <ref> shows the spectra 10 days before maximum light. One of the easiest ways to distinguish SC SNe Ia from normal SNe Ia is by observing the spectra at this time, when they have comparatively weak, washed out features dominated by continuum and SiII absorption lines <cit.>. Pre-maximum light, the SC SNe Ia SN 2012dn and SN 2007if are almost featureless compared to SN 2011fe. At this time, for our models the spectral features are primarily determined by the density configuration of the secondary, with the primary mass and secondary mass having little effect. As can be seen in Figure <ref>, model MG105018 has more prominent features than model MG101018, which has more features than model MG100518. In this respect, the low density configuration models match observations best. The density configuration also determines the line location; the velocity of the ejecta is higher for a higher secondary density (Figure <ref>), and so the spectra are more redshifted. Finally, we note that the magnetized models such as MAG5010 cannot be easily distinguished from the high and fiducial density merger models at this time. At maximum light (middle panel of Figure <ref>), our merger models are more easily distinguished from our magnetized models. The SiII λ 6355 line is particularly prominent in our merger models at this time, where it appears saturated, often with a double minimum. This is caused by the two different regions of Si in the star, which can be seen in the right panel of Figure <ref>; that from the primary material and that from the secondary material, which are moving at different velocities. The magnetized models such as model MAG5010 have comparatively weak spectral features because they have lower amounts of intermediate mass elements than the merger models (Figure <ref>). Therefore, many of the transition lines which appear in the merger models such as the Si II λ 4130 and MgII λ 4481 do not appear in the spectra of model MAG5010 and our other magnetized models, and others such as the SiII λ 6355 are weaker. Finally, post-maximum light (right panel of Figure <ref>), both classes of models show transition features from FeII, FeIII, and CoIII. The merger models continue to show the saturated features at the SiII λ 6355 line, which is also evident in the CaII λ 8500 features. § DISCUSSION We have presented the results and analysis of a survey of 1D super-Chandrasekhar mass WD explosion models. To construct the models, we built density profiles for the WDs based off of two different observationally motivated predictions. The first class of models are WD merger models, constructed using a standard semi-relativistic polytrope model for the primary star and an additional layer of constant density material on top for the secondary. The second class of models are magnetized WD models, constructed using the model outlined in <cit.>. 
We modeled the thermonuclear explosion of the models using the hydrodynamics code Castro, and produced model light curves and spectra using the radiation transport code SuperNu. The following characteristics summarize their properties:
1. The amount of radioactive ^56 Ni produced in the explosions, which is the primary determinant of the SN light curve brightness, is very different for the two classes of models even with relatively similar masses. The magnetized models burn almost entirely to ^56 Ni (left panel of Figure <ref>). The merger models do not burn any of their secondary to ^56 Ni because the density is too low (right panel of Figure <ref>); consequently, the luminosities of their light curves do not depend on the properties of the secondary at all.
2. Both classes of models fall within the range of observations of SC SNe Ia on the Phillips relation (Figure <ref>). The WD merger models fall closer to lower luminosity SC SNe Ia such as SN 2012dn and SN ASASSN-15hy, while the magnetized models fall closer to brighter SC SNe Ia such as SN 2007if and SN LSQ14fmg.
3. The ejecta velocities of the merger models are comparable to those obtained from observations of SC SNe Ia spectroscopic linewidths, while the magnetized models have higher ejecta velocities comparable to normal SNe Ia observations (Figure <ref>).
4. Neither class of models reproduces the photometric properties that are the defining characteristics of SC SNe Ia, namely the lack of a double peak in the i band (Figure <ref>) and an (r-i) color curve that is less negative than normal SNe Ia (Figure <ref>). However, the merger models come closer, with slight variation based on the model properties.
Given these remarks, we conclude that SC SNe Ia probably do not originate from the explosion of a magnetized WD without a companion. Despite thoroughly exploring the parameter space of magnetized WDs proposed by <cit.>, we were not able to construct a magnetized model of a single WD that reproduces the spectroscopic and photometric properties of SC SNe Ia observations; instead, their properties were more akin to very bright normal SNe Ia. The merger models come closer, and the reasons they do not completely match observations may primarily be drawbacks in our radiation transport, such as not modeling all reactions sufficiently. Additionally, SuperNu does not have non-LTE capabilities, which may change the observables if included. Non-LTE physics has never been explored for SC SNe Ia; however, as shown by <cit.>, including non-LTE in SNe Ia radiative transfer calculations can produce significant differences in the light curves and spectra. Given these considerations, a WD merger event alone may be sufficient to explain the lower luminosity SC SNe Ia such as SN 2012dn. However, the reader may wonder about the origin of the very luminous SC SNe Ia such as SN 2007if and SN LSQ14fmg. After all, we thoroughly explored the parameter space of realistic WD merger models that satisfy the conditions necessary for a merger event and subsequent explosion, namely: 1) both WDs are CO WDs, 2) M_SD < M_P, and 3) M_ tot > M_ ch. We speculate that these very bright events may be caused by a non-spherical explosion that increases the luminosity preferentially in one direction, which is not captured in our 1D models. Polarization measurements of SC SNe Ia, such as those presented in <cit.> for normal SNe Ia, will help determine whether this is the case.
Another possibility is that these events are due to a WD merger event in which the primary (and possibly secondary) star is magnetized, and has M_P > M_ ch. This may be sufficient to produce enough ^56 Ni to match the brightest luminosity observations, while also preserving the spectroscopic and photometric properties of the merger models we studied. This avenue will be explored in future work. Funding for this research came from the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract no. DE-AC02-05CH1123. The National Energy Research Scientific Computing Center, a DOE Advanced Scientific Computing User Facility under the same contract, provided staff, computational resources, and data storage for this project. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0021110. We would also like to thank Kate Maguire, Georgios Dimitriadis, and Maxime Deckers for several useful conversations on more recent observational work in this field.
http://arxiv.org/abs/2306.02127v1
20230603145047
Finite-range effect in the two-dimensional density-induced BCS-BEC crossover
[ "Hikaru Sakakibara", "Hiroyuki Tajima", "Haozhao Liang" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.supr-con" ]
Hikaru Sakakibara^1,2, Hiroyuki Tajima^1,3, Haozhao Liang^1,2. [1] Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan; [2] Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan; [3] RIKEN Nishina Center, Wako, Saitama 351-0198, Japan. We theoretically investigate the Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensation (BEC) crossover in a two-dimensional Fermi gas with the finite-range interaction by using the Hartree-Fock-Bogoliubov theory. Expanding the scattering phase shift in terms of the scattering length and effective range, we discuss the effect of the finite-range interaction on the pairing and thermodynamic properties. By solving the gap equation and the number equation self-consistently, we numerically calculate the effective-range dependence of the pairing gap, chemical potential, and pair size throughout the BCS-BEC crossover. Our results would be useful for further understanding of low-dimensional many-body problems. Finite-range effect in the two-dimensional density-induced BCS-BEC crossover (July 31, 2023) § INTRODUCTION Strongly correlated quantum systems are essential in various contexts of modern physics. In a fermion system with a weak attractive interaction, it is known that the Bardeen-Cooper-Schrieffer (BCS) state is realized by the formation of Cooper pairs. If the strength of the attractive interaction becomes stronger, the BCS state turns into a Bose-Einstein condensate (BEC) of tightly bound molecules without any phase transitions <cit.>. This crossover phenomenon, nowadays called the BCS-BEC crossover, was originally proposed in electron-hole systems <cit.>. Three decades later, the BCS-BEC crossover was realized in cold atom experiments with ^40K and ^6Li <cit.>. Such cold atom systems, in which the interaction strength can be arbitrarily tuned near the Feshbach resonance <cit.>, have attracted tremendous attention as ideal simulators for other quantum many-body systems, such as superconductors and nuclear matter <cit.>. Since the absolute value of the s-wave scattering length a can be dramatically enlarged near the Feshbach resonance, the interparticle interaction can be regarded as contact-type (i.e., a zero-range interaction) and characterized by the single parameter a. Recently, the BCS-BEC crossover has been observed not only in cold atom systems but also in condensed matter systems <cit.>. In superconducting systems, by tuning the carrier density instead of the interaction strength, the density-induced BCS-BEC crossover occurs <cit.>. This can be understood as a change of the interaction parameters through the Fermi momentum k_ F <cit.> [i.e., the dimensionless coupling measure (k_ Fa)^-1 in three dimensions]. Such a crossover in condensed matter systems is in contrast with that observed in cold atom systems, where a is tuned instead of k_ F. We note that the density-induced BCS-BEC crossover has been examined in lattice two-color quantum chromodynamics simulations <cit.>. Its three-body analogue has also been discussed <cit.>. However, in general the two-body interaction in condensed matter systems, such as superconductors and semiconductors, inevitably involves a finite effective range R in the s-wave channel. It is therefore necessary to discuss how the finite-range interaction affects physical quantities, in contrast to the cold-atom case.
It is reported that, in the superconducting BCS-BEC crossover, the pairing gap may show a peak structure in the carrier-density dependence, which is not found in cold atom systems <cit.>. The role of the finite-range interaction for the superconducting dome, that is, the peak structure of the superconducting critical temperature T_ c in the carrier-density dependence, has also been pointed out in the context of unconventional superconductors <cit.>. Moreover, in addition to the effective-range correction, the BCS-BEC-crossover superconductors are observed in a two-dimensional (2D) material <cit.>. In 2D systems, stronger correlations can be found compared to the 3D systems because of the reduction of kinetic degrees of freedom, as unconventional superconductors are more easily found in 2D materials than 3D ones. Remarkably, a two-body bound state can be formed even for an infinitesimally small attraction in 2D <cit.>. Such a bound-state formation plays a crucial role in the density-induced BCS-BEC crossover. In the previous works, the finite-range effects in 2D systems have not been explored systematically yet. While the quantum Monte Carlo simulation has been performed with the finite-range interaction, the finite-range dependence has been examined in only the small-range regimes (0≤ k_ FR≤ 0.11) <cit.>. The effect of the negative effective range has also been examined theoretically <cit.>. Furthermore, in Ref. <cit.>, the finite-range effect in the 2D superconductor system is considered by fitting to the experimental data but the Hartree-Fock (HF) self-energy contribution, which can be significant in the case with the finite-range interaction <cit.>, has been neglected. The present authors also studied the finite-range effect in the 2D BCS-BEC crossover by using the Brueckner G-matrix approach <cit.>. However, the effect of the pairing gap has not been taken into account in Ref. <cit.>. Systematical studies of finite-range effects will also be accessible in future cold atom experiments. By incorporating the additional process to excited states in the Feshbach resonance mechanism, the two-field optical method has been proposed to arbitrarily tune not only the scattering length but also the effective range <cit.>. Furthermore, a similar experiment for controlling the interaction spatially has been performed based on the above proposal <cit.>. In this paper, we theoretically investigate the effects of the positive effective range in an attractively interacting 2D Fermi gas system by using the Hartree-Fock-Bogoliubov (HFB) theory <cit.>. The HFB theory is useful to incorporate the finite-range effect and the presence of pairing gap self-consistently with relatively small numerical costs <cit.>. For the validity of the HFB theory, at least, the mean-field theory should be justified in the weak-coupling ground state corresponding to the BCS region <cit.>. Moreover, it is known that the mean-field theory can qualitatively describe the BCS-BEC crossover physics at zero temperature, as the information of the two-body bound state is correctly incorporated in the gap equation <cit.>. While the G-matrix study for the finite-range correction <cit.> does not involve the pairing gap, both the HF self-energy and the pairing gap can be determined self-consistently in the HFB theory. To understand the finite-range effect on the density-induced BCS-BEC crossover, we numerically calculate the pairing gap and chemical potential, which are directly affected by the effective range through the HF self-energy. 
To see the microscopic pairing properties, we also examine the pair-correlation length. This paper is organized as follows. In Sec. <ref>, we present the theoretical model for the BCS-BEC crossover with an attractive finite-range interaction in 2D. In Sec. <ref>, we show the numerical results and discuss how the finite-range correction affects the physical quantities such as pairing gap, chemical potential, and pair-correlation length. In Sec. <ref>, we summarize this paper. § MODEL In this section, we introduce the model for the 2D BCS-BEC crossover with the finite-range attractive interaction. A two-component 2D Fermi gas with the finite-range interaction is considered, where the Hamiltonian is given by H =∑_k,σξ_kc_kσ^† c_kσ +∑_k,k',P V(k,k') c_k+P/2,↑^† c_-k+P/2,↓^† c_-k'+P/2,↓ c_k'+P/2,↑. In Eq. (<ref>), ξ_k=k^2/(2m)-μ is the kinetic energy of a fermion with mass m measured from the chemical potential μ, and c_kσ^(†) is the annihilation (creation) operator of a fermion with the momentum k and the spin σ=↑,↓. We consider the finite-range separable s-wave interaction given by V(k,k')=GΓ_kΓ_k', where G is the coupling constant and Γ_k=1/√(1+(k/Λ)^2) is the form factor, which reproduces the relative momentum dependence of the scattering phase shift δ_k up to O(k^2) <cit.>. Since we are interested in the attractive interaction, the negative coupling constant G<0 is considered. The momentum scale Λ plays the role of a momentum cutoff. In the following, we show how to relate the model parameters (i.e., G and Λ) to the 2D scattering length and effective range via the analysis of the two-body T-matrix. In 2D systems, it is known that the two-body bound state always exists for an arbitrarily small attractive zero-range interaction. In the case of the finite-range interaction, the T-matrix is written as T( k, k';ω) =GΓ_kΓ_k'[1-G∑_pΓ_p^2/ω_+-p^2/m]^-1, where ω_+ = ω + iδ is the two-body energy with an infinitesimally small number δ. The two-body binding energy E_ b is obtained from a pole of the T-matrix as 0 = 4π/mG+log(mE_ b/Λ^2)/1-mE_ b/Λ^2. Also, the scattering length a is given by a=1/Λexp(-2π/mG). The ratio between the effective range R and a is given by R/a = √(-4π/mGexp(4π/mG)). For convenience, we measure the interaction strength and the effective range by using the dimensionless parameters log(k_ Fa) and R/a, where k_ F=√(2πρ) is the Fermi momentum. In cold atom physics, log(k_ Fa) can be tuned by changing a near the Feshbach resonance. In the density-induced BCS-BEC crossover, k_ F and log(k_ Fa) are changed with the number density ρ. Qualitatively, the dilute BEC (strong-coupling) and dense BCS (weak-coupling) regimes are characterized as log(k_ Fa)≲ 1 and log(k_ Fa)≳ 1, respectively. Next, the HFB theory is introduced to consider the many-body ground state in the presence of the nonzero effective range. To this end, two kinds of mean-field expectation values are introduced: the pairing gap Δ(k)=-∑_k'V(k,k')⟨ c_-k',↓ c_k',↑⟩ and the HF self-energy Σ_σ(k)=∑_k'V(k-k'/2,k-k'/2)⟨ c_k',σ̅^† c_k',σ̅⟩, where σ̅ denotes the opposite spin with respect to σ. Since we are interested in the spin-balanced case, we suppress the spin index as Σ_↑(k)=Σ_↓(k)≡Σ(k). The resulting mean-field Hamiltonian reads H_ HFB = ∑_kΨ_k^†[ξ_kτ_3+Σ(k)τ_3-Δ(k)τ_1] Ψ_k +∑_kΔ(k) ⟨ c_k,↑^† c_-k,↓^†⟩ -∑_pΣ(p) ⟨ c_p,↑^† c_p,↑⟩ +∑_k[ξ_k+Σ(k)], where τ_i is the Pauli matrix acting on the Nambu spinor Ψ_k=(c_k,↑ c_-k,↓^†)^ T.
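For concreteness, the relations above that map the model parameters (G, Λ) to the 2D scattering length and effective range can be evaluated directly. The short sketch below takes the combination mG as input, uses Λ as the unit of momentum, and recovers the bound R/a = e^{-1/2} ≈ 0.607 mentioned below; the specific parameter values are illustrative.

```python
import numpy as np

# a = (1/Lambda) * exp(-2*pi/(m*G)),  R/a = sqrt(-4*pi/(m*G) * exp(4*pi/(m*G)))
def scattering_length(mG, lam):
    return np.exp(-2.0 * np.pi / mG) / lam

def range_over_a(mG):
    x = 4.0 * np.pi / mG          # negative for an attractive coupling G < 0
    return np.sqrt(-x * np.exp(x))

mG, lam = -4.0 * np.pi, 1.0       # illustrative values, Lambda = 1
print(f"a = {scattering_length(mG, lam):.3f} (in units of 1/Lambda)")
for mG in (-2.0, -10.0, -40.0):
    print(f"mG = {mG:6.1f}  ->  R/a = {range_over_a(mG):.3f}")
# R/a is maximal when -4*pi/(m*G) = 1, i.e. R/a = e^{-1/2} ~ 0.607:
print("max R/a =", range_over_a(-4.0 * np.pi))
```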
After the Bogoliubov transformation, the ground-state energy is written as E_ GS = ∑_kΔ(k) ⟨ c_k,↑^† c_-k,↓^†⟩ -∑_pΣ(p) n_p+∑_k[ξ_k+Σ(k)-E_k], where n_p=1/2(1-ξ_p/E_p) is the momentum distribution and E_k=√({ξ_k+Σ(k)}^2+|Δ(k)|^2) is the quasiparticle dispersion. Moreover, the separable interaction leads to the convenient form of the pairing gap, Δ(k)=-Γ_k G∑_k'Γ_k'⟨ c_-k',↓ c_k',↑⟩≡ΔΓ_k, where the superfluid order parameter Δ≡ - G∑_k'Γ_k'⟨ c_-k',↓ c_k',↑⟩ characterizes the magnitude of the pairing gap. Also, the HF self-energy reads Σ(k) =G∑_k'Γ_|k-k'|/2^2 n_k'. We note that the momentum dependence of Σ(k) is different from Ref. <cit.>. While the interaction Hamiltonian is directly replaced by the effective interaction in Ref. <cit.>, here Eq. (<ref>) is derived microscopically under the mean-field approximation. However, this difference would not change the results qualitatively. In this way, E_ GS is further rewritten as E_ GS =∑_k[ξ_k+Σ(k)-E_k]-|Δ|^2/G -∑_pΣ(p) n_p. Minimizing E_ GS with respect to Δ, we obtain the gap equation 1=-G∑_kΓ_k^2/2E_k. To determine Δ and μ for a given number density ρ self-consistently, the gap equation (<ref>) should be solved with the number-density equation ρ =∑_k[1-{ξ_k+Σ(k)}/E_k]. At the end of this section, to be self-contained we review the finite-range effect on the two-body problem in the present model. Note that the behavior of E_ b in the present model has already been reported in Ref. <cit.>. Figure <ref> shows the solution of the two-body binding energy E_ b as a function of the ratio between the effective range R and the scattering length a. One can check that the zero-range result E_ b,0=1/ma^2 can be obtained in the limit of R/a→ 0. In this model, E_ b has two solutions for each R/a. We focus on the solution with smaller E_ b (solid line in Fig <ref>) because the other solution is unphysically large for discussing the low-energy properties of the present system. If one increases the effective range R, E_ b is enlarged up to R/a=e^-1/2≃0.607. § RESULTS AND DISCUSSION In this section, we present the numerical results of the HFB theory for the 2D BCS-BEC crossover with the finite-range interaction. The results are obtained by solving Eqs. (<ref>), (<ref>), and (<ref>) self-consistently. Figure <ref> shows the pairing order parameter Δ, which characterizes the superfluid order in this system, as a function of the dimensionless coupling parameter log(k_ F a) in the 2D BCS-BEC crossover. The blue dashed lines in Fig. <ref> are the results for the contact-type interaction, given by <cit.> Δ(R/a→ 0) = √(2E_ FE_ b,0), where E_ F=k_ F^2/2m is the Fermi energy. Δ/E_ F is plotted in panels (a), (b), and (c) with different R/a, while Δ/E_ b is plotted in panels (d), (e), and (f). The black solid lines represent the results of the HFB theory with the finite-range interaction. For comparison, we also show the results with the finite-range interaction but without the HF self-energy Σ(k) (the red dashed lines). This calculation is similar to that used in the previous work for a layered superconductor in the BCS-BEC crossover regime at T=0 <cit.>. In the dilute BEC region (log(k_ Fa)≲ 1), the pairing gap is enlarged by the finite-range effect. This is because the binding energy is enhanced by this effect <cit.> as shown in Fig. <ref>. In this regard, fermions can form Cooper pairs more easily than in the case with the contact-type interaction. On the other hand, in the dense BCS region (log(k_ Fa)≳ 1), the formation of Cooper pairs is suppressed by the finite-range effect.
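The self-consistent solution of the gap and number equations above can be sketched numerically as follows. For brevity this toy version neglects the HF self-energy Σ(k) (so it corresponds to the red-dashed curves rather than the full HFB calculation), works in units ħ = m = 1 with E_F = 1, and uses illustrative values of G and Λ; the grid, solver choice, and variable names are ours.

```python
import numpy as np
from scipy.optimize import fsolve

# Zero-temperature gap + number equations for V(k,k') = G*Gamma_k*Gamma_k',
# Gamma_k = 1/sqrt(1+(k/Lambda)^2), with Sigma(k) = 0 for simplicity.
G, LAM = -3.0, 10.0                  # illustrative coupling and cutoff scale
RHO = 1.0 / np.pi                    # k_F = sqrt(2*pi*rho) = sqrt(2), E_F = 1
k = np.linspace(1e-4, 40.0 * LAM, 100_000)
dk = k[1] - k[0]
gam2 = 1.0 / (1.0 + (k / LAM) ** 2)

def residuals(x):
    delta, mu = x
    xi = 0.5 * k ** 2 - mu
    ek = np.sqrt(xi ** 2 + delta ** 2 * gam2)
    meas = k * dk / (2.0 * np.pi)    # 2D measure: sum_k -> integral k dk / (2*pi)
    gap_eq = 1.0 + G * np.sum(meas * gam2 / (2.0 * ek))
    num_eq = RHO - np.sum(meas * (1.0 - xi / ek))
    return [gap_eq, num_eq]

delta, mu = fsolve(residuals, x0=[1.0, 0.5])
print(f"Delta/E_F = {abs(delta):.3f}   mu/E_F = {mu:.3f}")
```

Including Σ(k) amounts to adding the Hartree term inside ξ_k and iterating it together with Δ and μ, which is what the full calculation in the text does.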
Since introducing the finite-range effect is equivalent to introducing the high-momentum cutoff (i.e., Λ), the pairing order originating from the pairing near the Fermi surface is suppressed when k_ F is comparable with Λ. Indeed, the ratio between k_ F and Λ is given by k_ F/Λ=k_ FR√(m|G|/4π). In this regard, when R/a becomes larger, the suppression of BCS pairing becomes more remarkable. In the lower panels of Fig. <ref>, the plotted Δ is normalized by E_ b, which is independent of ρ, to clarify the density dependence of Δ. In Fig. <ref>(d), the finite-range results are similar to the contact-type result, since the parameter R/a is sufficiently small. Δ/E_ b,0 with the contact-type coupling increases monotonically as Δ(R/a→ 0)/E_ b,0 = k_ Fa. However, at larger R shown in Figs. <ref>(e) and (f), the peak structure of Δ/E_ b can be found with the density dependence [namely, the log(k_ Fa) dependence] in the finite-range calculations. This difference clearly manifests the suppression of the BCS-type pairing due to the finite-range correction. For more details, in Fig. <ref>(e), when R/a increases, the peak structure of the finite-range results can be found at log(k_ Fa) ∼ 1.5. In Fig. <ref>(f), for larger R/a such a peak structure is more pronounced and shifted toward log(k_ Fa) ∼ 1.0. In the experiment of the layered superconductor system <cit.>, a similar peak structure has been found as the comparison with the theoretical results has been reported <cit.>. Such a peak structure is unique to the finite-range interaction and not found in systems with the contact-type interaction. Therefore, in order to simulate these superconductor systems by using ultracold atom systems, it is necessary to tune the effective range in addition to the scattering length by using e.g., the optical field method <cit.>. Figure <ref> shows the position of the peaks with respect to the ratio R/a. These were obtained by applying the Lagrange interpolation to the numerical data to pick up the maximum value of Δ/E_ b. In the limit of R/a→0, the peak position can be at an infinitely large log(k_ Fa) as the peak structure does not exist in the system with the contact-type interaction. The peak position log(k_ Fa)_ peak tends to decrease monotonically when R/a increases. This indicates that for larger R/a the peak can be found at lower densities. This result would be useful to determine the parameters of interaction from the experiment results with the finite-range interaction, when one tries to qualitatively examine the finite-range properties in condensed matter systems as well as in future cold atom experiments. To see more detailed properties of the density-induced BCS-BEC crossover with the finite range interaction, in Fig. <ref> we show the results of the chemical potential μ at different effective ranges: (a) R/a=0.12, (b) R/a=0.27, and (c) R/a=0.52. The blue dashed lines are the results with the contact-type interaction given by μ = E_ F-E_ b,0/2. While μ≃ E_ F is found in the BCS side, μ≃ -E_ b,0/2 in the BEC side as μ represents the change of the energy when a single particle is added to the system. In this regard, μ is regarded as a thermodynamic quantity well characterizing the BCS-BEC crossover <cit.>. One can see that the finite-range effect suppresses μ in the whole crossover regime. To understand this suppression of μ in more detail, we also show the results with the finite-range interaction but without Σ(k) (the red dashed line) for comparison. 
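Returning to the peak-position extraction mentioned above, a three-point Lagrange (quadratic) interpolation through the largest sample and its neighbours is enough to locate the maximum of Δ/E_b; a generic sketch with toy data (not our actual numerical results) is given below.

```python
import numpy as np

def quadratic_peak(x, y):
    """Locate the maximum of sampled data via a parabola (3-point Lagrange
    interpolation) through the largest sample and its two neighbours."""
    i = int(np.argmax(y))
    i = min(max(i, 1), len(y) - 2)          # keep a full three-point stencil
    x0, x1, x2 = x[i - 1], x[i], x[i + 1]
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    x_peak = -b / (2.0 * a)                  # vertex of the parabola
    y_peak = (y0 * (x_peak - x1) * (x_peak - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x_peak - x0) * (x_peak - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x_peak - x0) * (x_peak - x1) / ((x2 - x0) * (x2 - x1)))
    return x_peak, y_peak

# Toy data standing in for Delta/E_b versus log(k_F a):
x = np.linspace(0.0, 3.0, 13)
y = np.exp(-(x - 1.4) ** 2)
print(quadratic_peak(x, y))   # peak located near x = 1.4
```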
It is found that μ is lowered when E_ b is enlarged by the finite-range effect. This effect of E_ b is particularly important on the BEC side, as we discussed for the behavior of Δ. It is also found that Σ(k) (the black solid line) further suppresses μ compared to the case without Σ(k) in the entire crossover region. The effect of Σ(k) is found to become large with increasing R/a but to remain small on the dilute BEC side. On the dense BCS side, Σ(k) generally gives a significant shift of μ. This shift of μ is directly related to Σ(k≃0) in the quasiparticle dispersion E_k=√({k^2/2m -μ+Σ(k)}^2+|Δ(k)|^2). Figure <ref> shows Σ(k) at R/a=0.52, where log(k_ Fa)=1.0 and 0.0 are considered. Indeed, the shift of μ induced by Σ(k), given by Σ(k≃0)≃ -E_ F at log(k_ Fa)=0.0 and Σ(k≃0)≃ -0.9E_ F at log(k_ Fa)=1.0, is close to the differences between the results with and without Σ(k) in Fig. <ref>(c). We note that Σ(|k|≃ k_ F) is also similar to Σ(k≃ 0) in this regime. This result indicates that the momentum-independent Hartree shift used in Refs. <cit.> gives a reasonable approximation. The momentum dependence of Σ(k) in Fig. <ref> is characterized by Γ_k and hence Λ. At low energy, the momentum dependence of Σ(k) may lead to an effective-mass correction <cit.>. The zero-crossing point of μ, which indicates the interaction strength where the underlying Fermi surface is depleted by the pair formation, has conveniently been regarded as the crossover boundary between the BCS and BEC sides <cit.>, whereas there are no distinct phase boundaries between them. While the zero-crossing point of μ can be found for arbitrary R/a as shown in Fig. <ref>, such a point is quantitatively shifted by the finite-range correction through Σ(k). In contrast to the zero-range case, where the HF self-energy is trivially zero and μ is mainly reduced by the pair formation, we need to consider two possible origins of the reduced μ in the case with the finite-range interaction, that is, the pair formation and the HF self-energy shift. In other words, at large R/a, μ can be strongly suppressed by Σ(k) even when the pairing effect is small. In this regard, one needs to carefully examine μ when trying to use μ as the measure of the BCS-BEC crossover with the finite-range interaction. Finally, to further investigate microscopic properties of Cooper pairs in the present system, in Fig. <ref> the result of the pair-correlation length ξ_ pair is plotted, where ξ_ pair is defined by <cit.> ξ_ pair^2=∑_k|∇_kϕ(k)|^2/∑_k|ϕ(k)|^2. The pair-correlation length is also regarded as a useful quantity to characterize the BCS-BEC crossover <cit.>. In the dilute BEC regime, fermions form tightly bound bosonic molecules and hence ξ_ pair becomes smaller. In the dense BCS regime, loosely-bound Cooper pairs are formed and their sizes are typically larger than the mean interparticle distance given by k_ F^-1. This behavior is not changed significantly by the finite effective range correction. Overall, the finite-range effect enlarges ξ_ pair in the crossover regime. In particular, on the dense BCS side (log(k_ Fa)≳ 1), ξ_ pair is dramatically enlarged by the effective range correction, indicating the suppression of the BCS-type pairing by the finite-range effect. While μ is suppressed by the finite-range effect through two mechanisms, that is, the Cooper pairing and the HF self-energy shift, ξ_ pair monotonically increases with R/a and is more directly related to the Cooper pairing effect.
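The pair-correlation length defined above can be evaluated as sketched below for an isotropic 2D system. The text does not spell out the pair wave function ϕ(k); here we assume the standard mean-field choice ϕ(k) ∝ Δ(k)/(2E_k) and again neglect Σ(k), so the printed number is illustrative only (units ħ = m = 1, E_F = 1).

```python
import numpy as np

def xi_pair(k, delta, mu, lam):
    """xi_pair = sqrt( sum_k |grad_k phi|^2 / sum_k |phi|^2 ) for isotropic phi(k)."""
    gam = 1.0 / np.sqrt(1.0 + (k / lam) ** 2)
    xi = 0.5 * k ** 2 - mu
    ek = np.sqrt(xi ** 2 + (delta * gam) ** 2)
    phi = delta * gam / (2.0 * ek)          # assumed pair wave function
    dphi = np.gradient(phi, k)              # |grad_k phi| = |d phi / dk| (isotropic)
    dk = k[1] - k[0]
    num = np.sum(k * dphi ** 2) * dk        # 2D measure k dk (common prefactors cancel)
    den = np.sum(k * phi ** 2) * dk
    return np.sqrt(num / den)

k = np.linspace(1e-4, 200.0, 200_000)
kF = np.sqrt(2.0)
print("k_F * xi_pair =", kF * xi_pair(k, delta=0.5, mu=0.8, lam=10.0))
```

The dimensionless product k_F ξ_pair is the quantity compared against 1 in the discussion of the crossover boundary below.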
One may expect that the enhanced ξ_ pair is also related to the tremendous suppression of Δ and the resulting peak structure in the density dependence of Δ/E_ b in the dense BCS regime as shown in Fig. <ref>. In this regard, the finite-range effect can be found in a different way through the different physical quantities, and ξ_ pair would be more convenient than μ to characterize the density-induced BCS-BEC crossover with the finite-range interaction. In addition, k_ Fξ_ pair=1 can be used as a crossover boundary between the BCS and BEC regimes. In such a viewpoint, the density with k_ Fξ_ pair=1 is shifted toward the lower densities when R/a increases. This result again indicates the suppression of the Cooper pairing near the Fermi surface. This is in contrast to the shift of zero-crossing point of μ toward higher densities with increasing R/a. We note that in the HFB framework the effective Fermi surface locates at the shifted chemical potential μ^*=μ-Σ(|k|≃ k_ F) <cit.> and therefore negative μ does not immediately mean the disappearance of the Fermi surface as we find that the zero-range result and the finite-range result without Σ(k) in Fig. <ref> are close to each other in the dense BCS regime. In this way, one can understand that k_ Fξ_ pair=1 is a more appropriate indicator of the BCS-BEC crossover than μ=0. We also remark that the cluster size (corresponding to ξ_ pair in this paper) is highly important to understand the microscopic properties of the density-induced hadron-quark crossover <cit.>. Indeed, the overlapped three-body state, which is larger than the interparticle distance, can be anticipated in the dense regime <cit.>. Our study on the role of the finite effective range for the pair size would be useful for further extensions to other crossover phenomena. § SUMMARY AND PERSPECTIVES In this paper, we have theoretically investigated the finite-range effect in the 2D Fermi gas system throughout the BCS-BEC crossover by using the Hartree-Fock-Bogoliubov theory. Using the finite-range separable interaction, we have numerically solved the particle number equation and the gap equation self-consistently. The momentum-dependent HF self-energy, which were ignored in previous studies, has been considered in the numerical calculation. The finite-range effects for the pairing gap, the chemical potential, and the pair size are studied systematically. In particular, the finite-range effect works on the pairing gap in different ways in the BCS and BEC sides, respectively. While the pairing gap is enhanced by the finite-range effect (through the enhancement of the two-body binding energy) in the BEC side, it is suppressed by the finite-range effect in the BCS side because the effective pairing interaction near the Fermi surface is suppressed by the cutoff associated with the effective range. Furthermore, the maximal behavior of Δ normalized by the density-independent scale is identified as density dependent and the peak density is plotted as a function of the effective range. For the suppression of chemical potential μ by the finite-range effect, there are two mechanisms, that is, the enhanced pairing correlations and the HF self-energy shift. In this regard, one needs to carefully examine the effective-range correction to understand the behavior of μ in the density-induced BCS-BEC crossover. Finally, we have examined the finite-range effect on the pair correlation length throughout the density-induced BCS-BEC crossover. 
The pair size is found to be monotonically enlarged by the finite effective range in the whole crossover region and gives a useful measure for the BCS-BEC crossover from a microscopic viewpoint. As future perspectives, toward a further understanding of the connection between clean cold atom systems and other condensed matter systems, it is important to generalize our approach to more realistic interaction models, such as the Rytova-Keldysh potential <cit.>. To obtain more quantitative results, the HF self-energy can be further renormalized by using the Brueckner G-matrix <cit.>, where the repeated scattering process is effectively included. Also, while we have focused on the ground-state properties at zero temperature, it would be interesting future work to examine how the Berezinskii-Kosterlitz-Thouless transition is modified by the finite-range effect and the associated HF self-energy shift <cit.>. § ACKNOWLEDGEMENTS H.S. was supported by the RIKEN Junior Research Associate Program. H.T. acknowledges the JSPS Grants-in-Aid for Scientific Research under Grant Nos. 18H05406, 22K13981, and 22H01158. H.L. acknowledges the JSPS Grant-in-Aid for Early-Career Scientists under Grant No. 18K13549, the JSPS Grant-in-Aid for Scientific Research (S) under Grant No. 20H05648, and the RIKEN Pioneering Project: Evolution of Matter in the Universe.
http://arxiv.org/abs/2306.09012v1
20230615101210
Yes, we CANN: Constrained Approximate Nearest Neighbors for local feature-based visual localization
[ "Dror Aiger", "André Araujo", "Simon Lynen" ]
cs.CV
[ "cs.CV" ]
Yes, we CANN: Constrained Approximate Nearest Neighbors for local feature-based visual localization Dror Aiger André Araujo Simon Lynen Google Research {aigerd,andrearaujo,slynen}@google.com July 31, 2023 ============================================================================================================== Large-scale visual localization systems continue to rely on 3D point clouds built from image collections using structure-from-motion. While the 3D points in these models are represented using local image features, directly matching a query image's local features against the point cloud is challenging due to the scale of the nearest-neighbor search problem. Many recent approaches to visual localization have thus proposed a hybrid method, where first a global (per image) embedding is used to retrieve a small subset of database images, and local features of the query are matched only against those. It seems to have become common belief that global embeddings are critical for said image-retrieval in visual localization, despite the significant downside of having to compute two feature types for each query image. In this paper, we take a step back from this assumption and propose Constrained Approximate Nearest Neighbors (CANN), a joint solution of k-nearest-neighbors across both the geometry and appearance space using only local features. We first derive the theoretical foundation for k-nearest-neighbor retrieval across multiple metrics and then showcase how CANN improves visual localization. Our experiments on public localization benchmarks demonstrate that our method significantly outperforms both state-of-the-art global feature-based retrieval and approaches using local feature aggregation schemes. Moreover, it is an order of magnitude faster in both index and query time than feature aggregation schemes for these datasets. Code will be released. § INTRODUCTION In this paper we focus on the problem of image retrieval for visual localization. Modern visual localization approaches are predominantly based on 3D point clouds that represent the geometry and appearance of large scale scenes <cit.>. These 3D points are estimated from image collections using Structure-from-Motion (SfM), where each 3D point has an associated descriptor derived from pixels. To localize a query image against such 3D models, a set of local features is extracted from it and 2D-3D correspondences are estimated based on descriptor similarity. In practice, this data association problem suffers from various challenges: visual aliasing, scene change, noise, etc. Because the final localization solution is computed using geometric inference from these 2D-3D correspondences, not finding enough correct matches can lead the entire localization process to fail. Simply establishing many more matches per query keypoint (red points in fig:teaser) however causes long runtime in geometric verification <cit.>. It is thus important to find a small 2D-3D set which has high probability to contain “good" matches (yellow/green points in fig:teaser): In fact we know that the 3D points of “good” matches should all lie inside one (unknown) camera frustum which is the one of the query image (shaded area in fig:teaser). There exist several approximations to this problem, ranging from clustering nearest-neighbor matches in the 3D model's covisibility graph <cit.> to using image retrieval methods to obtain a small set of candidate images for which local features are matched subsequently <cit.>. 
The latter approach, leveraging recent advances in global (per image) embeddings, has gained substantial traction recently <cit.>, to a degree that it appears the community has abandoned the idea of finding a solution that jointly solves for appearance and geometry using local features only. For example, the benchmark we evaluate on <cit.> didn't even consider local feature based retrieval approach at publication time. We don't consider the case of using local features closed and therefore propose an approach to obtain matches that are close in appearance space while obtaining geometric consistency at the same time – which is a long-known open question in the community. Contributions. In this paper we make three contributions: (1) Our first and main contribution is a new method, referred to as Constrained Approximate Nearest Neighbors (CANN), that efficiently obtains a high quality, small set of 2D-3D correspondences. CANN performs nearest neighbor search in descriptor space in a constrained manner, so that matches are compact in 3D space. We provide both a brute-force solution as well as an efficient implementation and associated complexity analysis of this colored nearest neighbor search algorithm. (2) Our second contribution is to make the connection of colored nearest neighbor search to the problem space of image retrieval and localization, proposing a metric to rank cameras, which can serve as a way to evaluate future work in this area. (3) Lastly we provide an extensive evaluation of both global and local feature based methods on four large scale datasets from <cit.>: “Baidu-Mall",“Gangnam Station",“RobotCar Seasons" and “Aachen Day-Night v1.1". We demonstrate that local feature based methods are not only competitive, but in fact strongly outperform global embedding based approaches; which goes contrary to the trend in the community. We hope to provide new impulse to techniques that aim for jointly searching in appearance and geometry space, which is more efficient and elegant than previously proposed two-step approaches. § RELATED WORK Visual Localization using local features without image retrieval: A large body of work in visual localization <cit.> is based on sparse 3D point clouds built from image collections using Structure-from-Motion (SfM). These methods directly establish 2D-3D matches between local features from the query image and the descriptors associated with 3D points in the model. As mentioned before, these matches often contain many outliers and thus directly feeding them to geometric verification is typically impractical<cit.>. Therefore several post-filtering techniques have been proposed, such as clustering in the SfM covisibility graph <cit.> or applying voting in the space of camera poses <cit.>. Remaining 2D-3D matches typically have a sufficiently low fraction of outliers, so that they can be efficiently processed by geometric verification, using minimal pose solvers <cit.> in a RANSAC <cit.> scheme. Visual Localization using local features for retrieval and 2D-3D matching: Image retrieval approaches promise to both reduce the cost of matching against features in the SfM model and achieving high quality matches by limiting the search to only a subset of the model<cit.>. Such approaches either remove features that don't belong to top-ranking images or perform an additional matching step to top-ranking images before running geometry verification using the obtained local feature matches. 
Our proposed algorithm provides an alternative to these two-step filtering approaches, by directly optimizing for compactness of nearest neighbor matches in the covisibility graph or 3D space during the search. Visual Localization using global features for retrieval and local features for 2D-3D matching: Image retrieval using local features however has most recently lost attention from the community and instead global features (, DELG-GLDv2 <cit.> and AP-GeM <cit.>) have dominated benchmarks <cit.>. While using global features offers significant speedups due to the much smaller database size, the full-image embeddings are not appropriate for high quality localization due to their global nature <cit.>. In order to obtain an accurate localization result, some approaches <cit.> compute additionally local features, which are matched only between the query image and top-ranking images from the database. While there are attempts to concurrently compute local and global features to reduce cost/latency <cit.>, the accuracy of the local feature keypoints remain inferior to approaches that compute dedicated local features <cit.>. Local feature-based image retrieval techniques: Despite the image retrieval community's recent focus on global features, local feature-based retrieval has a long history, with well-established methods <cit.>. Among these, the most relevant method today is the Aggregated Selective Match Kernels (ASMK), which continues to be explored recently in conjunction with deep-learned local features <cit.>. ASMK (like VLAD <cit.>) performs local descriptor aggregation and essentially produces high-dimensional global image representations, which are however sparse and can be searched efficiently. In contrast, our method operates directly on local descriptor space and avoids aggregation, which makes it more suitable to match against partial views and unique details that do not get lost in aggregation. Approximate nearest neighbor methods: Another related field is the area of proximity problems in high dimensional spaces with its many applications in computer vision <cit.> (to name a few). The most common of this kind is nearest neighbor search, where given a set P of n points in a high-dimensional space R^d we wish to find the point(s) in P closest to a query point q. Extensive research on this problem has led to a variety of interesting solutions, both exact and approximate <cit.>. In many use cases, indexed points in the “database" are equipped with additional attributes, such vector-valued attributes or simple scalars, such as an ID (“color") that indicates a grouping of points. The term Constrained Approximate Nearest Neighbors that we propose in this paper refers to a way to apply nearest neighbors in one space given constraints in the space of these attributes. The simplest such case is “colored nearest neighbor search": each point in P is assigned with an ID and for a given query point q (with or without colors), we want to use the IDs of points in P as constraints during the search. A simple example, which is the use case in this work, is to return nearest neighbors for all query points, provided that all of the neighbors have the same ID. The optimal result are those points in P that all have the same ID and optimize some metric, such as the sum of distances to the query points. 
Colored range searching and nearest neighbors (also known as “categorical range searching”, or “generalized range searching”) have been extensively studied in computational geometry since the 1990s <cit.>. The colored versions of nearest neighbor (or range searching) problems tend to be harder than their uncolored counterparts and several different problems and solutions have been proposed, see e.g. <cit.>. To the best of our knowledge, no previously studied problem or solution fits the requirements of this work, and the Constrained Approximate Nearest Neighbor problem we address here is new. § METHOD §.§ Ranking Images for Visual Localization using Constrained Approximate Nearest Neighbors We first propose a natural metric to rank cameras and then show that this ranking can be efficiently computed during the feature-matching stage instead of requiring post-processing. For simplicity of presentation we consider the case of a single optimal camera/image from the index. This is without loss of generality, since in practice we may use the k best cameras or simply weight matches by the rank of each image. The metric: We are given a large d-dimensional space containing local feature descriptors extracted from all images in a large collection. Denote by I={0,1,2,…} the set of image IDs in that collection. We assign each local descriptor the ID of the image, i ∈ I, it was computed from, so we obtain the set P of “ID-colored" points (see colors in fig:cann on the left). Then, at query time, for a query image with a set of features Q={q_j} extracted from it, let d_ij=d(q_j,NN_i(q_j))/R be the (normalized) Euclidean distance in descriptor space between the feature q_j and its nearest neighbor descriptor in image i. R is some fixed maximum distance that we use for normalization such that d_ij∈ [0,1]. We then compute a score for each image i in the dataset, s_i=∑_j(1.0-d_ij^p/(1-p))^(1-p)/p, and use it to rank all images with respect to the query image features q_j ∈ Q. To obtain this per-image compact set of descriptors from the set of all indexed descriptors P (with their “ID-color"), we have to develop an efficient colored version of nearest neighbors. Such an algorithm obtains the nearest neighbor of each q_j for all colors at once, provided that its distance is at most R. We observe that, depending on a tuned parameter p, we can clip the distances at R such that all distances larger than R have score at most some very small value (say 10^-6). This allows us to obtain a good bound on the runtime of the NN search. Figure <ref> shows our metric. §.§ Preliminaries To explain the proposed Constrained Approximate Nearest Neighbors algorithm we refer to standard tools like Approximate Nearest Neighbors (ANN) and Approximate Range searching (RS) and make the (common) assumption that there is a maximum distance, R, known at local descriptor indexing time. We also assume that randomization is allowed, i.e. all results are correct with (arbitrarily) high probability. Details on the exact definitions of ANN and RS for the case of bounded distance can be found in <cit.>. We can assume (for simplicity of presentation) that ANN and RS data structures can be created in O(C_I(d,c)*n) time and a point query takes O(C_q(d,c)+k) time, where C_I(d,c) and C_q(d,c) are constants depending on the dimension d and the approximation factor c, and k is the maximum number of items returned. For image retrieval, this runtime is multiplied by the number of features in the image, |Q|.
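Read literally, the scoring metric above can be implemented in a few lines. The sketch below assumes the exponent grouping p/(1-p) and (1-p)/p as written, normalizes distances by R, and treats missing neighbours (distance larger than R) as contributing essentially zero; the array layout, the value of p, and all names are our own illustrative choices.

```python
import numpy as np

def image_scores(d, R, p=0.8):
    """d[i, j]: distance from query feature j to its nearest descriptor in image i
    (np.inf if no neighbour within R). Returns one score per image."""
    dn = np.clip(d / R, 0.0, 1.0)        # normalized distances; >= R clipped to 1
    e1 = p / (1.0 - p)
    e2 = (1.0 - p) / p
    return np.sum((1.0 - dn ** e1) ** e2, axis=1)

# Toy example: 3 database images, 4 query features.
rng = np.random.default_rng(0)
d = rng.uniform(0.0, 2.0, size=(3, 4))
scores = image_scores(d, R=1.0)
print("scores:", scores, "-> best image:", int(np.argmax(scores)))
```

With p close to 1 the kernel approaches a hard count of features that have a neighbour within R; smaller p weights closer matches more strongly.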
Colored Nearest Neighbors vs. Colored Range Searching: As can be seen from Equation <ref>, we need a colored NN data structure to compute the scores for all relevant images given one query point q_j. Such an algorithm returns for each q_j the set of 1-NN in all cameras within radius R. We see from the metric that cameras without such a neighbor don't contribute to the sum, so we want as many neighbors as possible, each with as low a Euclidean distance to the query as possible. We are not aware of any efficient algorithm to perform this with a better time complexity than a brute force method using |I| separate NN structures (see Section <ref>). Fortunately, we can reduce this colored NN problem to fixed-radius (R) colored range searching, which can be implemented efficiently. A reduction from the fixed radius decision problem: "is there a point within distance R from the query" to the approximate NN is well known from LSH <cit.> using a form of binary search over several R's. While this approach isn't directly applicable for colored searches, we can use similar ideas as outlined in the following section. §.§ Colored Range Searching In this section we explain the colored nearest neighbor search for computing the scores in Eq. (<ref>). While there are multiple versions of this problem, we're specifically interested in colored range reporting: For a set of colored points in R^d, report all the distinct colors in the query range. Even with approximations, this problem is computationally hard with O(C_q(d,c)+|I|) <cit.> as a lower bound on the runtime. For a large set of images this bound can be very high, yet in practice the problem can be solved quite efficiently by introducing the threshold distance R. The most recent work <cit.> on this exact problem shows that the problem is already hard for low dimensional spaces, even with integer coordinates and considering only orthogonal queries (an axis-aligned box vs. a sphere). For a set of n colored points in three dimensions, the authors of <cit.> describe a randomized data structure with O(n*polylog(n)) space that can report the distinct colors within an axis-aligned box using O(k*polyloglog(n)) time, with k the number of distinct colors in the range, assuming that coordinates are in {1, . . . , n}. In this paper we show that with R known at index time and allowing for approximation, we can develop a more efficient data structure for the colored range search that allows us to efficiently compute the 1-NN across all images at once. Besides being essential for solving the Constrained Nearest Neighbors problem, we believe that this data-structure is interesting on its own and beyond the problem of image localization. §.§ A brute force method using ANN There exist two straightforward algorithms for colored range searching: First, build |I| separate regular nearest neighbor structures, one for each color, in O(C_q(d,c)*|P_I|*|I|) indexing time, with |P_I| the number of points of color I in P. Then call them sequentially for each query point q_j with cost O(C_q(d,c) × |I|). This query cost is much worse than the above lower bound; the advantage is that the runtime of this version is asymptotically independent of the threshold radius R. The second simple algorithm, which we call CANN-RS, is applicable for small thresholds R using Randomized Range Searching <cit.>: We index points into a RS data-structure for radius R and then, for each query feature, enumerate all neighbors within R, keeping a tally of neighbors per image I.
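A reference implementation of the brute-force / CANN-RS idea, i.e. the per-color 1-NN within radius R obtained by enumerating all indexed points, might look as follows. It is quadratic in the number of descriptors and is only meant to define the expected output of the colored search; all names and the toy data are ours.

```python
import numpy as np
from collections import defaultdict

def colored_nn_within_R(P, colors, Q, R):
    """P: (n, d) descriptors, colors: (n,) image IDs, Q: (m, d) query features.
    Returns {image_id: {query_index: min_distance}} for neighbours within R."""
    out = defaultdict(dict)
    for j, q in enumerate(Q):
        dist = np.linalg.norm(P - q, axis=1)          # distances to all indexed points
        for idx in np.flatnonzero(dist <= R):
            c = int(colors[idx])
            if dist[idx] < out[c].get(j, np.inf):     # keep per-color minimum
                out[c][j] = float(dist[idx])
    return out

rng = np.random.default_rng(1)
P = rng.normal(size=(1000, 8)).astype(np.float32)
colors = rng.integers(0, 20, size=1000)
Q = rng.normal(size=(5, 8)).astype(np.float32)
matches = colored_nn_within_R(P, colors, Q, R=2.5)
print({c: len(m) for c, m in list(matches.items())[:5]})
```

The per-color minimum distances produced here are exactly the d_ij needed by the scoring function sketched earlier.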
Because we only retrieve a small subset of points within radius R we only obtain a few colors (cameras or images) with number of features, much less than |I|. This approach has runtime O(C_q(d,c) + k), here, k is the expected number of neighbors in each such range query over all images. The drawback is that for each query feature we must enumerate all indexed points in the range R where most of them do not realize any nearest neighbor for any color. This number (k above) can still be quite large. §.§ Efficient algorithm using Random Grids To implement an efficient variant of the algorithm (CANN-RG), we leverage Random Grids <cit.>, an efficient c-approximate nearest neighbor algorithm based on applying randomized rotations and shifts to high-dimensional vectors prior to hashing them to a set of keys. We extend the Random Grids to support colored range searching. We show that our algorithm avoids the enumeration of all points in the range R (as in CANN-RS) and doesn't require distance computation in descriptor space which can take most of the time in practice due to high dimensional feature descriptors. Our algorithm works as follows: For each query point q_j CANN-RG should report all colors in the range R from q_i approximately by factor c>1, i.e. any color that has a feature at distance at most R is reported with high probability and any reported color (image) has a feature at distance at most cR. The points are indexed using Algorithm <ref>, where we store a set of distinct integers using hash-sets and use hashing to create a key for each non-empty grid cell in a high dimensional space following <cit.>. At query time we retrieve points from the grid indices using Algorithm <ref>. Note that since we're only interested in the color of points within range R, the index only holds point colors not point coordinates and the query results similarly only comprise colors without exact distances. Analysis In this section we analyze indexing and query algorithms of CANN-RG. First we make concrete the constants C_I(d,c) and C_q(d,c) which appear in all Random Grids implementations: For the grid cell of size l*c/√(d), a random vector of length l in R^d will be captured in a given cell with probability at least e^-√(d)/w=e^-d/c <cit.>. We thus need L=e^d/c random grids in Algorithm <ref> to ensure that, if there is a point in P at distance at most R from q, its color will be found in at least one of the grid cells with constant probability. On the other hand, any color of a point found in a grid cell that also contains q_t (the rotated and translated version of q for that grid) is at distance at most cR from q due to the size of the grid cells. Because we do not care about the coordinates of indexed points, we only store each color at most once per grid cell. Therefore the data structure build time (C_I(d,c)*|P|)=O(e^d/c*|P|) and storage O(e^d/c*|P|) are linear in |P|. For each query point q, we retrieve the points in the grid cells where all rotated and shifted versions of q fall into. The runtime is then O(e^d/c+k_c)=O(C_q(d,c)+k_c) ignoring the constant for matrix rotation that depends on d. Note that for Random Grids implementation we have C_I(d,c)=C_q(d,c). In contrast to k in CANN-RS, k_c here refers to the number of distinct colors found in the enumerated cells. As in <cit.>, the probability of success can be amplified to 1-γ by repeating the randomized indexing ln(1/γ) times, which increases the data structure, space and query time accordingly. 
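The sketch below shows, under our own parameter choices, how colors can be indexed into random grids and queried in the spirit of the indexing and query algorithms referenced above; it is not the authors' implementation. The number of grids L, the approximation factor c, the random seed, and the QR-based random rotation are assumptions for illustration, and the worst-case L = e^{d/c} from the analysis is deliberately not used.

```python
import numpy as np

def build_colored_grids(index_descs, index_ids, R, c=1.1, L=8, seed=0):
    """Index only the colors (image IDs) of P into L random grids.

    The cell side is c*R/sqrt(d), so any point within R of a query lands in the
    query's cell in at least one grid with good probability, while any reported
    color has a descriptor within roughly c*R (the cell diameter).
    """
    rng = np.random.default_rng(seed)
    d = index_descs.shape[1]
    w = c * R / np.sqrt(d)                              # grid cell side length
    grids = []
    for _ in range(L):
        rot, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random rotation
        shift = rng.uniform(0.0, w, size=d)             # random translation
        cells = {}
        pts = index_descs @ rot.T + shift
        keys = map(tuple, np.floor(pts / w).astype(int))
        for key, color in zip(keys, index_ids):
            cells.setdefault(key, set()).add(int(color))  # colors only, no coordinates
        grids.append((rot, shift, cells))
    return grids, w

def query_colored_grids(grids, w, q):
    """Report the union of colors found in q's cell across all grids, i.e. the
    image IDs that (approximately) have a descriptor within R of q."""
    found = set()
    for rot, shift, cells in grids:
        key = tuple(np.floor((rot @ q + shift) / w).astype(int))
        found |= cells.get(key, set())
    return found
```

To move from this fixed-radius colored range search to the approximate per-color 1-NN distances needed for the score, the construction described below ("Constructing and querying CANN-RG") builds such grids for a geometric ladder of radii r, rc, rc^2, …, R and records, for each color, the smallest radius at which it first appears.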
The number of grids that we need in practice is much smaller than the above worst case depending on the intrinsic dimension of the data <cit.>. Constructing and querying CANN-RG The above algorithms allow indexing colors of P for a given R such that for any query point q, the colors that have points at distance at most R from q are reported quickly. Given that we omitted the computation of point distances to enable efficient queries, we're still missing a way to compute the scores in Equation <ref>. We now show how we move from fixed radius Range Search to 1-NN. To fill this gap, let r be a constant denoting the minimum distance between points in P that we aim to distinguish. For each l ∈{rc^0, rc^1,..., R}, we generate a sequence of random grid indexes B^l = {B^l_i, . . . , B^l_n} of radius l. Then, given query point q, we query q in all B_i in order and keep only the closest (first observed) color. This maps the list of colors to the B^l_i they came from and thus to a c-approximate distance from the query. Given these minimum distances, Equation <ref> provides a score per point and thus a ranking of all index images by summing over all q_j ∈ Q. This scoring operation increases the runtime of the query by logarithmic factor of R/r. Note that CANN-RG is output sensitive on k_c, the number of actual neighbor colors we find for each query. § EXPERIMENTS §.§ Experimental setup Datasets: We evaluated our method on four public datasets from <cit.>, “Baidu-Mall",“Gangnam Station",“RobotCar Seasons" and “Aachen Day-Night v1.1". These datasets demonstrate performance in “regular" outdoor scenarios as well as repetitive indoor environments. “RobotCar Seasons" and “Aachen Day-Night v1.1" have day and night subsets. Metrics: We evaluated two metrics: (1) The image retrieval performance using the same equal weighted barycenter (EWB) interpolation as in <cit.> which is based solely on the retrieved images and their known poses. (2) The effect on final localization quality using the existing localization pipeline from <cit.> where camera localization is computed using only features from the top-k ranking images. Local and global feature baselines: Following <cit.>, we compared our method against state-of-the-art global features AP-GeM <cit.>, DELG <cit.>, DenseVLAD <cit.>, NetVLAD <cit.>. For local-features we compare performance and cost for both indexing and query to ASMK <cit.> with HOW and FIRE local features. Results for the latter were not previously published and only recently made available on the codebase for image retrieval methods. R2D2 features were computed using code from the same codebase. Storage cost for the baselines is discussed analytically. Local feature types: We experiment with three state-of-the-art local image features: HOW <cit.>, FIRE <cit.> and R2D2 <cit.>. These three approaches have different operation characteristics and thus show the power of CANN in being adaptable to different local features. HOW and FIRE are designed for image retrieval, and are not suitable to the local feature matching part of the visual localization pipeline. R2D2, on the other hand, is designed for image matching tasks and a common choice in structure-from-motion and visual localization evaluations <cit.>. We use a recent and lighter R2D2 version (referred to as “Feather2d2 20k") described in <cit.>'s codebase, where we can download the local features (the model is not publicly available). 
When using HOW and FIRE, our visual localization system requires indexing two different feature types: HOW for retrieval and R2D2 for matching. When using R2D2, we only need to index one feature type – which is appealing since it simplifies the overall system. For our experiments we used 1000 per image for all indexed and query images and all methods. Implementation details: We implemented CANN-RS and CANN-RG (Section <ref>) in C++, given that it performs well for low intrinsic dimensions of the features: 32D for R2D2 and 128D for HOW. Even though CANN-RS can be implemented with any of-the-shelf range search data structures, we used Random Grids also here as it has the ability to exploit the fact that we know the range in advance. The Random Grids were adjusted to different intrinsic dimensions by tuning its parameters, which is also required to trade off performance vs runtime using the c-approximation. Both our algorithms are very simple, trivially parallelized and are very fast (down to 20ms per query image). Tuning: The parameters of our metric are p and R and we tune them for each feature type separately. Note that in contrast to ASMK which creates a codebook that depends on the distribution of the data, CANN-RG and CANN-RS only tune for the metric itself. One can therefore provide theoretic bounds of the (approximate) algorithmic result quality for a given metric. This may make CANN more resilient to different datasets which is not the case for codebook methods, even though the latter can perform better if the distribution of features between query and training set matches. For CANN-RS, we set the grid cell size to slightly above 1/√(d) and the number of grids accordingly to balance result quality and runtime (see Section <ref>). For CANN-RG we set c=1.1 in all datasets and the metric parameters (p,R) were tuned using a subset of 500 queries from “Baidu-Mall" separately per local feature type. To the best of our knowledge, the datasets of <cit.> provide no tune/eval/test split and only the “Baidu-Mall" has ground-truth available to enable tuning. For ASMK we only evaluated R2D2 features, taking results for other features from  <cit.> or used previously unpublished results provided by the authors. We train the ASMK codebook on “GangnamStyle" as it is the largest set among the four. To validate generalization, we used the same set of parameters for evaluation on all other datasets. §.§ Results As mentioned above, we evaluate the CANN-RG and CANN-RS algorithms on four large-scale datasets, in an outdoor, urban setting and covering an indoor scenario. Following <cit.> we evaluate across two regimes/metrics (“EWB" and “SFM") discussed above. Figure <ref> shows our results of all methods and datasets with one figure per each metric. In general, we can observe local features outperforming global features almost everywhere and by a large margin. Datasets that are more appropriate for global features are those that have many approximately similar viewpoints in the index so there is almost always one close neighbor for a given query image. Local features are naturally better where the query contains only partial overlap with the indexed images. Qualitative results are available in the appendix. Runtime One of the main advantages of CANN-RG (and CANN-RS as well) comparing to ASMK for image retrieval using local features is its simplicity and its runtime in both indexing and query. Table <ref> shows numbers across datasets using HOW features. 
Since our implementation of CANN-RG and CANN-RS does not use GPU, we compared runtime on CPU using 48 cores. The table does not contain the codebook creation for ASMK and tuning for CANN-RG. CANN-RG has a nice run-time/quality trade-off: In its upper bound quality, we have the results of CANN-RS and with CANN-RG can pay in quality for much better runtime. The significance of this is that CANN-RG can achieve runtimes of a few milliseconds for query image, which is otherwise only possible with global features. Table <ref> provides results demonstrating the trade-off of runtime and quality. To obtain a cheaper, yet representative quality measure, we compute the EWB using the top-1 retrieved image. The indexing time for CANN-RG is larger due to the fact that we have factor O(logR) more data structures. Preliminary results on general image retrieval To re-emphasize the generalization of the algorithm and it's scalability (20-50ms per query image), we also evaluated it for general image retrieval on the ROxford dataset. Global retrieval benchmarks evaluate the full rank of all indexed images, which requires also scoring the tail of the retrieved images. Since ranking the tail of the index is not typically meaningful for local features, we evaluated a combination of CANN with global features by computing a weighted average of DELG and CANN-RG+HOW or CANN-RG+FIRE, for all image scores. We compare CANN and this combined approach to the SOTA for global/local features. Very recently, a new method called Correlation Verification <cit.> was published which is, to our knowledge the best performing method on the ROxford dataset. Correlation Verification however includes (significantly expensive) spatial verification of local features and is thus not comparable to CANN-RG which doesn't use geometry or spatial reasoning of features (out of the cameras). Like for localization, spatial reasoning is an additional step that can be applied on top of CANN-RG. Table <ref> shows comparisons of SOTA approaches including <cit.> with our proposed approach (bold). Limitations. Using local features throughout the stack requires that the entire map fit in memory. Approaches that use global features can be more easily scaled, in that the local features per spatial region are kept out-of-memory and are only loaded after image retrieval. § CONCLUSIONS In this paper, we proposed CANN, a novel nearest neighbor searching approach that finds the best matches in both appearance and geometry space to improve visual localization using only local features. Unlike the state-of-the-art in the field, which uses global features for image retrieval and local features for 2D-3D matching, our approach uses only local features, while providing significantly better performance than the state-of-the-art at very competitive runtime cost. By providing the relevant metric and theoretical foundation of the algorithm, as well as two efficient algorithmic solutions, we hope to inspire a revived interest in solving visual localization with local features only. § ADDITIONAL QUALITATIVE RESULTS We include additional qualitative results in Figures <ref>,<ref>,<ref>,<ref>,<ref>,<ref> taken from all datasets, showing that CANN retrieves good results also in images with heavy occlusions. Cases like these, where there is only partial overlap between the query image and database images are very difficult for global features. We use HOW <cit.> for local features with both CANN-RG (ours) and ASMK <cit.>. 
The query image is on the left and the top 5 retrieved images are on the right. Our method retrieves all correct images, while other methods occasionally rank incorrect images high among the top 5. We see that some global methods retrieve incorrect images due to scene clutter or high-frequency textures, while CANN provides a diverse set of correct results. In several cases, we see that CANN+HOW outperforms ASMK+HOW. Retrieved images are marked red (bad) or green (good).
http://arxiv.org/abs/2307.00362v2
20230701151922
Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few Leaves
[ "Emmanuel Sam", "Benjamin Bergougnoux", "Petr A. Golovach", "Nello Blaser" ]
cs.DS
[ "cs.DS" ]
Kernelization for Finding Lineal Topologies with Many or Few Leaves E. Sam, B. Bergougnoux, P. Golovach, N. Blaser Department of Informatics, University of Bergen, Norway {emmanuel.sam,petr.golovach, nello.blaser}@uib.no Institute of Informatics, University of Warsaw, Poland [email protected] Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few LeavesThe research leading to these results has received funding from the Research Council of Norway via the projects (PCPC) (grant no. 274526) and BWCA (grant no. 314528). Emmanuel Sam10000-0001-7756-0901 Benjamin Bergougnoux20000-0002-6270-3663 Petr A. Golovach 10000-0002-2619-2990 Nello Blaser 10000-0001-9489-1657 ======================================================================================================================================================================================================================================================================= For a given graph G, a depth-first search (DFS) tree T of G is an r-rooted spanning tree such that every edge of G is either an edge of T or is between a descendant and an ancestor in T. A graph G together with a DFS tree is called a lineal topology 𝒯 = (G, r, T). Sam et al. (2023) initiated study of the parameterized complexity of the Min-LLT and Max-LLT problems which ask, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most k and at least k leaves, respectively. Particularly, they showed that for the dual parameterization, where the tasks are to find DFS trees with at least n-k and at most n-k leaves, respectively, these problems are fixed-parameter tractable when parameterized by k. However, the proofs were based on Courcelle's theorem, thereby making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with (k^3) vertices. In particular, this implies FPT algorithms running in k^(k)· n^O(1) time. We achieve these results by making use of a (k)-sized vertex cover structure associated with each problem. This also allows us to demonstrate polynomial kernels for Min-LLT and Max-LLT for the structural parameterization by the vertex cover number. § INTRODUCTION Depth-first search (DFS) is a well-known fundamental technique for visiting the vertices and exploring the edges of a graph <cit.>. For a given connected undirected graph with vertex set V(G) and edge set E(G), DFS explores E(G) by always choosing an edge incident to the most recently discovered vertex that still has unexplored edges. A selected edge, either leads to a new vertex or a vertex already discovered by the search. The set of edges that lead to a new vertex during the DFS define an r-rooted spanning tree T of G, called a depth-first spanning (DFS) tree, where r is the vertex from which the search started. This tree T has the property that each edge that is not in T connects an ancestor and a descendant of T. All rooted spanning trees of a finite graph with this property, irrespective of how they are computed, such as a Hamiltonian path, are generalized as trémaux trees <cit.>. Given a graph G and a DFS tree T rooted at a vertex r ∈ V(G), it is easy to see that the family 𝒯 of subsets of E(G) induced by the vertices in all subtrees of T with the same root r as T constitute a topology on E(G). For this reason, the triple (G, T, r) has been referred to as the lineal topology (LT) of G in <cit.>. 
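As a concrete illustration of these definitions, the sketch below builds a DFS (trémaux) tree from a chosen root, verifies the defining lineal property (every non-tree edge joins a vertex and one of its ancestors), and lists the leaves of the rooted tree. It assumes the graph is given as an adjacency dictionary {vertex: list of neighbours}; helper names are ours and recursion is used only for clarity.

```python
import sys

def dfs_tree(adj, r):
    """Recursive DFS from root r; returns the parent map of the DFS spanning tree."""
    sys.setrecursionlimit(100_000)
    parent = {r: None}
    def visit(v):
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
                visit(u)
    visit(r)
    return parent

def is_lineal(adj, parent):
    """Check that every non-tree edge connects a vertex with one of its ancestors
    (this always holds for a tree produced by dfs_tree; the check is a sanity test)."""
    def is_ancestor(a, v):          # walk parent pointers from v up to the root
        while v is not None:
            if v == a:
                return True
            v = parent[v]
        return False
    return all(parent.get(u) == v or parent.get(v) == u or
               is_ancestor(u, v) or is_ancestor(v, u)
               for v in adj for u in adj[v])

def leaves(adj, parent):
    """Leaves of the rooted DFS tree: vertices that are not the parent of anyone."""
    internal = set(parent.values()) - {None}
    return [v for v in adj if v not in internal]
```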
Many existing applications of DFS and DFS trees — such as planarity testing and embedding <cit.>, finding connected and biconnected components of undirected graphs <cit.>, bipartite matching <cit.>, and graph layout <cit.> — only require one to find an arbitrary DFS tree of the given graph, which can be done in time O(n + m), where n and m are the number of vertices and edges of the graph. An application of a DFS tree, noted by Fellows et al. <cit.>, that calls for a DFS tree with minimum height is the use of DFS trees to structure the search space of backtracking algorithms for solving constraint satisfaction problems <cit.>. This motivated the authors to study the complexity of finding DFS trees of a graph G that optimize or near-optimize the maximum length or minimum length of the root-to-leaf paths in the DFS trees of G. They showed that the related decision problems are NP-complete and do not admit a polynomial-time absolute approximation algorithm unless P = NP. In this paper, we look at the Minimum Leafy LT (Min-LLT) and Maximum Leafy LT (Max-LLT) problems introduced by Sam et al. <cit.>. Given a graph G and an integer k≥ 0, Min-LLT and Max-LLT ask whether G has a DFS tree with at most k and at least k leaves, respectively. These two problems are related to the well-known NP-complete Minimum Leaf Spanning Tree (Min-LST) and Maximum Leaf Spanning Tree (Max-LST) <cit.>. Sam et al. <cit.> proved that Min-LLT and Max-LLT are NP-hard. Moreover, they proved that when parameterized by k, Min-LLT is para-NP-hard and Max-LLT is W[1]-hard. They also considered the dual parameterizations, namely, Dual Min-LLT and Dual Max-LLT, where the tasks are to find DFS trees with at least n-k and at most n-k leaves, respectively. They proved that Dual Min-LLT and Dual Max-LLT are both FPT parameterized by k. These FPT algorithms are, however, based on Courcelle's theorem <cit.>, which relates the expressibility of a graph property in monadic second order (MSO) logic to the existence of an algorithm that solves the problem in FPT-time with respect to treewidth <cit.>. As a by-product, their running times have a high exponential dependence on the treewidth and the length of the MSO formula expressing the property. §.§ Our Results We prove that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the given graph. Formally, we prove the following theorem. Min-LLT and Max-LLT admit kernels with (τ^3) vertices when parameterized by the vertex cover number τ of the input graph. Based on these kernels, we show that Dual Min-LLT, and Dual Max-LLT admit polynomial kernels parameterized by k. Dual Min-LLT and Dual Max-LLT admit kernels with (k^3) vertices. This last result follows from a win-win situation as either (1) the input graph has a large vertex cover in terms of k and, consequently, both problems are trivially solvable or (2) the input graph has a small vertex cover, and we can use Theorem <ref>. Finally, we use our polynomial kernels to prove that Dual Min-LLT, and Dual Max-LLT admit FPT algorithms parameterized by k with low exponential dependency. Dual Min-LLT and Dual Max-LLT can be solved in k^(k)· n^(1) time. As the previously known FPT algorithm for each of these problems was based on Courcelle's theorem, our algorithms are the first FPT-algorithms constructed explicitly. §.§ Related Results Lu and Ravi <cit.> proved that the Min-LST, problem has no constant factor approximation unless P = NP. 
From a parameterization point of view, Prieto et al.<cit.> showed that this problem is W[P]-hard parameterized by the solution size k. The Max-LST problem is, however, FPT parameterized by k and has been studied extensively <cit.>. Dual Min-LLT is related to the well-studied k-Internal Spanning Tree problem <cit.>, which asks to decide whether a given graph admits a spanning tree with at most n-k leaves (or at least k internal vertices). Prieto et al.<cit.> were the first to show that the natural parameterized version of k-Internal Spanning Tree has a ^*(2^klogk)-time FPT algorithm and a (k^3)-vertex kernel. Later, the kernel was improved to (k^2), (3k), and (2k) by Prieto et al., Fomin et al.<cit.>, and Li et al. <cit.> respectively. The latter authors also gave what is now the fastest FPT algorithm for k-Internal Spanning Tree, which runs in ^*(4^k) time. An independency tree (IT) is a variant of a spanning tree whose leaves correspond to an independent set in the given graph. Given a connected graph on n ≥ 3, G has no IT if it has no DFS tree in which the leaves and the root are pairwise nonadjacent in G <cit.>. From a parameterization point of view, the Min Leaf IT (Internal) and Max Leaf IT (Internal) problems <cit.>, which ask, given a graph G and an integer k ≥ 0, whether G has an IT with at least k and at most k internal vertices, respectively, are related to Dual Min-LLT and Dual Max-LLT, respectively. Casel et al. <cit.> showed that, when parameterized by k, Min Leaf IT (Internal) has an ^*(4^k)-time algorithm and a 2k vertex kernel. They also proved that Max Leaf IT (Internal) parameterized by k has a ^*(18^k)-time algorithm and a (k2^k)-vertex kernel, but no polynomial kernel unless the polynomial hierarchy collapses to the third level. Their techniques, however, do not consider the properties of a DFS tree and, therefore, do not work for our problems. §.§ Organization of the paper Section <ref> contains basic terminologies relevant to graphs, DFS trees, and parameterized complexity necessary to understand the paper. In section <ref>, we first prove a lemma about how, given a graph G and a vertex cover of G, the internal vertices of any spanning tree of G relate to the given vertex cover. We then use this lemma to demonstrate a polynomial kernel for Min-LLT and Max-LLT for the structural parameterization by the vertex cover number of the graph. This is followed by the kernelization algorithms for Dual Min-LLT and Dual Max-LLT parameterized by k. In section <ref>, we devise FPT algorithms for Dual Min-LLT and Dual Max-LLT based on their polynomial kernels. Finally, we conclude the paper in section <ref> with remarks concerning future studies. § PRELIMINARIES We consider only simple finite graphs. We use V(G) and E(G) to denote the sets of vertices and edges, respectively, of a graph G. For a graph G, we denote the number of vertices |V(G)| and the number of edges |E(G)| of G by n and m, respectively, if this does not create confusion. For any vertex v ∈ V(G), the set N_G(v) denotes the neighbors of v in G and N_G[v] denotes its closed neighborhood N_G(v) ∪{v} in G. For a set of vertices X⊆ V, N_G(X)=(⋃_v∈ XN_G(v))∖ X. We omit the G in the subscript if the graph is clear from the context. For a vertex v, its degree is d_G(v)=|N_G(v)|. Given any two graphs G_1=(V_1, E_1) and G_2=(V_2, E_2), if V_1 ⊆ V_2 and E_1 ⊆ E_2 then G_1 is a subgraph of G_2, denoted by G_1 ⊆ G_2. 
If G_1 contains all the edges uv ∈ E_2 with u,v ∈ V_1, then we say G_1 is an induced subgraph of G_2, or V_1 induces G_1 in G_2, denoted by G[V_1]. If G_1 is such that it contains every vertex of G_2, i.e., if V_1 = V_2 then G_1 is a spanning subgraph of G_2. Given a set of vertices X ⊆ V(G), we express the induced subgraph G[V(G)∖ X] as G-X. If X={x}, we write V(G)∖ x instead of V(G)∖{x} and G-x instead of G-{x}. Given a graph G, a set of vertices S ⊆ V(G) is a vertex cover of G if, for every edge uv ∈ E(G), either u ∈ S or v ∈ S; the vertex cover number of G, denoted by τ(G), is the minimum size of a vertex cover. A set Y ⊆ V(G) is called an independent set, if for every vertex pair u, v ∈ Y, uv ∉ E(G). A matching M in a given graph G is a set of edges, no two of which share common vertices. A pendant vertex is a vertex with degree one. For definitions of basic tree terminologies including root, child, parent, ancestor, and descendant, we refer the reader to <cit.>. Given a graph G, we denote a spanning tree of G rooted at a vertex r ∈ V(G) by (T,r). When there is no ambiguity, we simply use T instead of (T,r). For a rooted tree T, a vertex v is a leaf if it has no descendants and v is an internal vertex if otherwise. A spanning tree T with a root r is a DFS tree rooted in r if for very every edge uv∈ E(G), either uv∈ E(T), or v is a descendant of u in T, or u is a descendant of v in T. Equivalently, T is a DFS tree if it can be produced by the classical depth-first search (DFS) algorithm <cit.>. We say that a path P in a rooted tree T is a root-to-leaf path if one of its end-vertices is the root and the other is a leaf of T. Now we review some important concepts of Parameterized complexity (PC) relevant to the work reported herein. For more details about PC, we refer the reader to <cit.>. Let Σ be a fixed finite alphabet. A parameterized problem is a language P ⊆Σ^∗×. Given an instance (x,k) ∈Σ^∗× of a parameterized problem, k ∈ is called the parameter, and the task is to determine whether (x,k) belongs to P. A parameterized problem P is classified as fixed-parameter tractable (FPT) if there exists an algorithm that answers the question (x,k) ∈ P? in time f(k)· poly(|x|), where f: → is a computable function. A kernelization algorithm, or simply a kernel, for a parameterized problem P is a function ϕ that maps an instance (x,k) of P to an instance (x',k') of P such that the following properties are satisfied: * (x,k) ∈ P if and only if (x',k') ∈ P, * k'+|x'| ≤ g(k) for some computable function g:→, and * ϕ is computable in time polynomial in |x| and k. If the upper-bound g(·) of the kernel (Property <ref>) is polynomial (linear) in terms of the parameter k, then we say that P admits a polynomial (linear) kernel. It is common to write a kernelization algorithm as a series of reduction rules. A reduction rule is a polynomial-time algorithm that transform an instance (x,k) to an equivalent instance (x',k') such that Property  <ref> is fulfilled. Property <ref> is referred to as the safeness or correctness of the rule. § KERNELIZATION In this section, we demonstrate polynomial kernels for Dual Min-LLT and Dual Max-LLT. But first, we show that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the input graph. The following simple lemma is crucial for our kernelization algorithms. Let G be a connected graph and let S be a vertex cover of G. 
Then every rooted spanning tree T of G has at most 2|S| internal vertices and at most |S| internal vertices are not in S. Let T be a rooted spanning tree tree of G with a set of internal vertices X. For every vertex v of T, we denote by (v) the set of its childred in T. For each internal vertex v of T, we have (v)≠∅ and if v∉ S, then (v)⊆ S because S is a vertex cover of G. Moreover, for any distinct internal vertices u and v of T, (u)∩(v)=∅. Given X∖ S= {v_1,…,v_t}, we deduce that (v_1),…,(v_t) are pairwise disjoint and non-empty subsets of S. We conclude that |X∖ S|≤ |S| and |X|≤ 2 |S|. We also use the following folklore observation. The set of internal vertices of any DFS tree T of a connected graph G is a vertex cover of G. To see the claim, it is sufficient to observe that any leaf of a DFS tree T is adjacent in G only to its ancestors, that is, to internal vertices. We use Lemma <ref> to show that, given a vertex cover, we can reduce the size of the input graph for both Min-LLT and Max-LLT. There is a polynomial-time algorithm that, given a connected graph G together with a vertex cover S of size s, outputs a graph G' with at most s^2(s-1)+3s vertices such that for every integer t≥ 0, G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. Let G be a connected graph and let S be a vertex cover of G of size s. As the lemma is trivial if s=0, we assume that s≥ 1. Denote I=V(G)∖ S; note that I is an independent set. We apply the following two reduction rules to reduce the size of G. The first rule reduces the number of pendant vertices. To describe the rule, denote by (v) for v∈ S the set of pendant vertices of I adjacent to v. RuleRule [bth] v∈ S |(v)| > 2delete all but two vertices in (v) from G 1() To see that Rule <ref> is safe, denote by G' the graph obtained from G by the application of the rule. Notice that for every v∈ S, at most one vertex of (v) is the root and the other vertices are leaves that are children of v in any rooted spanning tree T of G. Let T be a DFS tree of G rooted in r with t internal vertices. Because for every v∈ S, the vertices of (v) have the same neighborhood in G and Rule <ref> does not delete all the vertices of (v), we can assume without loss of generality that r∈ V(G'). Let T'=T[V(G')]. Because the deleted vertices are leaves of T, we have that T' is a tree and, moreover, T' is a DFS tree of G' rooted in r. Clearly, each internal vertex of T' is an internal vertex of T. Let v∈ S be a vertex such that |(v)|>2. Then v has a pendant neighbor u≠ r in G' and u should be a child of v in T'. Thus, v is an internal vertex of T'. This implies that every leaf v of T' is not adjacent to any vertex of V(G)∖ V(G') in G. Hence, v is a leaf of T. Because the deleted vertices are leaves of T, we obtain that a vertex v∈ V(G) is an internal vertex of T if and only if v is an internal vertex of T'. Then T and T' have the same number of internal vertices. For the opposite direction, let T' be a DFS tree of G' rooted in r with t internal vertices. We construct the tree T from T' by adding each deleted vertex u as a leaf to T': if u∈ V(G)∖ V(G'), then u∈(v) for some v∈ S and we add u as a leaf child of v. Because the deleted vertices are pendants, we have that T is a DFS tree of G. Observe that each internal vertex of T' remains internal in T. 
In the same way as above, we observe that a vertex v∈ S with |(v)|>2 cannot be a leaf of T', because v has a pendant neighbor in G' distinct from r that should be a child of v. Hence, every leaf v of T' is not adjacent to any vertex of V(G)∖ V(G') in G and, therefore, is a leaf of T. Since the deleted vertices are leaves of T, we obtain that a vertex v∈ V(G) is an internal vertex of T if and only if v is an internal vertex of T'. Thus, T and T' have the same number of internal vertices. This concludes the safeness proof. The next rule is used to reduce the number of nonpendant vertices of I. For each pair of vertices u, v ∈ S, we use common neighbor of u and v to refer to a vertex w ∈ I that is adjacent to both u and v and denote by W_uv the set of common neighbors of u and v. Rule <ref> is based on the observation that if the size of W_uv for any vertex pair u,v ∈ S is at least 2s+1, then it follows from Lemma <ref> that every spanning tree T contains at most s internal vertices and at least s+1 leaves from W_uv. We prove that it is enough to keep at most 2s vertices from W_uv for each u,v∈ S. pairs {u,v} of distinct vertices of SLabel max{|W_uv|,2s} vertices in W_uv Delete the unlabeled vertices of I with at least two neighbors in S from G. 2() To show that Rule <ref> is safe, let x∈ I be a vertex with at least two neighbors in S which is not labeled by Rule <ref>. Let G'=G-x. We claim that G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. We use the following auxiliary claim, the proof of which can be found in Appendix <ref>. (i) For any DFS tree T of G, the vertices of N_G(x) are vertices of a root-to-leaf path of T. (ii) For any DFS tree T' of G', the vertices of N_G(x) are vertices of a root-to-leaf path of T'. (iii) For any DFS tree T' of G', every vertex of N_G(x) is an internal vertex of T'. We use Claim <ref> to show the following property. If G has a DFS tree with t internal vertices, then G has a DFS tree T with t internal vertices such that x is a leaf of T. Let T be a DFS tree of G with a root r that has exactly t internal vertices. We prove that if x is an internal vertex of T, then T can be modified in such a way that x would become a leaf. Observe that by Claim <ref> (i), x has a unique child v in T. We have two cases depending on whether x=r or has a parent u. Suppose first that x=r. By Claim <ref>, the neighbors of x in G are vertices of some root-to-leaf path of T. Let u be the neighbor of x at maximum distance from r in T. Because d_G(x)≥ 2, u≠ v. Since x is not labeled by Rule <ref>, |W_uv|>2s. By Lemma <ref>, there are at least s+1 vertices W_uv that are leaves of T. These leaves have their parents in S which has size s. By the pigeonhole principle, there are distinct leaves w,w'∈ W_uv with the same parent. We rearrange T by making w a root with the unique child v and making x a leaf with the parent u. Denote by T' the obtained tree. Because x is adjacent to u and some of its ancestors in T and w is adjacent only to some of its ancestors in T, we conclude that T' is a feasible DFS tree. Notice that w which was a leaf of T became an internal vertex of T' and x that was an internal vertex is now a leaf. Because x is a leaf of T', we have that T”=T'-x is a DFS tree of G' rooted in w. By Claim <ref> (iii), u is an internal vertex of T”. This implies that u is an internal vertex of both T and T'. Since the parent of w in T has w'≠ w as a child, we also have that w is an internal vertex of both T and T'. 
Therefore, T and T' have the same number of internal vertices. This proves that G has a DFS tree T' with t internal vertices such that x is a leaf of T'. Assume now that x has a parent u in T. By Claim <ref>, the neighbors of x in G are vertices of some root-to-leaf path of T. Denote by v' be the neighbor of x at maximum distance from r in T; it may happen that v'=v. As x is not labeled by Rule <ref>, |W_uv|>2s. Then by Lemma <ref>, there are at least s+1 vertices W_uv that are leaves of T. These leaves have their parents in S which has size s. By the pigeonhole principle, there are distinct leaves w,w'∈ W_uv with the same parent. We rearrange T by making w a child of u and a parent of v and making x a leaf with the parent v'. Denote by T' the obtained tree. Because x is adjacent to v' and some of its ancestors in T and w is adjacent only to some of its ancestors in T, including u and v, we have that T' is a feasible DFS tree. Notice that w was a leaf of T and is now an internal vertex of T', while x was an internal vertex in T and is now a leaf in T'. Because x is a leaf of T', we have that T”=T'-x is a DFS tree of G' rooted in w. By Claim <ref> (iii), v' is an internal vertex of T”. Therefore, v' is an internal vertex of both T and T'. Since the parent of w in T has w'≠ w as a child, we also have that w is an internal vertex of both T and T'. Thus, T and T' have the same number of internal vertices. We obtain that G has a DFS tree T' with t vertices such that x is a leaf of T'. This concludes the proof. Now we are ready to proceed with the proof that G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. For the forward direction, let T be a DFS tree of G with t internal vertices. By Claim <ref>, we can assume that x is a leaf of T. Let T'=T-x. Because x is a leaf of T, T' is a DFS tree of G'. Let u be the parent of x in T. Because u is adjacent to x in G, we have that u is an internal vertex of T' by Claim <ref> (iii). This means that the number of internal vertices of T and T' is the same, that is, G' has a DFS tree with t vertices. For the opposite direction, let T' be a DFS tree of G' with t internal vertices with a root r. By Claim <ref> (ii), the neighbors of x in G are vertices of some root-to-leaf path in T'. Let v be the neighbor of x at maximum distance from r in T'. We construct T by making x a leaf with the parent v. Because x is adjacent in G only to v and some of its ancestors in T', T is a DFS tree. By Claim <ref>(iii), v is an internal vertex of T'. Therefore, T' and T have the same set of internal vertices. We obtain that G has a DFS tree with t vertices. This concludes the proof of our claim. Recall that G' was obtained from G by deleting a single unlabeled vertex x∈ I of degree at least two. Applying the claim that G has a DFS tree with exactly t internal vertices if and only if G'=G-x has a DFS tree with exactly t internal vertices inductively for unlabeled vertices of I of degree at least two, we obtain that Rule <ref> is safe. Denote now by G' the graph obtained from G by the application of Rules <ref> and <ref>. Because both rules are safe, for any integer t≥ 0, G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. Because of Rule <ref>, G'-S has at most 2s pendant vertices. Rule <ref> guarantees that G'-S has at most 2ss2=s^2(s-1) vertices of degree at least two. Then the total number of vertices of G' is at most s^2(s-1)+2s+s=s^2(s-1)+3s. 
It is straightforward to see that Rule <ref> can be applied in (sn) time and Rule <ref> can be applied in (s^2n) time. Therefore, the algorithm is polynomial. This concludes the proof. As a direct consequence of Lemma <ref> we obtain that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the input graph. We are ready to prove our kernels parameterized by vertex cover. We show the theorem for Min-LLT; the arguments for Max-LLT are almost identical. Recall that the task of Min-LLT is to decide, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most k leaves. Equivalently, we can ask whether G has a DFS tree with at least |V(G)|-k internal vertices. Let (G,k) be an instance of Min-LLT. We assume that G is connected as, otherwise, (G,k) is a no-instance and we can return a trivial no-instance of Min-LLT of constant size. First, we find a vertex cover S of G. For this, we apply a folklore approximation algorithm (see, e.g., <cit.>) that greedily finds an inclusion-maximal matching M in G and takes the set S of endpoints of the edges of M. It is well-known that |S|≤ 2τ. Then we apply the algorithm from Lemma <ref>. Let G' be the output graph. By Lemma <ref>, G' has (τ^3) vertices. We set k'=k-|V(G)|+|V(G')| and return the instance (G',k') of Min-LLT. Suppose that G has a DFS tree with at most k leaves. Then G has a DFS tree with t≥ |V(G)|-k internal vertices. By Lemma <ref>, G' also has a DFS tree with t internal vertices. Then G' has a DFS tree with |V(G')|-t≤ |V(G')|-(|V(G)|-k)=k' leaves. For the opposite direction, assume that G' has a DFS tree with at most k' leaves. Then G' has a DFS tree with t≥ |V(G')|-k'=|V(G)|-k internal vertices. By Lemma <ref>, G has a DFS tree with t internal vertices and, therefore, G has a DFS tree with at most k leaves. Because S can be constructed in linear time and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This concludes the proof. Now we demonstrate a polynomial kernel for Dual Min-LLT. Dual Min-LLT admits a kernel with (k^3) vertices. Recall that the task of Dual Min-DLL is to verify, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most n-k leaves. Equivalently, the task is to check whether G has a DFS tree with at least k internal vertices. Let (G,k) be an instance of Dual Min-LLT. If G is disconnected, then (G,k) is a no-instance and we return a trivial no-instance of Dual Min-DLL of constant size. From now, we assume that G is connected. We select an arbitrary vertex r of G and run the DFS algorithm from this vertex. The algorithm produces a DFS tree T. Let S be the set of internal vertices of T. If |S|≥ k, then we conclude that (G,k) is a yes-instance. Then the kernelization algorithm returns a trivial yes-instance of Dual Min-LLT of constant size and stops. Assume that this is not the case and |S|≤ k-1. By Observation <ref>, we have that S is a vertex cover of G of size s≤ k-1. We use S to call the algorithm from Lemma <ref>. Let G' be a graph produced by the algorithm. By Lemma <ref>, G' has (k^3) vertices. Our kernelization algorithm returns (G',k) and stops. To see correctness, it is sufficient to observe that by Lemma <ref>, for any integer t≥ k, G has a DFS tree with t internal vertices if and only if G' has a DFS tree with t internal vertices. Because the DFS algorithm runs in linear time (see, e.g., <cit.>) and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This completes the proof. 
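The following is a minimal sketch of the opening steps of this kernelization for Dual Min-LLT, reusing the dfs_tree helper from the earlier sketch. Rule 1 (keep at most two pendant neighbours per cover vertex) is shown as a representative reduction; Rule 2 is omitted, and all names are ours rather than from the paper.

```python
def dual_min_llt_preprocess(adj, k):
    """Run one DFS from an arbitrary root; if it already has >= k internal vertices
    the instance is a yes-instance, otherwise its internal vertices form a vertex
    cover S with |S| <= k-1 that is handed to the reduction rules."""
    r = next(iter(adj))
    parent = dfs_tree(adj, r)                 # helper from the earlier sketch
    S = set(parent.values()) - {None}         # internal vertices = a vertex cover
    if len(S) >= k:
        return "yes-instance", None
    return "reduce", S

def rule1_trim_pendants(adj, S):
    """Rule 1 sketch: for each cover vertex v, keep at most two of its pendant
    (degree-one) neighbours outside S and delete the remaining ones."""
    keep = set()
    for v in S:
        pend = [u for u in adj[v] if u not in S and len(adj[u]) == 1]
        keep |= set(pend[:2])
    drop = {u for u in adj if u not in S and len(adj[u]) == 1 and u not in keep}
    return {v: [u for u in nbrs if u not in drop]
            for v, nbrs in adj.items() if v not in drop}
```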
We use similar arguments to prove the following theorem in Appendix <ref>. Dual Max-LLT admits a kernel with (k^3) vertices. Theorems <ref> and <ref> implies Theorem <ref>. § FPT ALGORITHMS In this section, we give algorithms that solve Dual Min-LLT and Dual Max-LLT in FPT time using the kernels given in the previous section. Our algorithms are brute force algorithms which guess internal vertices. Recall that the standard DFS algorithm <cit.> outputs a labeled spanning tree. More formally, given an n-vertex graph and a root vertex r, the algorithm outputs a DFS tree T rooted in r and assigns to the vertices of G distinct labels d[v] from {1,…,n} giving the order in which the vertices were discovered by the algorithm. Thus, the algorithm outputs a linear ordering of vertices. Given an ordering v_1,…,v_n of V(G), we say that a DFS tree T respects the ordering if T is produced by the DFS algorithm in such a way that d[v_i]=i for every i∈{1,…,n}. Observe that for an ordering of the vertices of G, there is a unique way to run the DFS algorithm to obtain T respecting the ordering. This gives us the following observation. It can be decided in linear time, given an ordering v_1,…,v_n of the vertices of a graph G, whether G has a DFS tree respecting the ordering. Furthermore, if such a tree T exists, it is unique and can be constructed in linear time. Let G be a graph and let r∈ V(G). For a tree T⊆ G with r∈ V(T), we say that T is extendable to a DFS tree rooted in r, if there is a DFS tree T' of G rooted in r such that T is a subtree of T'. We call T' an extension of T. The definition of a DFS tree immediately gives us the following necessary and sufficient conditions for the extendability of T. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) for every connected component C of G-V(T), the vertices of N_G(V(C)) are vertices of a root-to-leaf path of T. Note that (i) and (ii) can be verified in polynomial (in fact, linear) time. We need the following variants of Observation <ref> for special extensions in our algorithms. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r with an extension T' such that the vertices of V(T) are internal vertices of T' if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) for every connected component C of G-V(T), the vertices of N_G(V(C)) are vertices of a root-to-leaf path of T, (iii) for every leaf v of T, there is u∈ V(G)∖ V(T) that is adjacent to v. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r with an extension T' such that the vertices of L=V(G)∖ V(T) are leaves of T' if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) L is an independent set, (iii) for every v∈ L, the vertices of N_G(v) are vertices of a root-to-leaf path of T. Now, we are ready to describe our algorithms. For the proof of Lemma <ref>, see Appendix <ref>. Dual Min-LLT and Dual Max-LLT can be solved in n^(k) time. Combining Lemma <ref> and Theorem <ref> implies Theorem <ref> by providing k^(k)· n^(1) time algorithms for the dual problems. § CONCLUSION We have shown that Dual Min-LLT and Dual Max-LLT admit kernels with (k^3) vertices and can be solved in k^(k)· n^(1) time. A natural question is whether the problems have linear kernels, such as for k-Internal Spanning Tree <cit.>. 
Another question is whether the problems can be solved by single-exponential FPT algorithms. As a byproduct of our kernelization algorithms for Dual Min-LLT and Dual Max-LLT, we also proved that Min-LLT and Max-LLT admit polynomial kernels for the structural parameterization by the vertex cover number. It is natural to wonder whether polynomial kernels exist for other structural parameterizations. In particular, it could be interesting to consider the parameterization by the feedback vertex number, i.e., by the minimum size of a vertex set X such that G-X is a forest. §.§.§ Acknowledgements We acknowledge support from the Research Council of Norway grant “Parameterized Complexity for Practical Computing (PCPC)” (NFR, no. 274526) and “Beyond Worst-Case Analysis in Algorithms (BWCA)” (NFR, no. 314528). splncs04 § PROOF OF CLAIM <REF> IN THE PROOF OF LEMMA <REF> We show (i) by contradiction. Assume that there are u,v∈ N_G(x) such that the lowest common ancestor w of these vertices is distinct from u and v. Because x is not labeled by Rule <ref>, |W_uv|>2s. Hence, by Lemma <ref>, there is a vertex z∈ W_uv such that z is a leaf of T. However, any leaf in a DFS tree of T can be adjacent only to its ancestors in T. This contradiction proves the claim. We use exactly the same arguments to prove (ii) by replacing T by T' and observing that S is a vertex cover of G'. To show (iii), let T' be a DFS tree with a root r. By (ii), there is a leaf y such that the vertices of N_G(x) are vertices of the (r,y)-path in T'. Observe that y may be not unique. We prove that y∉ N_G(x). For the sake of contradiction, assume that x and y are adjacent. Because d_G(x)≥ 2, x has a neighbor u≠ x. Because x is not labeled by Rule <ref>, |W_uy|>2s. By Lemma <ref>, we obtain that there is v∈ W_uy that is a leaf of T'. We have that vy∈ E(G') but two leaves of a DFS tree cannot be adjacent; a contradiction. This proves that y∉ N_G(x) and concludes the proof of the claim. § PROOF OF THEOREM <REF> The aim of Dual Max-LLT is to decide, given a graph G and an integer k≥ 0, whether G has a DFS tree with at least n-k leaves. This is equivalent to asking whether G has a DFS tree with at most k internal vertices. Let (G,k) be an instance of Dual Max-LLT. If G is disconnected, then (G,k) is a no-instance, and we return a trivial no-instance of Dual Max-DLL of constant size. From now, we assume that G is connected. If T is a DFS tree, then the set of internal vertices of T is a vertex cover of G by Observation <ref>. Hence, if G has a DFS tree with at most k internal vertices, then τ(G)≤ k. We approximate τ(G) by selecting greedily an inclusion-maximal matching M in G (see, e.g., <cit.>). If |M|>k, then we conclude that τ(G)>k and return a trivial no-instance of Dual Max-DLL of constant size. Assume that this is not the case. Then we take S as the set of endpoints of the edges of M and observe that S is a vertex cover of size at most 2k. We call the algorithm from Lemma <ref> for G and S, which outputs a graph G' with (k^3) vertices. The kernelization algorithm returns the instance (G',k) of Dual Max-DLL and stops. To see the correctness, note that by Lemma <ref>, for any nonnegative integer t≤ k, G has a DFS tree with t internal vertices if and only if G' has a DFS tree with t internal vertices. Because M can be constructed in linear time and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This completes the proof. 
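For completeness, the folklore matching-based vertex cover approximation used in the proof above can be sketched as follows (function name ours); the trailing comment records the rejection shortcut it enables for Dual Max-LLT.

```python
def matching_vertex_cover(adj):
    """Greedy 2-approximation: take the endpoints of an inclusion-maximal matching;
    the result is a vertex cover of size at most 2 * tau(G)."""
    cover, matched = set(), set()
    for v in adj:
        if v in matched:
            continue
        for u in adj[v]:
            if u not in matched:
                matched |= {u, v}
                cover |= {u, v}
                break
    return cover

# In the Dual Max-LLT kernel: if the matching has more than k edges
# (len(cover) > 2*k), the instance is rejected outright, because the internal
# vertices of any DFS tree form a vertex cover and hence tau(G) > k.
```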
§ PROOF OF LEMMA <REF> IN SECTION <REF> First, we give an algorithm for Dual Min-LLT. Let (G,k) be an instance of the problem. If G is disconnected, then (G,k) is a no-instance. Assume that this is not the case. Also, we have a trivial no-instance if n≤ k and we assume that n≥ k. Recall that the equivalent task of Dual Min-LLT is to decide, given a graph G and an integer k, whether G has a DFS tree with at least k internal vertices. We guess a set S of k internal vertices containing a root of a solution DFS tree T forming a subtree T'=T[S]. To guess T' and S, we apply Observation <ref> using the fact that T' should be a DFS tree of G[S]. Formally, we consider all k-tuples (v_1,…,v_k) of distinct vertices of G. For each k-tuple, we check whether there is a DFS tree T' of G[S], where S={v_1,…,v_k}, respecting the ordering v_1,…,v_k using Observation <ref>. If such a tree T' exists, we use Observation <ref> to check whether T' has an extension T such that the vertices of S are internal vertices of T. If we find such a k-tuple, we conclude that (G,k) is a yes-instance of Dual Min-LLT. Otherwise, if we fail to find T' and a required extension for all k-tuples, we conclude that (G,k) is a no-instance of Dual Min-LLT. The correctness of the algorithm immediately follows from Observations <ref> and <ref>. Because we have at most n^k k-tuples of vertices, we obtain that the overall running time is n^(k). We use a similar strategy for Dual Max-LLT. Recall that now the task is to decide whether a graph G has a DFS tree with at most k internal vertices. Let (G,k) be an instance of the problem. As above, we can assume that G is connected. Also, if n≤ k, then (G,k) is a yes-instance and we can assume that n>k. We guess a set S of k vertices containing a root and the internal vertices of a solution DFS tree T and a subtree T'=T[S]. For this, we consider all k-tuples (v_1,…,v_k) of distinct vertices of G. For each k-tuple, we check whether there is a DFS tree T' of G[S], where S={v_1,…,v_k}, respecting the ordering v_1,…,v_k using Observation <ref>. If such a tree T' exists, we use Observation <ref> to check whether T' has an extension T such that the vertices of V(G)∖ S are leaves of T. If we find such a k-tuple, we conclude that (G,k) is a yes-instance of Dual Max-LLT. Otherwise, if we fail to find T' and a required extension for all k-tuples, we conclude that (G,k) is a no-instance of Dual Max-LLT. Observations <ref> and <ref> imply correctness, and the overall running time is n^(k). This concludes the proof.
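To make the n^O(k) procedure explicit, here is a compact didactic sketch of the Dual Min-LLT variant (the Dual Max-LLT variant swaps in the extension conditions of Observation 7). The enumeration over ordered k-tuples is written out with itertools.permutations, the ordering check simulates the unique respecting DFS tree, and all helper names are ours; this is an illustration of the proof, not optimized code.

```python
from itertools import permutations

def dfs_respecting(adj, order, universe):
    """Simulate DFS on the induced subgraph over `universe`, forced to discover the
    vertices exactly in `order`; return the parent map of the unique respecting
    DFS tree, or None if no such tree exists."""
    sub = {v: [u for u in adj[v] if u in universe] for v in universe}
    parent, visited, stack = {order[0]: None}, {order[0]}, [order[0]]
    for v in order[1:]:
        while stack and all(u in visited for u in sub[stack[-1]]):
            stack.pop()                      # backtrack from exhausted vertices
        if not stack or v not in sub[stack[-1]]:
            return None
        parent[v] = stack[-1]
        visited.add(v)
        stack.append(v)
    return parent

def on_root_to_leaf_path(vertices, parent):
    """True iff the given vertices are pairwise ancestor-related in the rooted tree."""
    def ancestors(v):
        chain = set()
        while v is not None:
            chain.add(v)
            v = parent[v]
        return chain
    chains = {v: ancestors(v) for v in vertices}
    return all(u in chains[v] or v in chains[u] for u in vertices for v in vertices)

def components_outside(adj, S):
    """Connected components of G - S."""
    rest, comps = set(adj) - S, []
    while rest:
        comp, todo = set(), [next(iter(rest))]
        while todo:
            v = todo.pop()
            if v in comp:
                continue
            comp.add(v)
            todo += [u for u in adj[v] if u not in S and u not in comp]
        comps.append(comp)
        rest -= comp
    return comps

def dual_min_llt_bruteforce(adj, k):
    """Guess the root and the k internal vertices as an ordered tuple, build the
    respecting DFS tree of G[S], then test the extendability conditions."""
    for order in permutations(adj, k):
        S = set(order)
        T = dfs_respecting(adj, order, S)
        if T is None:
            continue
        leaves_T = S - (set(T.values()) - {None})
        if any(all(u in S for u in adj[v]) for v in leaves_T):
            continue                         # some leaf has no neighbour outside S
        if all(on_root_to_leaf_path({u for v in C for u in adj[v] if u in S}, T)
               for C in components_outside(adj, S)):
            return True                      # each component attaches along one path
    return False
```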
http://arxiv.org/abs/2306.03585v1
20230606110308
Selection principle for the Fleming-Viot process with drift $-1$
[ "Oliver Tough" ]
math.PR
[ "math.PR", "math.AP", "35C07, 35K57, 35Q92, 60J80, 60J85" ]
http://arxiv.org/abs/2306.10118v1
20230616180412
Outshining by Recent Star Formation Prevents the Accurate Measurement of High-z Galaxy Stellar Masses
[ "Desika Narayanan", "Sidney Lower", "Paul Torrey", "Gabriel Brammer", "Weiguang Cui", "Romeel Dave", "Kartheik Iyer", "Qi Li", "Christopher Lovell", "Laura Sales", "Daniel P. Stark", "Federico Marinacci", "Mark Vogelsberger" ]
astro-ph.GA
[ "astro-ph.GA" ]
]Outshining by Recent Star Formation Prevents the Accurate Measurement of High-z Galaxy Stellar Masses Desika Narayanan [email protected] 0000-0002-7064-4309]Desika Narayanan Department of Astronomy, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 USA Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, København N, DK-2200, Denmark 0000-0003-4422-8595]Sidney Lower Department of Astronomy, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 USA 0000-0002-5653-0786]Paul Torrey Department of Astronomy, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 USA 0000-0003-2680-005X]Gabriel Brammer Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, København N, DK-2200, Denmark 0000-0002-2113-4863]Weiguang Cui Departamento de Física Teórica, M-8, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid, Spain Centro de Investigación Avanzada en Física Fundamental (CIAFF), Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid, Spain Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, United Kingdom 0000-0003-2842-9434]Romeel Davé Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK 7Department of Physics and Astronomy University of the Western Cape, Bellville, Cape Town 7535, South Africa 0000-0001-9298-3523]Kartheik G. Iyer Hubble Fellow Columbia Astrophysics Laboratory, Columbia University, 550 West 120th Street, New York, NY 10027, USA 0000-0001-8015-2298]Qi Li Max Planck Institute for Astrophysics, Garching bei Munchen, Germany 0000-0001-7964-5933]Christopher C. Lovell Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX, UK Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QH, UK 0000-0002-3790-720X]Laura V. Sales Department of Physics and Astronomy, University of California, Riverside, CA, 92521, USA 0000-0001-6106-5172]Daniel P. Stark Steward Observatory, University of Arizona, 933 N Cherry Ave, Tucson, AZ 85721, USA 0000-0003-3816-7028]Federico Marinacci Department of Physics & Astronomy “Augusto Righi", University of Bologna, via Gobetti 93/2, 40129 Bologna, Italy 0000-0001-8593-7692]Mark Vogelsberger Department of Physics, Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA In this Letter, we demonstrate that the inference of galaxy stellar masses via spectral energy distribution (SED) fitting techniques for galaxies formed in the first billion years after the Big Bang carries fundamental uncertainties owing to the loss of star formation history (SFH) information from the very first episodes of star formation in the integrated spectra of galaxies. While this early star formation can contribute substantially to the total stellar mass of high-redshift systems, ongoing star formation at the time of detection outshines the residual light from earlier bursts, hampering the determination of accurate stellar masses. As a result, order of magnitude uncertainties in stellar masses can be expected. We demonstrate this potential problem via direct numerical simulation of galaxy formation in a cosmological context. In detail, we carry out two cosmological simulations with significantly different stellar feedback models which span a significant range in star formation history burstiness. 
We compute the mock SEDs for these model galaxies at z=7 via 3D dust radiative transfer calculations, and then backwards fit these SEDs with prospector SED fitting software. The uncertainties in derived stellar masses that we find for z>7 galaxies motivate the development of new techniques and/or star formation history priors to model early Universe star formation. § INTRODUCTION The most common method for determining the stellar mass of a galaxy is through ultraviolet-near infrared spectral energy distribution (SED) modeling. This technique, first developed by <cit.>, <cit.>, and <cit.>, models the expected emission from stellar populations as they evolve over an assumed star formation history (SFH), with the emission reddened by a wavelength-dependent dust attenuation curve <cit.>. This powerful technique is foundational for our current observational understanding of the cosmic evolution of galaxy star formation rates and stellar masses <cit.>, and indeed a diverse range of methodologies for SED fitting have been explored in recent years <cit.>. The assumed form for the model star formation history in SED fitting software is an essential element in deriving galaxy stellar masses. Traditional functional forms for the SFH include constant, exponential declining, burst models, and combinations of these amongst others <cit.>. These parameterized forms for SFH (hereafter, “parametric” SFHs) have parameters describing the (for example) normalization, e-folding time and amplitude of bursts that are varied until a match is found between the synthetic SED produced by the SPS model and the observed data. When a solution is found, the model SFH is then used to infer the stellar mass of the observed galaxy (typically assuming a fixed metallicity). Of course, the assumed form of this SFH can severely impact the modeled stellar mass <cit.>. More recently, a number of codes have explored the impact of more flexible so-called “non-parametric” forms for the model SFH <cit.>. Non-parametric SFH models do not have an explicit functional form as the parametric models, but instead can vary the amplitude of the SFH over a number of redshift or time bins in the modeled history of the galaxy. <cit.> validated the usage of non-parametric SFHs constructed via Gaussian processes by ground-truthing these methods against the the Santa Cruz Semi-Analytic Model (SAM) and the mufasa cosmological simulation <cit.>. Similarly, <cit.> demonstrated the efficacy of non parametric SFH techniques by ground-truthing modeled mock SEDs from galaxy simulations against their true stellar masses. <cit.> found that while traditional parametric forms for the SFH in SED fitting software had uncertainties at the level ∼ 0.4 dex, non-parametric SFH models reduced these uncertainties to a level of ∼ 0.1 dex. <cit.> found that observed galaxies from the 3D-HST catalog <cit.> are systematically more massive and older when using non-parametric SFHs as compared to parametric methods, which bring the observed main sequence in line with theoretical predictions, potentially alleviating the long-standing tension between theory and observations in the SFR of galaxies at z ≈ 2 at a fixed stellar mass <cit.>. With the successful launch of the JWST in 2021, observations are characterizing the physical properties of galaxies at unprecedented redshifts <cit.>. 
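To make concrete how the assumed SFH form discussed above enters the stellar-mass inference, the following sketch compares the mass formed under a parametric delayed-τ SFH with that of a piecewise-constant (non-parametric) SFH over fixed time bins. The bin edges, normalizations, and SFR values are arbitrary illustrative choices, and the surviving-mass (return-fraction) corrections applied by real SPS codes are ignored.

```python
import numpy as np

def mass_formed_delayed_tau(t_gal, tau, norm):
    """Integrate SFR(t) = norm * t * exp(-t/tau) from 0 to t_gal (t in Gyr, SFR in Msun/yr)."""
    t = np.linspace(0.0, t_gal, 2000)
    sfr = norm * t * np.exp(-t / tau)
    return np.trapz(sfr, t) * 1e9          # convert Gyr -> yr

def mass_formed_nonparametric(bin_edges, sfr_per_bin):
    """Piecewise-constant SFH: mass formed = sum(SFR_i * dt_i)."""
    dt = np.diff(bin_edges) * 1e9
    return float(np.sum(np.asarray(sfr_per_bin) * dt))

# illustrative only: a galaxy observed 0.7 Gyr after its first star formation,
# with an early burst that a smooth delayed-tau model cannot easily represent
edges = np.array([0.0, 0.1, 0.3, 0.5, 0.65, 0.7])   # Gyr since first star formation
print(mass_formed_nonparametric(edges, [30.0, 2.0, 1.0, 1.0, 10.0]))
print(mass_formed_delayed_tau(0.7, tau=0.2, norm=50.0))
```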
Beyond providing constraints on the stellar mass buildup of some of the earliest galaxies in the Universe, JWST observations of high-redshift galaxies have been used to demonstrate potential tensions with the standard ΛCDM model <cit.>. What has yet to be explored, however, is the ability of traditional SED fitting methods to accurately derive stellar masses in these earliest galaxies. In this Letter, we use numerical simulations of galaxy formation, combined with post-processed radiative transfer and SED fitting software, to demonstrate that standard SED fitting techniques have a difficult time deriving correct stellar masses in high-z galaxies owing to outshining by recent bursts of star formation. We find that early bursts of star formation can contribute significantly to the total stellar mass of a galaxy. By the time the galaxy is detected at relatively late times (z ≈ 7-10), the current ongoing star formation outshines the light from evolved stars, making the integrated buildup of stellar mass difficult to measure. We demonstrate this effect using similar techniques to <cit.>'s study of the efficacy of non-parametric SFH modeling: we simulate the formation of galaxies in cosmological simulations, forward model their mock SEDs using dust radiative transfer, and then fit these mock SEDs using standard techniques. This allows us to compare to ground truth from the simulations, and assess the efficacy of the fitting procedure. In the remainder of this paper, we expand on these points. In  <ref>, we describe our numerical methods; in  <ref>, we describe the physical and luminous properties of the galaxies that we model; in  <ref>, we demonstrate the main issues when SED fitting very young galaxies; in  <ref>, we provide discussion, and in  <ref>, we summarize our main results. § METHODS §.§ Summary of Methods Our main goal is to create mock SEDs from galaxies formed in a cosmological simulation, and then fit those SEDs as an observer would in order to ground-truth SED fitting techniques for z>7 galaxies. To do this, we first simulate the high-redshift evolution of galaxies by conducting cosmological hydrodynamic galaxy evolution simulations. We then “forward model” the emission from these galaxies by coupling them with dust radiative transfer in order to generate their mock SEDs. With these SEDs in hand, we then “backward model” them in order to derive the inferred galaxy physical properties. Through this methodology, we determine the relationship between the inferred stellar masses of our modeled galaxies from their SEDs, and the true stellar mass. §.§ Galaxy Formation Simulations We employ two rather different cosmological galaxy formation models in order to ensure the robustness of our results. The primary differences in these models as far as this study is concerned are: * The stellar feedback model, which impacts the burstiness of the modeled star formation histories. * The dust model, which impacts the dust content, extinction, and attenuation of the emergent light. In detail, we employ the simba cosmological simulation, as already run by <cit.>, as well as a newly run simulation employing the smuggle explicit stellar feedback model within the arepo code base. We briefly describe these models in detail in turn below. The simba simulation is based on the gizmo cosmological gravity plus hydrodynamic solver <cit.>, and evolves dark matter and gas elements together, including gravity and pressure forces. 
Gas cools radiatively using the grackle library <cit.>, including both metal-line cooling and non-equilibrium evolution of primordial elements. Stars form in molecular H_2 gas following the <cit.> subresolution prescription for determining the HI and H_2 content in a gas particle, as well as the <cit.> prescription for the star formation rate. Here, the star formation efficiency is manually set to ϵ_* = 0.02. The gas itself is artificially pressurized in order to resolve the Jeans mass as described in <cit.>. The upshot of this is that the star formation history for a given galaxy tends to occur more smoothly than in models with an explicit feedback model <cit.>. Dust is modeled within simba following the algorithms outlined in <cit.>. Specifically, dust is included as single-sized particles, though these can comprise multiple species (graphite and silicates). Dust is formed in evolved stars <cit.>, can grow via metal accretion <cit.>, and can be destroyed via thermal sputtering in hot gas <cit.>, supernova blastwaves, and astration in star-forming regions. In addition to the simba simulation, we have run a simulation with the smuggle galaxy formation model enabled in the arepo code <cit.>. We refer the reader to <cit.> for a full description of this model, and highlight only the key differences from the simba simulation as they pertain to our study. Star formation occurs only in gravitationally bound molecular gas <cit.>, and follows a volumetric <cit.> law, though with an efficiency ϵ_* = 1. This is in contrast to the forced inefficiency of star formation in simba because, on long timescales, stellar feedback in the smuggle model self-regulates the star formation rates to result in the relatively low efficiencies observed in molecular clouds <cit.>. Stellar feedback models include supernovae, radiative feedback, stellar winds, and thermal feedback from HII regions. As in simba, dust is also included in the smuggle model. In contrast, however, the dust is modeled with a spectrum of grain sizes that evolves as the dust grains evolve in the simulation. Beyond the aforementioned dust processes that are included in simba, the dust in smuggle is allowed to grow in size via coagulation (sticking together), as well as fragment into smaller grains via grain-grain shattering collisional processes. The upshot of this dust modeling is that the local extinction law is explicitly computed in a spatially resolved sense in galaxies <cit.>, and therefore represents a fundamental difference in the forward modeling of radiative transfer between the smuggle and simba models. We have simulated a 25/h Mpc side-length box with periodic boundary conditions, starting from z=99 with initial conditions generated with music <cit.>. We have run the model at the same mass resolution as the simba simulation (2 × 512^3 particles), though the initial conditions are generated with different random seeds, so the galaxies are not directly mappable from one simulation to another. We allow the simulation to evolve to redshift z=6, and restrict our analysis to this redshift range. §.§ Dust Radiative Transfer In order to generate the mock SEDs (that we will then fit using prospector), we employ the public powderday dust radiative transfer package <cit.>, which employs yt, fsps and hyperion for grid generation, stellar population synthesis, and Monte Carlo radiative transfer, respectively <cit.>. Here, stars that form in the galaxy formation simulation emit a stellar spectrum based on their ages and metallicities.
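As an illustration of this step, the sketch below uses the python-fsps bindings (assumed to be installed together with the compiled FSPS library, which ships with the MIST isochrones by default) to generate simple stellar population spectra for two ages at fixed metallicity. The parameter values are illustrative and this is not the configuration used inside powderday.

```python
import fsps  # python-fsps bindings; requires the compiled FSPS library

# Simple stellar population with a Chabrier IMF (imf_type=1 in FSPS conventions)
sp = fsps.StellarPopulation(
    zcontinuous=1,   # interpolate spectra to an arbitrary metallicity
    imf_type=1,      # Chabrier (2003)
    sfh=0,           # single burst (SSP)
)
sp.params["logzsol"] = -0.5   # log10(Z / Zsun), illustrative

# Compare a very young and an older population of the same formed mass
for age_gyr in (0.01, 0.5):
    wave, spec = sp.get_spectrum(tage=age_gyr, peraa=True)   # Lsun/AA per Msun formed
    uv = spec[(wave > 1400) & (wave < 1600)].mean()          # crude 1500 AA luminosity proxy
    print(f"age = {age_gyr * 1e3:5.0f} Myr   L_lambda(1500 AA) ~ {uv:.3e} Lsun/AA per Msun formed")
```

The steep drop in rest-frame UV output per unit formed mass between a ~10 Myr and a ~500 Myr old population is the root of the outshining problem studied in this Letter.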
This spectrum is computed using fsps <cit.>. We assume the mist stellar isochrones <cit.>, and a <cit.> stellar initial mass function (consistent with both sets of hydrodynamic simulations). We note that the choices of these parameters can subtly impact our results. For example, different isochrone models can change the assumed lifetimes of massive stars, which will impact the degree of outshining that we model here. The light from these stars[We note that HII regions around massive stars could generate nebular emission lines in the UV/optical, which can possibly impact broadband photometry <cit.>. Indeed, some work has already been done to include nebular lines in powderday <cit.>, and this emission (once computed) is simply tacked onto the stellar SEDs for the dust radiative transfer phase. This said, we aim to isolate the uncertainties incurred by the SED fits in our simulations to the star formation histories, and therefore do not include this additional physics. Future work will investigate the role of nebular emission in contaminating broad band fluxes in galaxies. ] is emitted in an isotropic manner, and can be absorbed, scattered, and re-emitted by dust in the individual cells in the galaxy. For the simba simulations, which are particle based, the dust information is smoothed from the particles onto an adaptive mesh with an octree memory structure, and the radiative transfer occurs on this grid. For simba we assume <cit.> dust extinction laws locally. For smuggle, we perform the radiative transfer on a Voronoi mesh built around the dust particles simulated with the active dust model, and compute the extinction laws explicitly in each cell following <cit.>. Here, we assume extinction efficiencies from <cit.> and <cit.> for silicates and carbonaceous grains respectively. Beyond attenuation by the diffuse dust in the galaxy (which is explicitly modeled in both simba and smuggle), powderday includes the possibility of obscuration by subresolution birth clouds following the <cit.> formalism as built into fsps. Here, the attenuation only occurs for star particles below a threshold age (we set this to t=10 Myr when included), and has a user-defined normalization to the attenuation curve, that we set as A_ V=1. We discuss this in further detail in  <ref>. § THE PHYSICAL AND LUMINOUS PROPERTIES OF Z>7 GALAXIES §.§ Stellar Masses and Star Formation Histories We begin our analysis by describing the physical and luminous properties of our model galaxies. In Figure <ref>, we plot a distribution of the stellar masses at z=7 of the 20 most massive galaxies in the simba and arepo simulations. In a (25/h)^3 volume, the most massive galaxies at z ≈ 7.5 are M_* ∼ 10^8-10^9 M_⊙. These stellar masses are, by and large, built up via a series of individual star formation episodes, though the importance of these bursts to the total stellar mass buildup (and, in general, the shape of the SFH) is dependent on the assumed stellar feedback model <cit.>. In Figure <ref>, we show the SFHs for 5 arbitrarily chosen galaxies from each simulation. The SFH is constructed from the z≈7 snapshot, using the z=7 stellar ages, correcting for mass loss processes. The simba model with a manifestly pressurized ISM model broadly has a smoother rising star formation history, with punctuated bursts superposed. In comparison, the smuggle stellar feedback model is dominated by individual bursts at these high redshifts, owing to the explicit nature of the feedback model. 
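A minimal sketch of this reconstruction, assuming a star-particle catalogue with ages and current masses at the z ≈ 7 snapshot, is shown below. The constant return fraction and the bin width are placeholder values; in practice the mass-loss correction is age dependent and should be taken from the stellar population model itself.

```python
import numpy as np

def sfh_from_particles(ages_gyr, masses_msun, bin_width_myr=10.0, return_fraction=0.4):
    """Approximate SFH from the star particles of a single snapshot.

    ages_gyr        : stellar ages at the snapshot [Gyr]
    masses_msun     : *current* particle masses [Msun]
    return_fraction : fraction of the formed mass returned to the ISM (placeholder;
                      a production analysis would make this age dependent).
    """
    formed = masses_msun / (1.0 - return_fraction)      # crude mass-loss correction
    edges = np.arange(0.0, ages_gyr.max() + 1e-3, bin_width_myr / 1e3)
    mass_per_bin, _ = np.histogram(ages_gyr, bins=edges, weights=formed)
    sfr = mass_per_bin / (bin_width_myr * 1e6)           # Msun / yr
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, sfr

# Toy particle catalogue: an early burst plus ongoing recent star formation
rng = np.random.default_rng(42)
ages = np.concatenate([rng.normal(0.45, 0.02, 2000), rng.uniform(0.0, 0.05, 3000)])
masses = np.full(ages.size, 1e4)                         # Msun per particle, illustrative
t_lookback, sfr = sfh_from_particles(ages, masses)
print(f"peak SFR = {sfr.max():.1f} Msun/yr across {len(sfr)} age bins")
```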
The two models bracket the smoother SFHs typically seen in simulations with a pressurized ISM (e.g. eagle, illustris, and simba), and the burstier SFHs seen in explicit feedback models (e.g. fire and smuggle). It is the ability to reconstruct these SFHs with reasonably high fidelity (or lack thereof) that will prove essential for the SED fitting software to accurately derive the stellar masses of galaxies in this epoch. §.§ Model SEDs In the top row of Figure <ref>, we show the model SEDs for the same arbitrarily chosen galaxies in Figure <ref> at z=7.5. As a reminder (c.f.  <ref>), the simba model actively models the dust content of the galaxies on-the-fly in the simulation but does not model the grain size distribution. As a result, the interstellar extinction curves are assumed to be those of <cit.>. In contrast, the smuggle model includes as grain size distribution as well, and therefore the radiative transfer used to generate the mock SEDs in Figure <ref> includes the spatially varying dust extinction curves. The simba model uses a Milky Way template for PAH emission (scaled for the local energy deposited), while the smuggle model uses the <cit.> model, based on the local grain populations <cit.>, resulting in fairly different mid-IR emission features. Therefore, as in modeling the SFHs, the two dust models that we include here bracket a reasonable range of modeled dust extinction and obscuration. In the bottom row of Figure <ref>, we show the same SEDs, but this time include <cit.> “birthclouds” around young stars. These birthclouds are included in a subresolution fashion such that any star particle with age t_ age < 10^7 yr experiences an extra attenuation of A_ V=1. We show observational comparisons from the <cit.>[Note that the data presented here, formally, are the beagle SED fits to the data.] and <cit.> surveys in the top and bottom, respectively (these are split in the top and bottom for clarity). Two immediate points are clear from Figure <ref>. First, the model SEDs that we present here provide a reasonable match to observations[Noting, of course, the lack of emission lines in our models. We reiterate that this is intentional so as to isolate the uncertainties modeled here to SFH modeling.], including SEDs that have relatively blue and relatively red optical spectra. This allows us to proceed in our analysis with reasonable confidence in our methods. Second, the observations exhibit a wide range of rest-frame optical colors, with (as a general statement) the <cit.> sources being redder than the <cit.> galaxies. When comparing against our models, it is immediately evident that the majority of reddening in z>7 galaxies is due to local obscuration at the sites of very young stars; diffuse dust in the ISM is insufficient to provide the required reddening to match the observed NIRCAM photometry for the reddest sources. This latter point is a net win for galaxy SED fitting: many modern SED fitting codes have the ability to include such clouds in their backwards modeling. This reduces the uncertainties incurred by the diverse shapes of diffuse ISM attenuation curves <cit.>. § RECOVERING THE STELLAR MASSES OF REDSHIFT > 7 GALAXIES VIA SED FITTING Having established the nature of early universe star formation histories (at least within the context of two reasonably plausible galaxy formation models), as well as the forward modeled SEDs from these galaxies, we now ask how accurately we can recover the stellar masses of these model galaxies via SED fitting the mock SEDs. 
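Before turning to those fits, the subresolution birth-cloud treatment referred to above (an extra A_V = 1 applied only to star particles younger than 10 Myr) can be sketched as follows. This is a toy stand-in with a single power-law attenuation curve and made-up component spectra; it is not powderday's or fsps's actual implementation of the birth-cloud formalism.

```python
import numpy as np

def birthcloud_attenuate(wave_aa, lum, av_bc=1.0, slope=-0.7):
    """Toy birth-cloud attenuation: a power-law optical depth normalized to
    A_V = av_bc at 5500 AA (the -0.7 slope is a common, assumed choice)."""
    tau_v = av_bc / 1.086                                 # A_V [mag] -> optical depth
    tau = tau_v * (wave_aa / 5500.0) ** slope
    return lum * np.exp(-tau)

# Toy two-component galaxy: a <10 Myr burst plus an older population
wave = np.linspace(1500.0, 20000.0, 500)                  # rest-frame wavelength [AA]
young = 5.0 * (wave / 1500.0) ** -2.0                     # arbitrary blue spectrum
old = 1.0 * (wave / 1500.0) ** 0.5                        # arbitrary red spectrum

no_bc = young + old                                       # diffuse ISM dust ignored here
with_bc = birthcloud_attenuate(wave, young, av_bc=1.0) + old   # extra A_V = 1, young stars only

print(f"UV (1500 AA) flux ratio with/without birth clouds: {with_bc[0] / no_bc[0]:.2f}")
```

Because the extra obscuration acts only on the youngest, bluest component, it reddens the rest-frame optical colors far more efficiently than adding the same column of diffuse dust to the whole galaxy.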
We fit these SEDs with prospector <cit.>, assuming full JWST NIRCAM and MIRI coverage of the rest-frame UV-NIR SEDs (noting this is assuming a relatively optimistic wavelength coverage). By using prospector for the backwards modeling, we are able to minimize the uncertainty incurred by many of the stellar population parameter choices as fundamentally prospector and powderday both use fsps under the hood to model stellar populations. We therefore assume the exact same model IMF, spectral libraries, and stellar isochrone models in order to obviate these potential uncertainties in our SED modeling. We additionally fix the redshift to the true redshift of the galaxy to avoid uncertainties in the redshift fit. We assume a uniform (and arbitrarily chosen) signal to noise ratio of 10 across all bands. We allow the birthcloud model in the SED fits to be flexible. We assume the non-parametric form for star formation history modeling where the star formation histories are constrained with a set of 3 model priors: * Dirichlet prior <cit.>, parameterized by concentration index α that sets the preference for all stellar mass to be formed in a single bin vs a smoother distribution of stellar mass formed over the modeled time period. We have run tests ranging α=[0.3,1.0] and found minimal impacts on our results. * Rising SFH prior developed by <cit.> to preferentially favor rising star formation histories. * PSB prior developed by <cit.> for post starburst galaxies which have sharp changes in their recent SFHs. These SFH priors are not meant to be comprehensive, but rather to span a range of reasonable priors commonly used in the literature, and to demonstrate the potential impact of these priors on the derivation of galaxy stellar masses in the early Universe. In Figure <ref>, we show the example fit for one of our model SEDs, with the observational filters overlaid (we zoom in to the NIRCAM/MIRI wavelengths). The blue shaded region denotes the 16-84% percentile confidence intervals in the posterior, while the black line shows the input powderday mock SED. The fit was performed at z≈7. The quality of fit presented in Figure <ref> is comparable to all of the fits performed for this study. In Figure <ref>, we show the model SFH for an arbitrarily chosen galaxy from both the simba and smuggle simulations with the best fit star formation history as derived from the prospector fits with each of these priors imposed. In the left column we show the results for an arbitrarily chosen galaxy from the simba simulation, while in the right we show the results from the smuggle model. The orange lines show the median best fit SFH from prospector, while the shaded region shows the inter-quartile dispersion. As is clear none of the imposed SFH priors adequately reproduces the early Universe SFH in either galaxy formation model, though generally the <cit.> model that allows for the most dramatic star SFH variations performs the best. We show the impact of these SED fits on the derived stellar masses in Figure <ref>, where we compare the SED fit M_* to the true M_* for each galaxy, with panels ordered akin to Figure <ref>. We color code the galaxies with a measure of their SFH `burstiness', here parameterized as the standard deviation of the SFR over the history of the Universe. We additionally provide summary statistics for each model via the uncertainty (quantified as the standard deviation of M_*(fit)/M_*(true)), and the bias (quantified as the median of the same ratio). 
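The burstiness measure and the two summary statistics just defined can be computed as in the sketch below; the input arrays are synthetic stand-ins rather than values drawn from the simulations, and the lognormal scatter is an arbitrary choice.

```python
import numpy as np

def mass_recovery_stats(m_true, m_fit):
    """Bias and uncertainty of SED-derived stellar masses, in the sense used in the text."""
    ratio = np.asarray(m_fit) / np.asarray(m_true)
    return {"bias": np.median(ratio), "uncertainty": np.std(ratio)}

def burstiness(sfr_history):
    """Simple burstiness proxy: standard deviation of the SFR over the galaxy's history."""
    return np.std(np.asarray(sfr_history))

# Synthetic example: 20 galaxies whose fitted masses scatter around ~60% of the truth
rng = np.random.default_rng(1)
m_true = 10 ** rng.uniform(8.0, 9.0, 20)                                 # Msun
m_fit = m_true * rng.lognormal(mean=np.log(0.6), sigma=0.5, size=20)
stats = mass_recovery_stats(m_true, m_fit)
print(f"bias = {stats['bias']:.2f}, uncertainty = {stats['uncertainty']:.2f}")
print(f"burstiness, flat SFH  : {burstiness(np.full(100, 2.0)):.2f}")
print(f"burstiness, bursty SFH: {burstiness(np.concatenate([np.zeros(90), np.full(10, 20.0)])):.2f}")
```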
Smaller uncertainties and biases closer to unity indicate higher accuracy in the derived stellar masses. Generally, no model performs particularly well, with uncertainties including both over-estimates and under-estimates. The lack of early star formation in the SED fits stems from `outshining': the most recent burst of star formation dominates the SED, and therefore has a significantly outsized impact on the resulting fit. The SFH priors that allow for the most dramatic variations in the SFH quantitatively perform the best (bottom row of Figure <ref>), though still do not fully capture the very recent star formation variability. Whether an SED fit over-predicts or under-predicts the true stellar mass depends in large part on the amount of early star formation that the SED fit predicts. In Figure <ref>, we show the corner plot for an arbitrarily chosen simba galaxy using the Dirichlet prior. There are significant uncertainties in the derived physical properties as well as covariances between them, specifically between stellar mass, SFR, and the dust attenuation parameters. These degenerate solutions make inferring the true galaxy properties difficult when the available data do not have enough constraining power. We caution that these results are particular to the bands employed here (NIRCAM+MIRI), as well as the particulars of the SFHs modeled, and are therefore intended to demonstrate the range of uncertainty rather than the specific direction of uncertainty in M_* estimates at high-z. The take-away from this analysis is not that a particular SFH prior tends to under-predict or over-predict stellar masses at high redshift, but rather that there is a substantial level of uncertainty in fitting the SEDs of galaxies whose stellar masses are dominated by early star formation that is being outshined by current star formation. § DISCUSSION §.§ General Discussion We have, thus far, seen that when observing galaxies at high redshift (here, we model z ≈ 7), SED fitting techniques have difficulty in correctly inferring the stellar masses of galaxies owing to the substantial contribution of stellar mass built up by individual star formation episodes at earlier times. This is true for a range of star formation histories, ranging from the smoother SFHs seen in traditional cosmological simulations to more bursty SFHs in explicit feedback models. The fundamental issue is that current star formation at the time of detection is able to outshine the prior stellar mass buildup, making it difficult for SED fitting software to infer its presence. This situation will be most extreme for very bursty systems, though, at least within the context of the range of models explored here, it appears to be a generic problem. Whether SFHs at high-z are bursty in the early Universe remains an open question, though recent observations and models appear to suggest at least some level of burstiness <cit.>. As these systems evolve to lower redshift, it is expected that traditional SED fitting techniques will perform substantially better as the ratio of flux from current star formation to that from the integrated stellar mass buildup decreases. Indeed, this was demonstrated explicitly by <cit.>, who performed experiments similar to those presented in this paper for z=0 model galaxies in the simba simulation, and found that SED fitting with non-parametric SFH models could accurately determine galaxy stellar masses. This said, we advise some caution when interpreting the results presented here.
The simulations here encompass two different models for stellar feedback that result in varied SFHs for galaxies drawn from a relatively small box (25/h Mpc on a side). While these models span a diverse range of SFHs for similar-mass galaxies, they are not necessarily comprehensive. It is possible that some model forms of star formation history (i.e., where most of the mass is formed at the time of observation) could result in highly accurate derivations of the stellar mass. Similarly, the modest size of our cosmological boxes excludes the most massive and rare systems. These results should not be taken to mean that a particular SFH prior will always over-predict or always under-predict stellar masses: the exact relationship between the modeled stellar masses of galaxies and the true ones depends on a wide range of choices in SED modeling <cit.>. Instead, these results are simply intended to reflect the uncertainty associated with SED modeling of early Universe galaxies. §.§ Relationship to other Studies While we have explicitly demonstrated the issues of deriving stellar masses in high-z galaxies via direct numerical simulation, this issue has been hypothesized in the observational literature in a wide range of contexts. For example, <cit.> studied the stellar populations of 11 galaxies at 9<z<11, and found that the inferred stellar ages were significantly impacted by the assumed SFH prior, and noted that multiple priors were able to fit the data equivalently. Similarly, <cit.> modeled the stellar masses of UV-selected galaxies at z ∼ 7-8, and found that the derived stellar masses can be uncertain by up to an order of magnitude owing to the outshining of older stellar populations by a current burst. <cit.> modeled the stellar ages of UV-bright z∼ 7 galaxies, and found a potential tension between the relatively young inferred stellar ages of their sample and the detection rate of higher-redshift sources by JWST. This tension can be alleviated if the redshift z ∼ 7 galaxies have older stellar components formed in earlier bursts, as in the simulations presented here. <cit.> fit spatially resolved measurements of 5<z<9 galaxies in the JWST SMACS 0723 field with bagpipes <cit.>, and found evidence both for a bursty SFH and for an older stellar population (outshined by current star formation). When taking this outshining into account, they found that the inferred stellar masses were reduced by 0.5-1 dex. Finally, <cit.> note, in their development of the dense-basis methodology for non-parametric SFH reconstruction from observed galaxy SEDs, that the ability to model older stellar populations is prior-dominated rather than likelihood-dominated, and that sharp variations in the SFH may be difficult to infer. §.§ Possible Ways Forward The primary outcome of this Letter is to demonstrate the uncertainty in the measurement of high-z stellar masses. We advocate for investment by the community in the development of methods to reduce the bias and uncertainty in these measurements. One possibility is through the development of new star formation history priors in SED fitting codes. Already we have seen in Figure <ref> that priors that favor rapid transitions in star formation history perform reasonably well. In a similar vein, including information from spectral line features that trace star formation over different timescales may help to quantify the burstiness, and inform the modeling of the SFH <cit.>. As an alternative to traditional SED fitting techniques, machine learning methods may hold promise.
<cit.> demonstrated the efficacy of using mock SEDs from large cosmological simulations as a training set for machine learning-based SED fitting software. <cit.> demonstrated that in some circumstances, these techniques can far outperform traditional SED fitting. § SUMMARY AND OUTLOOK In this paper, we have employed cosmological galaxy evolution simulations in order to investigate the ability of modern SED fitting techniques to recover the stellar masses of high-redshift galaxies. Owing to the relatively young ages of z ≳ 7 galaxies, early bursts of star formation constitute a significant fraction of the formed mass at the time of SED modeling. As a result, if the stellar light from this early star formation (cf. Figure <ref>) is outshined by late-time star formation at the time of detection, then the recovered stellar masses can be incorrect by nearly an order of magnitude (Figure <ref>). The impact of these uncertainties will decrease at lower redshifts, as the ratio of the light emitted from star formation at the time of detection to the light from the underlying stellar mass decreases. As JWST pushes the frontier of high-redshift science to increasingly early times, we encourage the development of new techniques in order to accurately derive the physical properties of these first galaxies. § ACKNOWLEDGMENTS DN expresses gratitude to Adriano Fontana, Paola Santini, and the organizers of “The Growth of Galaxies in the Early Universe - VIII”, where the idea for this paper was born out of discussions at the Bad Moos. DN additionally thanks the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, and which is where the original framework for the powderday code base was developed. DN thanks Mike Boylan-Kolchin, Chris Hayward and Chia-Yu Hu for helpful conversations, as well as Ryan Endsley for providing data from <cit.> for comparison against our models. DN and PT were supported by NASA ATP grant 80NSSC22K0716. WC is supported by the STFC AGP Grant ST/V000594/1 and the Atracción de Talento Contract no. 2020-T1/TIC-19882 granted by the Comunidad de Madrid in Spain. He also thanks the Ministerio de Ciencia e Innovación (Spain) for financial support under Project grant PID2021-122603NB-C21 and ERC: HORIZON-TMA-MSCA-SE for supporting the LACEGAL-III project with grant number 101086388. The Cosmic Dawn Center is funded by the Danish National Research Foundation (DNRF) under grant #140.
http://arxiv.org/abs/2306.02895v1
20230605140453
Evading Black-box Classifiers Without Breaking Eggs
[ "Edoardo Debenedetti", "Nicholas Carlini", "Florian Tramèr" ]
cs.CR
[ "cs.CR", "cs.LG", "stat.ML" ]
Evading Black-box Classifiers Without Breaking Eggs Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr ====================================================================================== Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out “bad” data (e.g., malware, harmful content, etc.). Queries to such systems carry a fundamentally asymmetric cost: queries detected as “bad” come at a higher cost because they trigger additional security filters, e.g., usage throttling or account suspension. Yet, we find that existing decision-based attacks issue a large number of “bad” queries, which likely renders them ineffective against security-critical systems. We then design new attacks that reduce the number of bad queries by 1.5–7.3×, but often at a significant increase in total (non-bad) queries. We thus pose it as an open problem to build black-box attacks that are more effective under realistic cost metrics[Code to reproduce our experiments: <https://github.com/ethz-privsec/realistic-adv-examples/>]. § INTRODUCTION Adversarial examples <cit.> are a security risk for machine learning (ML) models that interact with malicious actors. For example, an attacker could use adversarial examples to post undesired content to the Web while bypassing ML filtering mechanisms <cit.>. In such security-critical uses of ML, the attacker often only has black-box access to the ML model's decisions. Decision-based attacks <cit.> generate adversarial examples in black-box settings by repeatedly querying the model and observing only the output decision on perturbed inputs. The original decision-based attack of <cit.> required over 100,000 model queries to reliably find small adversarial perturbations. Subsequent work <cit.> has optimized for this metric of “total number of model queries”, and reduced it by 1–3 orders of magnitude. We argue this metric fails to reflect the true cost of querying a security-critical ML system. Such systems typically aim to detect “bad” data, such as malware, harmful content or malicious traffic. Queries with benign data (e.g., a selfie uploaded to social media) carry little cost; in contrast, bad data flagged by the system (e.g., offensive content) triggers additional security measures that carry a high cost for the attacker—up to account termination. Thus, we argue that black-box attacks should strive to be stealthy, by minimizing the number of “bad” queries that are flagged by the ML system. We find that existing attacks are not stealthy: over 50% of the queries they make are bad. We then show how to drastically reduce the number of bad queries for a class of attacks that measure distances to the model's boundary along random directions (e.g., <cit.>, <cit.> and <cit.>). Inspired by the famous “egg-dropping problem” <cit.>, we design variants of these attacks that trade off bad queries for benign ones. We evaluate our attacks on three classification tasks: ImageNet, dog vs. not-dog in ImageNet, and NSFW content <cit.>. Our stealthy attacks reduce the number of bad queries of the original attacks by 1.5–7.3×. Notably, on ImageNet, our stealthy variant of the attack outperforms and in terms of bad queries, despite the two latter attacks issuing fewer queries in total.
Yet, our most stealthy ℓ_2 attacks incur a large increase in benign queries (350–1,400×). The tradeoff is better for ℓ_∞ attacks: our stealthy variant of the attack reduces bad queries by 2.1–2.5× over and 6–17× over , while making 2.1–3.4× more benign queries than . We use the stealthy attack to evade a commercial black-box NSFW image detector, with 2.2× fewer bad queries than the original attack. Overall, our results suggest that many decision-based attacks are far from stealthy, and that stealthier attacks are often only viable if the cost of bad queries far outweighs that of good queries (especially for ℓ_2 attacks). We thus recommend that future decision-based attacks account for asymmetric query costs, to better reflect the true cost of deploying such attacks against real security-critical systems. § DECISION-BASED ATTACKS Given a classifier f: [0,1]^d →𝒴 and input (x,y), an (untargeted) adversarial example x̂ is an input close to x that is misclassified, i.e., f(x̂) ≠ y and x̂- x_p ≤ϵ for some ℓ_p norm and threshold ϵ. A decision-based attack gets oracle access to the model f. The attacker can query the model on arbitrary inputs x ∈ [0,1]^d to obtain the class label y f(x). Existing decision-based attacks aim to minimize the total number of queries made to the model f before the attack succeeds. Applications.   Decision-based attacks <cit.> were designed for black-box ML systems that only return model decisions (e.g., an ML model that filters social media content). Such attacks are also applicable when an attacker has physical access to a model guarded by hardware protections, e.g, a phone's authentication mechanism, or a self-driving system. Decision-based attacks are also commonly used to evaluate the robustness of white-box models, when computing gradients is hard <cit.>. In this paper we are interested in the first two scenarios, where decision-based attacks are used against black-box ML security systems. In particular, we assume that these security systems monitor and log user queries, and can throttle or disable an attacker's access to the system. § ASYMMETRIC QUERY COSTS Existing decision-based attacks optimize for the total number of model queries. This is reasonable if the attacker's primary cost is incurred by queries to the model, and this cost is uniform across queries (e.g., if the attacker has to pay a fixed service fee for each query). But we argue that query costs are rarely uniform in practical security-critical systems. This is because in such systems, the goal of a ML model is usually to detect “bad” data (e.g., malware, harmful content, malicious traffic, etc). The costs incurred by querying such a model are highly asymmetric. Querying the model with “good” data is expected, comes with no additional overhead, and is thus cheap. Whereas querying the model with “bad” data is unexpected, triggers additional security measures and filters, and thus places a much higher cost on the attacker. As an example, consider an attacker who tries to upload inappropriate content to a social media website. Every uploaded image passes through a ML model that flags inappropriate content. Benign content is very rarely flagged and thus carries little cost. But if a query is flagged as inappropriate, the system blocks the contents and may take further costly actions (e.g., account throttling or suspension). We now formalize this asymmetry. Assume one or more of the classifier's output classes 𝒴 are “bad”, denoted as 𝒴_bad. 
The attacker is given an input (x,y) where y ∈𝒴_bad (e.g., x is a NSFW image) and wants to find an adversarial example x̂ where f(x̂) ∉𝒴_bad. All queries to the model f carry a base cost c_0, due to data processing, network bandwidth, or disk storage, or throttling if the attacker makes too many queries. This base cost is typically very low: e.g., Facebook users can upload 1,000 images at once in an album <cit.>. However, for queries x that are flagged as inappropriate (i.e., f(x) ∈𝒴_bad), the cost c_bad incurred by the attacker is much larger. Their account could be suspended or banned, their IP blacklisted, etc. While these restrictions can be circumvented (e.g., by buying multiple accounts <cit.>), this places a significantly higher cost on queries flagged as bad, i.e., c_bad≫ c_0. We thus argue that decision-based attacks should strive to minimize the following cost: minimize Q_total· c_0 + Q_bad· c_bad , where Q_bad is the number of bad model queries (f(x) ∈𝒴_bad), Q_total is the total number of queries—including bad ones—and c_bad≫ c_0. We call attacks that minimize this asymmetric cost stealthy. Existing attacks are not stealthy. No existing black-box attack considers such asymmetric query costs. As a result, these attacks issue a large number of bad queries. We illustrate this with an untargeted attack on ImageNet.[ImageNet is not a security-critical task, and thus most content is not “bad”. We use ImageNet here because prior attacks were designed to work well on it. To mimic a security-critical evasion attempt, we set the class to be evaded as “bad” and all other classes as “good”. That is, for an input (x, y) we set 𝒴_bad = {y} and the attacker's goal is to find an adversarial example x̂ such that f(x̂) ≠ y, while avoiding making queries labeled as y.] In <Ref>, we show the number of total queries Q_total and bad queries Q_bad made by various ℓ_2 and ℓ_∞ decision-based attacks on a ResNet-50 classifier. In all cases, half or more of the attacker's queries are “bad” (i.e., they get the class label that was to be evaded). Despite differences in the fraction of bad queries for each attack, attacks that make fewer total queries also make fewer bad queries. But this begs the question of whether we could design attacks that issue far fewer bad queries in total. The remainder of this paper answers this question. Selecting the values of c_0 and c_bad. The true cost of a query (whether good or bad) may be hard to estimate, and can vary between applications. As a result, we recommend that black-box attack evaluations report both the value of Q_total and Q_bad, so that the attack cost can be calculated for any domain-specific values of c_0 and c_bad. In this paper, we often make the simplifying assumption that c_0=0, c_bad=1, a special case that approximates the attack cost when c_bad≫ c_0. In this special case, the attacker solely aims to minimize bad queries, possibly at the expense of a large increase in total queries. We will however also consider less extreme trade-offs between these two values. § DESIGNING STEALTHY DECISION-BASED ATTACKS To begin, we explore the design space of stealthy decision-based attacks, which minimize the total number of bad queries made to the model. One possibility is simply to design a better decision-based attack, that makes fewer total queries. As we see from <Ref>, this is how prior work has implicitly minimized asymmetric attack costs so far. We take a different approach, and design attacks that explicitly trade-off bad queries for good ones. 
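The cost metric above is straightforward to evaluate on a logged query trace. The sketch below uses made-up numbers and is only meant to make explicit that, when c_bad ≫ c_0, the total cost reduces to counting bad queries.

```python
def attack_cost(query_labels, c0=0.0, c_bad=1.0):
    """Cost of a decision-based attack under asymmetric query pricing.

    query_labels: iterable of booleans, True where the query was flagged as "bad"
                  (i.e. the classifier returned a label in Y_bad).
    c0          : base cost charged for every query.
    c_bad       : additional cost charged for every flagged query (c_bad >> c0).
    """
    labels = list(query_labels)
    q_total = len(labels)
    q_bad = sum(labels)
    return q_total * c0 + q_bad * c_bad

# Example trace: 1,000 queries, of which 520 were flagged as bad
trace = [True] * 520 + [False] * 480

print(attack_cost(trace, c0=0.0, c_bad=1.0))    # pure bad-query count: 520.0
print(attack_cost(trace, c0=1.0, c_bad=10.0))   # milder asymmetry: 1000 + 5200 = 6200.0
```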
§.§ How do Decision-based Attacks Work? Most decision-based attacks follow the same blueprint <cit.>. For an input (x, y), the attacks first pick an adversarial direction θ∈ [0, 1]^d and find the ℓ_p distance to the model's decision boundary from x along the direction θ. They then iteratively perturb θ to minimize the boundary distance along the new direction. In each iteration, the attacks compute an update direction δ and step-size α and make an update step θ' θ + α·δ, and then compute the new boundary distance from x along θ'. Decision-based attacks use two fundamental subroutines: * (x, θ, p) →^+: this routine computes the distance (in ℓ_p norm) from x to the decision boundary along the direction θ. Most attacks do this by performing a binary search between x and a misclassified point x̂ in the direction θ, up to some numerical tolerance η. * (x, θ', , y) →{-1, 1}: this routine uses a single query to check if the point at distance in direction θ' is misclassified, i.e., it returns 1 if f(x + ·θ'θ'_p) ≠ y. Different attacks combine these two subroutines in different ways, as described below. As we will see, how an attack balances these two routines largely impacts how stealthy the attack can be made. An overview of existing attacks. We briefly review how different attacks make use of and routines. A more detailed explanation is in <Ref>.  <cit.> is an ℓ_2 attack that only calls the routine. It starts with a direction θ from the input x to a misclassified point x̂. In each iteration, it samples an update direction δ and checks whether it reduces the boundary distance (with one call to ). The attack estimates the new distance by calling (x, θ+r_i, , y) for n random directions r_i. is a greedy ℓ_∞ attack. It starts with a signed adversarial direction θ=[1, …, 1]. In each iteration, it flips the sign of some entries in θ, and calls to check if this still yields an adversarial point at distance . If so, it calls to compute the improved boundary distance. and find the current adversarial distance by calling . Then, they call (x, θ+r_i, , y) for n random directions r_i to approximate a gradient δ (n is an attack hyper-parameter). Both attacks compute the step-size α with a geometric search, which can be implemented by calling once (for ), or m≥ 1 times (for ). is the same as , with a more precise gradient estimation. It estimates a gradient by computing distances from x to the decision boundary along directions θ+r_i for n random vectors r_i. Compared to and , thus issues n calls to instead of to obtain a more precise estimate of the gradient at the expense of more model queries. <Ref> in the appendix summarizes the calls made to and by each attack. In <Ref>, we show how many bad queries and total queries are used for both routines in an untargeted attack for a standard ResNet-50 on ImageNet (where we view the class to be evaded as “bad”). §.§ Maximizing Information per Bad Query To design stealthy decision-based attacks, we first introduce the entropy-per-bad-query metric. This is the information (measured in bits) that the attacker learns for every bad query made to the model. Consider an attack that calls (x, θ+r_i, , y) for many random r_i. , and do this to estimate the shape of the decision boundary. For a locally linear boundary, we expect 50% of such queries to be bad. The attacker thus learns two bits of information per bad query. To increase the entropy-per-bad-query, we would need to sample the r_i so that fewer queries are bad. 
But this requires a prior on the boundary's geometry, which is what these queries aim to learn. It thus seems hard to make this procedure stealthier. For calls to , a standard binary search requires log1η queries (half of which are bad) to estimate the boundary distance up to tolerance η. A call to thus gives log1η bits of information. So the attacker also learns an average of two bits per bad query. However, here there is a simple way to trade-off bad queries for good ones, which lets the attacker learn the same log1η bits of information with as little as one bad query. All that is required is a tall building, and some eggs! Measuring distances with one bad query. In the famous “egg-dropping problem”, there is a building of N floors, and you need to find the highest floor n ∈ [1, N] from which an egg can be dropped without breaking. The egg breaks if and only if dropped from above some unknown floor n. In the simplest version of the problem, you have a single egg and must compute the value of n. The solution is to drop the egg from each floor consecutively starting from the first, until it breaks. We note that finding the decision boundary between x and x̂, while minimizing bad queries, is exactly the egg-dropping problem! Assuming x-x̂_p = 1, a search tolerance of η yields a “building” of N=1η “floors” of length η from x̂ to x. The first n floors (up to the boundary) are good queries, i.e., no broken egg. All floors above n are bad queries on the wrong boundary side, i.e., a broken egg. While a binary search minimizes the total number of queries for finding the boundary, a line-search—which moves from x̂ to x until the boundary is hit—is optimal for minimizing bad queries. Many attacks use a small search tolerance η (on the order of 10^-3), so a full line-search incurs a large cost of good queries (1η). We thus consider finer-grained methods to trade-off bad and good queries. r0.4 [width=0.38]figures/fig_linesearch Line-search strategies to find the boundary (in gray) between a benign input (green) and the original bad input (brown). Red crosses are bad queries. Trading good and bad queries. In the general version of the egg-dropping problem, you are given k≥ 1 eggs to find the safe height n with a minimal number of egg drops. Asymptotically, you need Θ(N^1k) egg drops given k eggs, as we now show for k=2 eggs: first, divide the N floors into √(N) groups of √(N) floors and do a coarse-grained line-search by dropping from floors 1, 1+√(N), 1+2√(N), … until the first egg breaks. You now know the solution is in the previous group of √(N) floors, so you do a fine-grained line-search in this group one floor at a time. This requires at most 2√(N) egg drops. For our boundary finding problem, we can thus divide the interval between x and x̂ into 1η intervals, and do two line searches with step-sizes respectively √(η) and η. This will incur two bad queries, and 2√(1η) total queries, compared to one bad query and 1η total queries as above. A further optimization: early stopping. Greedy attacks such as repeatedly check whether a new search direction θ' θ + δ improves upon the current adversarial distance , and only if so issue a call to to compute the new distance ' <. For these attacks to progress, it may not be necessary to compute ' exactly. Instead, knowing that ' ≪ may be sufficient to know that the new direction θ' is “good” and the attack can proceed with it. We could thus stop a line-search early when ' ≤γ·—for some γ < 1. 
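A sketch of the one- and two-stage searches described above is given below. The function names, the parameterization of the segment between the bad input x and the adversarial point x̂ by t ∈ [0, 1], and the toy oracle are our own illustrative choices, not the attacks' actual implementations.

```python
import math

def line_search(is_adversarial, lo, hi, step):
    """Walk from hi (a known adversarial position) towards lo until the first flagged
    ("bad") query; returns (last adversarial position, good queries, bad queries)."""
    good, bad, last_safe = 0, 0, hi
    t = hi - step
    while t >= lo - 1e-12:
        if is_adversarial(t):
            good, last_safe = good + 1, t
        else:
            bad += 1                  # boundary lies between t and last_safe: stop here
            break
        t -= step
    return last_safe, good, bad

def two_stage_search(is_adversarial, eta):
    """Coarse pass with step ~sqrt(eta), then a fine pass with step eta inside the
    last safe coarse interval: at most 2 bad queries, roughly 2/sqrt(1/eta)... good ones."""
    coarse = math.sqrt(eta)
    t1, g1, b1 = line_search(is_adversarial, 0.0, 1.0, coarse)
    t2, g2, b2 = line_search(is_adversarial, max(0.0, t1 - coarse), t1, eta)
    return t2, g1 + g2, b1 + b2

def oracle(t):
    """Stand-in for a model query: positions t >= 0.3774 of the way from the bad
    input x to the adversarial point x-hat count as misclassified ("good")."""
    return t >= 0.3774

eta = 1e-4
_, good, bad = line_search(oracle, 0.0, 1.0, eta)
print(f"full line search : {good} good queries, {bad} bad query")
_, good, bad = two_stage_search(oracle, eta)
print(f"two-stage search : {good} good queries, {bad} bad queries")
```

An early-stopping variant would additionally terminate the walk once the tested distance is already sufficiently smaller than the current best one, accepting a cruder estimate in exchange for fewer, and sometimes zero, bad queries.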
In many cases, this lets us call while incurring no bad query at all, at the expense of a less accurate distance computation. §.§ Stealthy Variants of Decision-based Attacks We now design stealthy variants of prior decision-based attacks, by applying the toolkit of stealthy search procedures outlined above, and illustrated in <Ref>. Stealthy distance computations. The most obvious way to make existing attacks more stealthy is to instantiate every call to with a (k-stage) line-search instead of a binary search. In contrast, calls to on arbitrary directions θ' are hard to make more stealthy. This change applies to the boundary distance computation in , to the gradient-estimation queries in , and to the step-size searches and boundary projections in , and . Since only calls , it cannot easily be made more stealthy. Stealthy gradients. Attacks like , and use most of their queries for estimating gradients. The main difference is that instead of calling , uses more expensive calls to to get a better estimation. Prior work shows that this tradeoff is suboptimal in terms of total queries. However, the extra precision comes for free when we consider the cost in bad queries! Recall that yields two bits of information per bad query, while with a line-search yields log1η bits. Thus, 's gradient estimator is strictly better if we consider bad queries. In <Ref> we formally prove that (under mild conditions) 's gradient estimator gives quadratically better convergence rates (in terms of bad queries) than the gradient estimators of and . We can leverage this insight to design stealthy “hybrid” attacks that combine 's stealthy gradient estimator with efficient components of other, newer attacks. Stealthy hyper-parameters. Prior attacks were designed with the goal of minimizing the total number of queries. As a result, their hyper-parameters were also tuned for this metric. When considering our asymmetric query cost, existing hyper-parameters might thus no longer be optimal. Our attacks. We combine the above principles to design stealthy variants of existing attacks. * : As in the original attack, in each iteration we first greedily check if a new search direction improves the boundary distance and then replace the binary search for the new distance by a (k-stage) line-search, optionally with early-stopping (see <Ref>). * : The attack is perfectly amenable to stealth as it only calls . We replace the original binary search by a (k-stage) line-search in each of these distance computations. When computing distances in random directions for estimating gradients, we need to select a safe starting point for the line search. If the current boundary distance is , we start the search at the point at distance (1+γ)· along θ', for γ > 0. If this point is not misclassified (i.e., the query is bad), we return (1+2·γ)· as an approximate distance. If the point is misclassified (i.e., safe), we perform a line-search with tolerance 1η. We use γ=1% in all experiments. * : In each iteration, we use line searches to compute the current boundary distance, and the update step-size. We replace the original coarse-grained gradient estimator (which calls n times) with 's estimator (with n20 calls). * : We make the same changes as for , except that we retain the original coarse-grained gradient estimator (otherwise this would be the same as ). To better balance the number of bad queries used in different attack phases, we reduce the number n of queries used to estimate gradients. 
This change is sub-optimal if we care about the attack's total number of queries, but is beneficial in terms of bad queries as the attack now spends a larger fraction of work on queries that can be made more stealthy. § EVALUATION We evaluate our stealthy decision-based attacks on a variety of benchmarks, in order to show that our attacks can drastically reduce the number of bad model queries compared to the original attacks. §.§ Setup Datasets and models.   We consider four benchmarks: * We begin with standard untargeted attacks on ImageNet against a ResNet-50 classifier. We mark a query as bad if it is classified into the class of the original input. * To capture more realistic security-critical scenarios, we consider a variety of binary classification tasks that aim to separate “good” from “bad” data. As a toy benchmark, we use a binary labeling of ImageNet (hereafter ImageNet-Dogs), with all dog breeds grouped as the “bad” class. The classifier is also a ResNet-50, with a binary head finetuned over the ImageNet training set. * We then consider a NSFW classification task with a CLIP classifier that was used to sanitize the LAION dataset <cit.>. To avoid collecting a new NSFW dataset, we use a subset of ImageNet (hereafter “ImageNet-NSFW”) that this classifier labels as NSFW with high confidence.[We do not collect a new NSFW dataset due to the ethical hazards that arise from curating such sensitive data. By using a subset of ImageNet—the most popular image dataset in machine learning research—we mitigate, but do not completely eliminate <cit.>, the potential harms of constructing a NSFW dataset.] * Finally, we evaluate a black-box commercial NSFW detector, using our ImageNet-NSFW dataset. The detector returns a score from 1 to 5, denoting that the input is “highly unlikely” to “highly likely” to contain adult content or nudity. We consider a query to be bad if it gets a score of 4 or 5. Attacks.   We evaluate , , and for ℓ_2 attacks, and and for ℓ_∞ attacks. We adapt each attack's official code to enable counting of bad queries. We use each attack's default hyper-parameters, except for some optimizations by <cit.> (see Appendix <ref>). We further evaluate our stealthy versions of , , and . For , , and we split search intervals into 10,000 sub-intervals, and perform either a full line-search or a two-stage line-search with 100 coarse-grained and fine-grained steps. For efficiency sake, we perform two-stage line-searches in all our experiments and use the results to infer the number of queries incurred by a full line-search. For , we further trade-off the query budgets for computing gradients and step-sizes by reducing the attack's default number of gradient queries n by a factor k∈{1.5, 2.0, 2.5}. For , we replace each binary search with a line-search of step-size η=10^-3 (the default binary-search tolerance for ) and implement early-stopping with γ=0.9. Metrics. As in prior work, we report the median ℓ_p norm of adversarial examples after N attack queries (except we only count bad queries). For each task, we run the attacks on 500 samples from the corresponding test set (for ImageNet-Dogs, we only attack images of dogs). For the attacks on the commercial NSFW detector, use use 200 samples from ImageNet-NSFW. Our motivation for counting bad queries is to assess whether black-box attacks are viable for attacking real security systems. We thus focus on a “low” query regime: each attack can make at most 1,000 bad queries per sample. 
Prior work has considered much larger query budgets, which we disregard here as such budgets are likely not viable against systems that implement any query monitoring. §.§ Results The main results of our evaluation appear in <Ref>. We also provide a full ablation over different attack variants and optimizations in <Ref>. For all benchmarks, our stealthy attacks (with 1-stage line searches) issue significantly fewer bad queries than the corresponding original attack. ℓ_2 attacks. Remarkably, while is one of the earliest and least efficient decision-based attacks, our variant is stealthier than the newer and attacks. To reach a median ℓ_2 perturbation of 10 on ImageNet, needs 686 bad queries, a saving of 7.3× over the original , and of 1.4× compared to . Our hybrid attack is the stealthiest attack overall. On all three benchmarks, it requires 1.47–1.82× fewer bad queries than to reach a median perturbation of 10. This shows that we can even improve the stealthiness of attacks that do not make use of many distance queries. Our techniques are thus likely also applicable to other decision-based attacks that follow 's blueprint. <Ref> in the appendix shows the total number of queries made by our stealthy attacks. As expected, our stealthy attacks issue many more queries in total than attacks that optimize for this quantity. To reach a median perturbation of ϵ=10, our attacks make 350–1420× more total queries than the original non-stealthy attack. This large increase is only warranted if benign queries are significantly cheaper than bad queries. This may be the case in some applications, e.g., uploading 1,000 benign images is permitted on platforms like Facebook <cit.>, and thus likely less suspicious than a single bad query. However, for less extreme asymmetries in query costs (e.g., c_bad = 10 · c_0), a less strict tradeoff between bad and good queries is warranted. We will explore this in <Ref>. In <Ref>, we further show the total of our attacks for various configurations of the query costs c_0 and c_bad. A different attack variant is optimal depending on the cost overhead of bad queries. ℓ_∞ attacks. The cost-effectiveness of stealthy ℓ_∞ attacks is better. Our attack reduces bad queries compared to the original , which is itself more efficient than . To reach a median norm of ϵ=8255, needs 103–181 bad queries for the three benchmarks, 2.1–2.4× less than , and 7–17× less than . As issues only 2.1–3.4× more queries than (see <Ref>), it is clearly cost effective if c_bad≫ c_0. §.§ Trading off Good and Bad Queries Our stealthy attacks in <Ref> use full line-searches, which use a single bad query (and many good queries). In <Ref> and <Ref> we consider alternative tradeoffs. We provide a full ablation over different attack variants and optimizations in <Ref>. For ℓ_∞ attacks, with a two-stage line-search and early stopping provides a nice tradeoff: for a median perturbation of ϵ=8255, the attack makes 1.37× more bad queries than a full line-search, but 3.7× fewer total queries. This attack is actually strictly better than the original (thanks to early stopping): our attack makes 1.77× fewer bad queries, and 8% fewer good queries! For ℓ_2 attacks, with a two-stage line-search shows a nice tradeoff over the original : for a median perturbation of ϵ=10, our attack makes 4× fewer bad queries, at the expense of 5× more good queries (see <Ref>). Unfortunately, none of our stealthy attacks with two-stage line searches beat the original in terms of bad queries. 
Thus, attaining state-of-the-art stealthiness with our techniques does appear to come at the expense of a large overhead in good queries. As a result, improving the total cost of existing ℓ_2 decision-based attacks may be hard, and thus attacking real security-critical systems with these attacks may simply not be cost-effective. §.§ Attacking a Black-box NSFW Detector We now turn to a much more realistic attack scenario where we target a commercial black-box detector of NSFW images. The few attacks that have been evaluated against commercial systems (e.g., the , or Qeba <cit.>) used a limited number of attack samples (3 to 5) due to the high query cost—and thus monetary cost—of evaluating these attacks against a commercial API. To enable a more rigorous evaluation, we focus here on —the only attack we evaluated that reliably finds small adversarial perturbations on a limited query budget (500 queries). Since real black-box systems expect 8-bit RGB images as input, we set 's threshold η for a binary search or line-search to 1255, the smallest distance between two distinct RGB images. This is much coarser than the default threshold of η=10^-3, and the attack thus finds larger perturbations. Other decision-based attacks face similar quantization issues when applied to real black-box systems. We evaluate and on 200 images from ImageNet-NSFW. <Ref> shows the results. Evading this commercial detector is much harder than the prior models we attacked, presumably due to the discretization constraint described above. Our attack outperforms by 2.2× (we reach a median distance of 32255 with 79 bad queries, while needs 172 bad queries). These perturbations are noticeable, but preserve the images' NSFW nature. § RELATED WORK Threat models for ML evasion attacks.   Modeling realistic ML evasion attacks is challenging <cit.>. Our work contributes to this goal by introducing the more realistic asymmetric query cost metric, and evaluating the feasibility of stealthy decision-based attacks. Prior work has attacked real security-critical ML systems such as malware detectors <cit.>, copyright systems <cit.>, or online content blockers <cit.>. These works either assume white-box model access, or use black-box transfer attacks. The latter are perfectly stealthy (they make no bad queries) but have limited success rates. Detecting decision-based attacks.   <cit.> and <cit.> detect decision-based attacks by monitoring sequences of user queries. We aim to evade a more fundamental form of monitoring that any security-critical system likely uses: flagging and banning users who issue many “bad” queries. Stealthy score-based attacks.   Score-based attacks, which query a model's confidence scores <cit.>, also issue many bad queries. Designing stealthy score-based attacks is similar to the problem of “safe black-box optimization” in reinforcement learning <cit.>. § CONCLUSION Our paper initiates the study of stealthy decision-based attacks, which minimize costly bad queries that are flagged by a ML system. Our “first-order” exploration of the design space for stealthy attacks shows how to equip existing attacks with stealthy search procedures, at a cost of a larger number of benign queries. Decision-based attacks may be made even stealthier by designing them from scratch with stealth as a primary criterion. We leave this is an open problem we hope future work can address. We hope our paper will pave the way towards more refined analyses of the cost of evasion attacks against real ML systems. 
In particular, our paper suggests a new possible defense metric for defenses designed to resist black-box attacks: the number of bad queries before an attack is effective. plainnat § DETAILS ON EXPERIMENTAL SETUP §.§ Datasets and Models ImageNet. We run the attacks against a ResNet-50 <cit.> classifier trained on ImageNet <cit.>. We use the model weights provided as part of the torchvision library <cit.>, which reach 76.13% validation accuracy. When running the attacks, we use ImageNet's validation set and we skip the samples that are already classified incorrectly by the model. ImageNet-Dogs. We create a binary classification task from ImageNet by considering as “bad” the images belonging to classes of dog breeds (i.e., the classes with indices included in the range [151, 268]) and as “good” the images belonging to all the other classes. We create training and validation sets in this way from the respective splits of ImageNet. Then, we take the ResNet-50 provided by torchvision, change the last linear layer to a layer with one output, and fine-tune this model for one epoch on the training set, using Adam <cit.> with learning rate 10^-3. Training the model takes around 1 hour using an Nvidia RTX A6000. The final model has 96.96% accuracy, 87.14% precision, and 87.10% recall on the validation set. Since we are interested in creating adversarial examples for the “bad” images, we only attack the images in the validation set that are correctly classified as “bad” (i.e., as dogs) by the fine-tuned model. ImageNet-NSFW. As mentioned in <ref>, we also evaluate the attacks on the NSFW content detector shared by <cit.>. This classifier takes as input CLIP <cit.> embeddings of images and outputs a confidence in [0, 1]. We use the CLIP implementation provided by the HuggingFace Transformers library <cit.> to extract the CLIP embeddings from the input images. To create an evaluation set of NSFW images, we select the subset of 1,000 images in the ImageNet validation set that the NSFW content detector classifies as NSFW with highest confidence (it is well known that ImageNet contains NSFW content <cit.>). When attacking the model, we consider an attack to be successful if the confidence of the detector drops below 0.5. §.§ Attack Hyper-parameters . We use the official implementation[<https://github.com/bethgelab/foolbox/blob/1c55ee/foolbox/attacks/boundary_attack.py>], which is part of from Foolbox <cit.>, with default hyper-parameters on all tasks. . We use the official implementation.[<https://github.com/Jianbo-Lab/HSJA/blob/daecd5/hsja.py>] Following <cit.>, we set = 10,000 (this hyper-parameter is used to determine the binary search threshold), as this gives better results. and . We use the official implementation.[<https://github.com/uclaml/RayS/blob/29bc17/RayS.py>] The attack has no hyper-parameters. The default binary search tolerance is η=10^-3. For the line-search in we use the same step-size of 10^-3 and perform either a full line-search, or a two-stage search by first dividing the N search intervals into coarse groups of size √(N). For attacking the commercial black-box NSFW classifier in <Ref>, we set the binary search tolerance and line-search step-size to η=1255 and perform a full line-search. For the early-stopping optimization, we end a line search if ' < 0.9 ·. In <Ref>, <Ref> and <Ref>, the attack is the version with a full line-search and early-stopping. and . 
We use the official implementation.[<https://github.com/cmhcbb/attackbox/blob/65a82f/attack/OPT_attack.py>] Following <cit.>, we set β = 10^-2 (this hyper-parameter is used to determine the binary search threshold). For , we do line-searches for gradient estimation in the interval [0.99·, 1.01·], where is the current adversarial distance. For computing step-sizes, we do a line-search in the interval [0.99·, ], since we only care about the new distance if it improves upon the current one. We split this interval into N=10,000 sub-intervals and perform a 2-stage line-search with 100 coarse-grained steps and 100 fine-grained steps. For efficiency sake, we batch the line-search by calling the model on two batches of size 100, one for all coarse-grained steps, and one for all fine-grained steps. To count the number of bad queries and total queries, we assume that the line-search queries were performed one-by-one. If the first query in a line-search is not safe (i.e., the boundary distance is larger than 1.01·, we approximate the distance by ' ≈ 2·. In <Ref> and <Ref>, the attack is the version with a full line-search. and . We use the official implementation.[<https://github.com/cmhcbb/attackbox/blob/65a82f/attack/Sign_OPT.py>] Following <cit.>, we set β = 10^-2 (this hyper-parameter is used to determine the binary search threshold). For , we do the same line-search procedure as for computing step-sizes. We change the default number of gradient estimation queries per iteration from n=200 to n/k for k ∈{1.5, 2, 2.5, 3}, i.e., n ∈{67, 80, 100, 133}. In <Ref> and <Ref>, the attack uses a full line-search, and k=2.5. §.§ Compute and code We run every attack on one Nvidia RTX 3090, and the time to run the attacks on 500 samples ranges from twelve hours, for the attacks ran with binary search, to more than three days for the slowest attacks (e.g. OPT) ran with line search. We wrap all the attack implementations in a common set-up for which we use PyTorch <cit.>. The code can be found at the following URL: <https://github.com/ethz-privsec/realistic-adv-examples/>. The checkpoints of the model we trained, the NSFW classifier we ported from Keras to PyTorch, and the outputs of this model on the ImageNet train and validation datasets can be found at the following URL: <https://github.com/ethz-privsec/realistic-adv-examples/releases/tag/v0.1>. § DETAILS ON DECISION-BASED ATTACKS In this section, we provide some additional detail on how existing decision-based attacks work, and how they spend their bad queries. As explained in <Ref>, existing decision-based attacks optimize over some adversarial direction θ∈ [0, 1]^d by repeatedly: (1) computing the boundary distance from x along θ; (2) computing an update direction δ; and (3) picking a step-size α, in order to perform an update step θθ + α·δ. We can thus split each attack iteration into three phases: * : given the original input (x, y) and a search direction θ, this phase finds a point x_b that lies on the model's decision boundary along the line x + α·θθ., and returns the ℓ_p distance between x and x_b, i.e., x - _p. * : This phase searches for an update direction δ to be applied to the search direction θ. * : This phase selects a step-size α for an update to the search direction θ. We now describe how different attacks instantiate these generic phases and how they use the and routines in each phase. . The original decision-based attack of <cit.> is a greedy attack. 
In contrast to other attacks, it only performs a heuristic, approximate projection to the model's boundary in each step. : Given a misclassified point x_b along the direction θ (originally a natural sample from a different class than x), the attack samples random points around x_b and checks on which side of the boundary they fall. From this, the attack estimates a step-size to project x_b onto the boundary, and then computes the distance between x_b and x. This requires n calls to . : The attack is greedy and simply picks a small update direction δ at random. : The attack checks whether the distance to the boundary along the new direction θ + δ is smaller than the current distance, . If not, the update is discarded. Note that this test can be performed with a single query to the model, with a call to . . This is a greedy attack similar to , tailored to the ℓ_∞ norm. Its search direction θ∈{-1, +1}^d is always a signed vector. : find the current distance to the decision boundary using a binary search, by calling . : The attack picks a new search direction by flipping the signs of a all pixels in a rectangular region of θ. : The attack greedily checks whether the new direction improves the current distance to the boundary, by issuing a call to . If the distance is not reduced, the update is discarded. . This attack first proposed a gradient-estimation approach to decision-based attacks. : The attack starts by measuring the distance to the boundary, with a call to . Specifically, it performs a binary search between x and some point of a different class along the direction θ. : The attack estimates the gradient of the distance to the boundary along the search direction θ. To this end, it samples random directions r_1, …, r_n and computes the distance to the boundary along θ + r_i, denoted as d_i ∈^+, for each. The estimated gradient is then: δ1/n∑_i=1^n ( - d_i)· r_i . The attack uses n calls to to compute the boundary distance along each random direction. : computes the step-size α with a geometric search: starting from a small step size, double it as long as this decreases the distance to the decision boundary along the new direction θ + α·δ. Thus, each step of the geometric search involves a call to and . These attacks are very similar, and improve over by using a more query-efficient gradient-estimation procedure. : In , this step is viewed as a boundary “projection” step which returns the point x_b on the boundary, while computes the distance from x to the boundary along θ. But the two views, and their implementations, are equivalent. Both attacks use a binary-search to find a point x_b on the boundary, as in , with a call to . : Both attacks also sample n random search directions r_1, …, r_n. But instead of computing the distance to the boundary along each updated direction as in , and simply check whether each update decreases the current distance to the decision boundary or not. The update direction is computed as δ1/n∑_i=1^n z_i · r_i , where z_i ∈{-1, +1} is one if and only if the point at distance along the direction θ+r_i is misclassified. differs slightly in that the random directions r_i are applied to the current point on the boundary , and we check whether +r_i is misclassified or not. Compared to , these attacks thus only issue n calls to (instead of n calls to ), but the gradient estimate they compute has higher variance. : uses the exact same geometric step-size search as . 
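For concreteness, the two update-direction estimators above fit in a few lines; the sketch below is illustrative only (tensor shapes and names are not taken from the released implementations), with rs holding the n random directions as rows:

import torch

def update_from_distances(d_current, d_perturbed, rs):
    # delta = (1/n) * sum_i (d_hat - d_i) * r_i : directions that shrank the
    # boundary distance receive a positive weight (the distance-difference estimator above).
    weights = d_current - d_perturbed            # shape (n,)
    return (weights[:, None] * rs).mean(dim=0)

def update_from_signs(evades, rs):
    # delta = (1/n) * sum_i z_i * r_i, with z_i = +1 iff the probe at the current
    # distance along theta + r_i is misclassified (the sign-based estimator above).
    z = evades.float() * 2.0 - 1.0               # bool -> {-1, +1}
    return (z[:, None] * rs).mean(dim=0)

The first variant consumes one boundary-distance computation per direction, the second a single query per direction, which is precisely the difference in query usage discussed above.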
is slightly different from the generic algorithm described above, in that it applies the update δ to the current point on the boundary . The attack starts from a large step size and halves it until + α·δ is misclassified. This amounts to finding the distance to the boundary from along the direction δ, albeit with a geometric backtracking search instead of a binary search. Summary of attacks. <Ref> summarizes how different attacks implement the three generic attack phases , and in each attack iteration. We distinguish here between the two routines called by the attacks, and defined in <Ref>. § CONVERGENCE RATES OF STEALTHY ATTACKS Prior work has analyzed the convergence rate of SGD with the zero-order gradient estimation schemes used in and <cit.>. We can use these results to prove that the gradient estimation of our attack is asymptotically more efficient (in terms of bad queries) than the non-stealthy gradient estimation used by and . Let g(θ) be the distance to the boundary along the direction θ, starting from some example x (this is the function that and explicitly minimize). Suppose we optimize g with black-box gradient descent, using the following two gradient estimators: * : 1/Q∑_i=1^Q(g(θ + r_i) - g(θ)) · r_i for Q random Gaussian directions r_i. * : 1/Q'∑_i=1^Q'(g(θ + r_i) - g(θ)) · r_i for Q' random Gaussian directions r_i. We can then show the following results: Assume g has gradients that are L-Lipshitz and bounded by C (assume L and C are constants for simplicity). Let d be the data dimensionality. Optimizing g with T iterations of gradient descent, using 's gradient estimator, yields a convergence rate of [∇ g(x)_2^2] = 𝒪(d/T), with 𝒪(T^2/d) bad queries. Assume g is L-Lipschitz and has gradients bounded by C (assume L and C are constants for simplicity). Let d be the data dimensionality. Optimizing g with T iterations of gradient descent, using 's gradient estimator, yields a convergence rate of [∇ g(x)_2] = 𝒪(√(d / T)), with 𝒪(T^2d) bad queries. The convergence rate of is thus at least as good as that of , [Note that <cit.> provide a bound on the gradient norm, while <cit.> provide a bound on the squared gradient norm. Applying Jensen's inequality to the result of <Ref>, we know that for we have [∇ g(x)_2^2] ≥ ([∇ g(x)_2])^2 = 𝒪(d / T).] but 's gradient estimator with line searches requires a factor d^2 fewer bad queries. The same asymptotic result as for holds for the similar estimator used by . <cit.> show that 's gradient estimator yields a convergence rate of [∇ g(x)_2^2] = 𝒪(d/T) + 𝒪(1/Q) (see Theorem 2 in <cit.>). To balance the two convergence terms, we set Q = T/d. To perform Q evaluations of g(θ + r_i) - g(θ), we need Q+1 calls to . Each call makes multiple queries to the model f, but only one bad query if we use a line search. This yields the number of bad queries in the theorem (T iterations with T/d bad queries per iteration). <cit.> show that 's gradient estimator yields a convergence rate of [∇ g(x)_2] = 𝒪(√(d/T)) + 𝒪(d/√(Q')) (see Theorem 3.1 in <cit.>). To balance the two convergence terms, we set Q' = Td. To perform Q' evaluations of (g(θ + r_i) - g(θ)), one call to and Q' calls to are required. Each call makes a single query to the model f, i.e., 1/2 bad queries on average. This yields the number of bad queries in the theorem (T iterations with Td/2 bad queries per iteration). § ADDITIONAL FIGURES
http://arxiv.org/abs/2306.07575v2
20230613065347
A Half de Sitter Holography
[ "Taishi Kawamoto", "Shan-Ming Ruan", "Yu-ki Suzuki", "Tadashi Takayanagi" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2306.03449v1
20230606070540
Tidal evolution for any rheological model using a vectorial approach expressed in Hansen coefficients
[ "Alexandre C. M. Correia", "Ema F. S. Valente" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR", "physics.geo-ph" ]
Astron. J. Astrophys. J. Astrophys. J. Astrophys. J. (Supp.) Astron. Astrophys. Astrophys. Space Sci. Bull. Am. Meteorol. Soc. Bull. Seismol. Soc. Am. Eos Trans. AGU Earth Planet. Sci. Lett. Geochim. Cosmochim. Acta Geophys. J. Int. Geophys. J. R. Astron. Soc. Geophys. Res. Lett. Geol. Soc. Am. Bull. Icarus J. Atmos. Terr. Phys. J. Geophys. Res. J. Phys. Oceanogr. Mon. Not. R. Astron. Soc. Mon. Weather Rev. Nature Phys. Earth central body. Inter. Plan. Space Sci. Phys. Rev. A Phys. Rev. D Phys. Rev. Lett. Publ. A. S. P. Q. J. R. Meteorol. Soc. Rev. Geophys. Radio Sci. U.S. Geol. Surv. Open File Rep. U.S. Geol. Surv. Prof. Pap. === #1#1 iF_+F_- Gm_0m ωυ ϖϕx sinθ y zxyzx̂_1x̂_2x̂_3 erk×pq suLFGRT I S k' ∑_k=-∞^+∞ ∑_=-∞^+∞k_ fττ_eτ_aτ_v√(1-e^2) E_0 T_0 K_e K_ta(-kn)a(-kn)a(2-kn)b(-kn)b(-kn)b(2-kn)X_k^-3,0X_k^-3,1X_k^-3,-1X_k^-3,2X_k^-3,-2 eept' ∫ #1 #1#1 Tidal evolution using a vectorial approachA.C.M. Correia E.F.S. ValenteA. C. M. Correia CFisUC, Departamento de Física, Universidade de Coimbra, 3004-516 Coimbra, Portugal IMCCE, Observatoire de Paris, PSL Université, 77 Av. Denfert-Rochereau, 75014 Paris, France E. F. S. Valente CFisUC, Departamento de Física, Universidade de Coimbra, 3004-516 Coimbra, Portugal Tidal evolution for any rheological model using a vectorial approach expressed in Hansen coefficients. Alexandre C. M. Correia Ema F. S. Valente July 31, 2023 ====================================================================================================== We revisit the two body problem, where one body can be deformed under the action of tides raised by the companion. Tidal deformation and consequent dissipation result in spin and orbital evolution of the system. In general, the equations of motion are derived from the tidal potential developed in Fourier series expressed in terms of Keplerian elliptical elements, so that the variation of dissipation with amplitude and frequency can be examined. However, this method introduces multiple index summations and some orbital elements depend on the chosen frame, which is prone to confusion and errors. Here, we develop the quadrupole tidal potential solely in a series of Hansen coefficients, which are widely used in celestial mechanics and depend just on the eccentricity. We derive the secular equations of motion in a vectorial formalism, which is frame independent and valid for any rheological model. We provide expressions for a single average over the mean anomaly and for an additional average over the argument of the pericentre. These equations are suitable to model the long-term evolution of a large variety of systems and configurations, from planet-satellite to stellar binaries. We also compute the tidal energy released inside the body for an arbitrary configuration of the system. § INTRODUCTION Tidal effects arise when an extended body is placed in a differential gravitational field, such as the one generated by a point mass companion. For non-rigid bodies, each mass element adjusts to the equipotential surface, which gives rise to a global redistribution of mass known as tidal bulge. In the process, the friction inside the extended body introduces a delay between the initial perturbation and the maximal deformation. As a consequence, the companion exerts a torque on the tidal bulge of the extended body, which modifies its spin and the orbit. The estimates for the tidal evolution of a body are based on a very general formulation of the tidal potential, initiated by <cit.>. 
Assuming a homogeneous body consisting of an incompressible fluid with constant viscosity, derived a tide-generated disturbing potential expanded into a Fourier series expressed in terms of elliptical elements. The equations of motion are then obtained using the Lagrange planetary equations <cit.>. <cit.> writes the tidal potential using Love numbers <cit.>. Each term of the Fourier series involves a Love number associated with the amplitude of the tide and a phase lag accounting for the non-instantaneous deformation of the body. This description is more general than 's, because it makes no assumption on the rheology of the body <cit.>. Different rheologies have been proposed for satellites, rocky planets, giant gaseous or stars <cit.>, some because of their simplicity, others motivated by theoretical studies, laboratory experiments or geophysical measurements. The description by also has a more compact and systematic form of the tidal potential, thus being more convenient for examining the tidal effects of varying conditions. However, sometimes there is an ambiguity in the interpretation of the frequencies involved in the phase lags <cit.>. Moreover, mistakes such as neglecting energy or momentum conservation considerations are more easily made. Indeed, <cit.> have shown that additional terms need to be included owing to the conservation of the angular momentum and non-inertial frame considerations. The classical expansion of the tidal potential in elliptical elements depends on the chosen frame and also introduces multiple index summations, which can lead to confusion and errors in the equations of motion. In the case of a linear constant time-lag model, which can be seen as a first order expansion of a viscoelastic rheology <cit.>, it has been shown that the equations of motion are more easily expressed in terms of angular momentum vectors <cit.>. Therefore, in this paper we aim to also use these vectors for any rheological model that can be expressed in terms of Love numbers. This formalism is independent of the reference frame and allows to simply add the tidal contributions of multiple bodies in the system. In Sect. <ref>, using the quadrupole tidal potential expanded in series of Hansen coefficients, we obtain the equations of motion using vectors and Love numbers. In Sect. <ref> and Sect. <ref>, we average the equations of motion over the mean anomaly, and over the argument of the pericentre, respectively, which provide simpler expressions that are easily to implement and suitable for long-term evolution studies. In Sect. <ref>, we provide the equations of motion for the more simple planar case, and in Sect. <ref>, we explain how the equations of motion simplify for the linear constant time-lag model. Finally, we discuss our results in Sect. <ref>. § TWO BODY PROBLEM WITH TIDES tbpwt We consider a system of two bodies with masses and in a Keplerian orbit. The orbital angular momentum is given by <cit.> = β√(μ a (1-e^2)) , where a is the semi-major axis, e is the eccentricity, β = / ( + ), μ = ( + ), is the gravitational constant, and is the unit vector along the direction of , which is normal to the orbit. The body with mass , named as “perturber”, is a point mass object. The body with mass , named as “central body”, is an extended body and can be deformed under the action of tides. It rotates with angular velocity =, where is the unit vector along the direction of the spin axis. 
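For numerical work, the expression for the orbital angular momentum translates directly into code. The following minimal sketch assumes SI units and a numerical value for the gravitational constant; the function and variable names are only illustrative:

import numpy as np

G_GRAV = 6.67430e-11   # gravitational constant in SI units (assumed here; not fixed by the text)

def orbital_angular_momentum(m0, m, a, e, k_hat):
    # G = beta * sqrt(mu * a * (1 - e^2)) * k_hat, with beta the reduced mass
    # and mu the gravitational parameter of the two-body system.
    beta = m0 * m / (m0 + m)
    mu = G_GRAV * (m0 + m)
    return beta * np.sqrt(mu * a * (1.0 - e**2)) * np.asarray(k_hat, dtype=float)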
The rotational angular momentum is given by = C + · , 210804b where C is the moment of inertia of a sphere and = [[ I_11 I_12 I_13; I_12 I_22 I_23; I_13 I_23 I_33 ]] is the inertia tensor that accounts for the departure from the sphere. In the absence of the perturber, the central body is a perfect sphere, and so =0. However, when the perturber is present, the central body is deformed, and can change. In this work we assume that the central body is incompressible, and thus tr()=I_11 + I_22 + I_33 =0<cit.>. In general, tidal deformations are small and yield periodic changes in the moments of inertia, such that ≪ C<cit.>. Therefore, for simplicity, we can assume that the deformation is small with respect to the radius of the unperturbed sphere, R, such that I_ij≪ C (i,j=1,2,3), and ≈ C = C . 151019b §.§ Potential energy of a non-spherical body The gravitational potential of the central body at a generic position from its centre of mass is given by <cit.> V () = - /r + Δ V () 210805a with Δ V () = 3 /2 r^3 ·· , 121026b where r = |||| is the norm, = / r = (, , ) is the unit vector, and we neglect terms in (R/r)^3 (quadrupolar approximation). The point mass interacts with the potential of the central body (Eq. (<ref>)). The non-spherical contribution of this potential, Δ V (), is responsible for the perturbations to the Keplerian motion. The corresponding potential energy, U () = Δ V (), can be rewritten as U () = 3 /2r^3[ (I_22-I_11) (^2 - 1/3) + (I_33-I_11) (^2 -1/3) + 2 ( I_12 + I_13 + I_23) ] . 191014a §.§ Reference frames In this work we use two different reference frames, one attached to the orbit of the perturber and another to the spin axis of the central body (see Fig. <ref>). The reason for this choice is that the unit vectors and are easily directly obtained from the orbital (Eq. (<ref>)) and rotational (Eq. (<ref>)) angular momentum vectors, respectively. In general, these two vectors are not collinear, and so in a first step we need to build two independent frames as in <cit.>. We let (, , ) to be a cartesian frame such that is aligned with the normal to the orbit, and = / e is aligned with the pericentre of the orbit and obtained from the Laplace vector = ×/βμ - . 210805d We further let (,,) to be a cartesian frame, such that = ×/sinθ , = × = - cosθ /sinθ , where is aligned with the spin axis, and is aligned with the line of nodes between the equator of the central body and the orbital plane of the perturber (Fig. <ref>). The angle θ corresponds to the angle between and , that is, cosθ = ·. These two frames are also known as “precession frames”, as they follow the precession of the pericentre and node, respectively. They are connected through the rotation matrix [[ ; ; ]] = [[ cos cosθsin - sinθsin; - sin cosθcos - sinθcos; 0 sinθ cosθ ]] [[ ; ; ]] , 210804d where is the argument of the pericentre. We note that these two frames are not inertial, because the orbit and the spin can evolve due to tidal interactions. However, the changes are very small during an orbital period, and so the two frames can be assumed as constant. The position vector can be expressed in these two frames as = cos + sin = + + , where is the true anomaly, and = · = cos · + sin ·/sinθ = cos (+) , = · = - cos cosθ ·/sinθ + sincosθ · = cosθsin (+) , = · = cos · - sinsinθ · = - sinθsin (+) . 
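In practice, the orbital part of these precession frames can be assembled from an instantaneous state vector of the relative orbit. A minimal sketch (names are illustrative), using G = beta (r x v) for the instantaneous orbital angular momentum together with the Laplace vector defined above:

import numpy as np

def orbital_frame(r_vec, v_vec, beta, mu):
    # k_hat: unit normal to the orbit; e_hat: unit vector towards the pericentre,
    # from the Laplace vector e = (v x G)/(beta*mu) - r/|r| given above.
    G_vec = beta * np.cross(r_vec, v_vec)
    k_hat = G_vec / np.linalg.norm(G_vec)
    e_vec = np.cross(v_vec, G_vec) / (beta * mu) - r_vec / np.linalg.norm(r_vec)
    ecc = np.linalg.norm(e_vec)
    return k_hat, e_vec / ecc, ecc   # e_hat is ill-defined for a circular orbit (ecc -> 0)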
Similarly, = n a/√(1-e^2) ×( + ) = n a/√(1-e^2) ( (cos + e) × - sin ) ) , with · = - n a/√(1-e^2) ( sin ( + ) + e sin) , · = n a/√(1-e^2)cosθ ( cos ( + ) + e cos) , · = -n a/√(1-e^2)sinθ ( cos ( + ) + e cos) , where n = √(μ / a^3) is the mean motion. §.§ Equations of motion pmp The tidal force between the two bodies is easily obtained from the potential energy (Eq. (<ref>)) as = - ∇ U () = _1 + _2 , 170911d with _1 = 15 /r^4[ I_22-I_11/2(^2 - 1/5) + I_33-I_11/2(^2 - 1/5) + I_12 + I_13 + I_23] , 170911b and _2 = - 3 /r^4[ (I_22-I_11) + (I_33-I_11) + I_12 ( + ) + I_13 ( + ) + I_23 ( + ) ] . 170911c We decompose and the following vectorial quantities in the frame (, , ), as this greatly facilitates the computation of the inertia tensor (Sect. <ref>). There is no loss of generality with this choice, because the vectors (, , ) can always be expressed in another basis. In particular, we follow the evolution of the system in an inertial frame, since nothing forbids to project an inertial vector on a non-inertial coordinate system. We obtain for the orbital evolution of the system = - μ/r^2 + F/β . 151028c The first term corresponds to the Keplerian motion, while the second one corresponds to the correction from the tidal force. From a secular evolution perspective, we only care about the modifications in the orbit and spin that are brought by the second term. Therefore, the evolution of the systems is better described by the evolution of and for the orbit and for the spin. The evolution of the angular momentum vectors is computed from the gravitational torque. In an inertial frame, we have: = = ×F = ×_2 , 150626a and, owing to the conservation of the total angular momentum, = - = - , 210805b with 151028e = - 3 /r^3{ [ (I_33-I_22) - I_12 + I_13 + I_23 (^2 - ^2) ] + [ (I_11-I_33) + I_12 + I_13 (^2 - ^2) - I_23] + [ (I_22-I_11) + I_12 (^2 - ^2) - I_13 + I_23] } . The evolution of the Laplace vector is obtained by differentiating expression (<ref>) = 1/βμ( /β× + ×) . 210805c §.§ Deformation of the central body The mass distribution inside the central body is characterised by the coefficients of the inertia tensor, (Eq. (<ref>)). It is the result of self gravity and the response to any perturbing potential. Here, we consider that the central body is an extended body that deforms solely under the action of the differential gravitational force from the companion body of mass . A mass element d m at a location ' is thus subject to the perturbing potential <cit.> V_ (') = -/r(r'/r)^2 P_2 (' ·) , 121026e where is the position of the mass with respect to the centre of mass of the central body, P_2(x) = (3 x^2 -1)/2 is a Legendre polynomial, and we neglected terms higher than (r'/r)^3, that is, we consider only the quadrupole perturbations. Thus, on the central body's surface, , the non-spherical contribution of the perturbing potential is given by V_ () = -/r(R/r)^2 P_2 (·) , 210908a where R = ||. For simplicity, since the surface of the central body is nearly spherical, we can assume R to be constant and equal to the average radius. The above potential can be rearranged as V_ () = 3 /2 R^3 ·_· , 171110x with _/ R^2 = - /( R/r )^3 ( ^T - 1/3𝕀) , 121030b where ^T denotes the transpose, and 𝕀 is the identity matrix. The “perturbing” tensor _ can be seen as a perturbation of the inertia tensor, (Eq. (<ref>)). A convenient way of handling the deformation is through the Love number approach <cit.>. 
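Before turning to the Love number machinery, the instantaneous quadrupolar forcing itself is straightforward to evaluate. In the sketch below the overall prefactor of the perturbing tensor is taken to be the mass m_0 of the companion; this is an assumption made here (the extracted equation hides the factor), chosen so that the result carries the dimensions of an inertia tensor:

import numpy as np

def perturbing_tensor(r_vec, m0, R):
    # I_p = -(prefactor) * (R/r)^3 * R^2 * (rhat rhat^T - Id/3); the prefactor is taken
    # to be m0 here (assumption).  The tensor is traceless by construction, consistent
    # with the incompressibility condition tr(I) = 0 used above.
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return -m0 * R**2 * (R / r)**3 * (np.outer(rhat, rhat) - np.eye(3) / 3.0)

The Love number approach described next is what turns this instantaneous forcing into the delayed deformation of the body.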
It assumes that the deformations are small and can be made linear, that is, proportional to the perturbing potential. The total tidal potential is thus a linear combination of the tidal responses to each excitation V_ (, ) over time, with ≤ t. In the frame of the central body, B, for any point of its surface we thus have Δ V (, t) = k_2(t-) V_ (, ) d = k_2(t) * V_ (, t) , 210909a where * is the convolution product. k_2(t) is a Love distribution such that k_2(t) = 0 for all t<0, which depends on the internal structure of the central body <cit.>. The knowledge of Δ V (, t) is sufficient to constrain Δ V (, t), because it is the solution of a Dirichlet problem. Therefore, by comparing expressions (<ref>) and (<ref>) with the gravitational potential (Eq. (<ref>)) taken at the central body's surface (=), we finally get for the deformation ^B(t) = k_2(t-) ^B_p() d = k_2(t) * ^B_p(t) , 200116a where ^B and ^B_ are the inertia (Eq. (<ref>)) and the perturbing (Eq. (<ref>)) tensors, respectively, expressed in the frame of the central body. §.§ Deformation in the frequency domain deformation_body The inertia tensor in the frequency domain can be obtained by performing a Fourier transform ^B(σ) = ^B(t) ^-σ t d t = k̂_2 (σ) ^B_p (σ) , 200116b where k̂_2(σ) = k_2 (t) ^-σ t d t 210929e is the second Love number, and we applied the properties of the convolution product. k̂_2(σ) is a complex number, whose modulus gives the amplitude of the tidal deformation and the argument gives the phase lag between the perturbation and the deformation. It follows that the deformation in the body frame (Eq. (<ref>)) is much easier to compute in the frequency domain (Eq. (<ref>)). However, it is a bit more difficult to compute the deformation in an arbitrary frame. We let be the rotation matrix that allows us to convert any vector _B in a frame attached to the central body into another frame , such that = _B, (t) = (t) ^B(t) (t)^T , and _p (t) = (t) ^B_p(t) (t)^T . 200116c Then (σ) = (t) ^-σ t d t = (t) ^B(t) (t)^T ^-σ t d t = k_2(t-) (t) ^B_p() (t)^T ^-σ t d d t = k_2() (+) ^B_p() (+)^T ^-σ ( + ) d d . 200116d Since (t) is a rotation matrix, we note that (+) = () (), thus (σ) = k_2() () _p() ()^T ^-σ ( + ) d d = k_2() () _p(σ) ()^T ^-σ d . 200116e At this stage, we need to explicit () to proceed. For simplicity, we adopt the frame (,,), for which we have already obtained the equations of motion (Sect. <ref>), and where is the spin axis (Fig. <ref>). Since the central body rotates about the axis with velocity , we have () = [[ cos -sin 0; sin cos 0; 0 0 1 ]] , 200116f which gives () _p(σ) ()^T = [[ -ΔÎ^cos 2 - Î_12^sin 2 -1/2Î_33^ , -ΔÎ^sin 2 + Î_12^cos 2 , Î_13^cos - Î_23^sin -ΔÎ^sin 2 + Î_12^cos 2 , ΔÎ^cos 2 + Î_12^sin 2 -1/2Î_33^ , Î_13^sin + Î_23^cosÎ_13^cos - Î_23^sin , Î_13^sin + Î_23^cos , Î_33^ ]] , 200117a where ΔÎ^ = 1/2 (Î_22^ - Î_11^). We thus have, for instance, 200117bÎ_13 (σ) = k_2() [ Î_13^ (σ) cos - Î_23^ (σ) sin] ^-σ d = Î_13^ (σ) k_2() cos ^-σ d - Î_23^ (σ) k_2() sin ^-σ d = 1/2k̂_2 (σ-) [ Î_13^ (σ) + Î_23^ (σ) ] + 1/2k̂_2 (σ+) [ Î_13^ (σ) - Î_23^ (σ) ] . 
Similarly, we obtain for the remaining coefficients of (σ): Î_23 (σ) = 1/2k̂_2 (σ-) [ Î_23^ (σ) - Î_13^ (σ) ] + 1/2k̂_2 (σ+) [ Î_23^ (σ) + Î_13^ (σ) ] , 200117c Î_12 (σ) = 1/2k̂_2 (σ-2) [ Î_12^ (σ) + ΔÎ^ (σ) ] + 1/2k̂_2 (σ+2) [ Î_12^ (σ) - ΔÎ^ (σ) ] , 200117dÎ_33 (σ) =k̂_2 (σ) Î_33^ (σ) , 200117fÎ_22 (σ) = ΔÎ (σ) - 1/2Î_33 (σ) , 200117gÎ_11 (σ) = - ΔÎ (σ) - 1/2Î_33 (σ) , 200117h with ΔÎ (σ) = 1/2k̂_2 (σ-2) [ ΔÎ^ (σ) - Î_12^ (σ) ] + 1/2k̂_2 (σ+2) [ ΔÎ^ (σ) + Î_12^ (σ) ] . 200117e The coefficients of the inertia tensor (Eq. (<ref>)) are now expressed in the frame (,,) and can be directly used in expressions (<ref>), (<ref>) and (<ref>), provided that the coefficients of the perturbing tensor _ (Eq. (<ref>)) are also given in this frame: I_11^/ R^2 = - /(R/r )^3 ( ^2 - 1/3) , I_12^/ R^2 = - /(R/r )^3 , 151106aI_22^/ R^2 = - /(R/r )^3 ( ^2 - 1/3) , I_23^/ R^2 = - /(R/r )^3 , 151106bI_33^/ R^2 = - /(R/r )^3 ( ^2 - 1/3) , I_13^/ R^2 = - /(R/r )^3 . 151106f §.§ Hansen coefficients In order to use the coefficients of the inertia tensor (σ) (Eqs. (<ref>)-(<ref>)) in the equations of motion (Sect. <ref>) we need to return to the time domain using an inverse Fourier transform: (t) = (σ) ^σ t d σ . 210910a In general, the tidal perturbations introduced by the perturbing tensor _ (t) are periodic and thus only a discrete number of frequencies exist, that is, we can express (t) as a Fourier series: (t) = ∑_k (σ_k) ^σ_k t . 210910b In the frame (,,), the position of the perturber m_0 only depends on its orbital motion (Eq. (<ref>)). Therefore, the only frequencies are the orbital mean motion, n, and its harmonics. A convenient way of expressing _p is through the Hansen coefficients, X_k^ℓ,m<cit.>: ( r/a)^ℓ^ m = X_k^ℓ,m(e) ^ k M , 210910c with X_k^ℓ,m (e) = 1/2 π∫_-π^π( r/a)^ℓ^ (m -k M) d M = 1/π∫_0^π( 1-e^2/1+e cos)^ℓcos (m -k M) d M . 210910d For instance, for the I_23^ coefficient (Eq. (<ref>)), we get from expression (<ref>) I_23^/ R^2 = /(R/r )^3 sinθcosθsin^2 (+) = 1/4/(R/a)^3 sinθcosθ(a/r)^3 (2 - ^ 2 (+) - ^- 2 (+)) = 1/4/(R/a)^3 sinθ (2 X_k^-3,0 - ^ 2 X_k^-3,2 - ^- 2 X_k^-3,-2) ^ k M , 211001e where = cosθ = ·, and (Eq. (<ref>)) ^ϖ = · - ·/sinθ ⇒ ^ 2 ϖ = (·)^2 - ( ·/sinθ)^2 - 2 (·) (·/sinθ) . 210915a Similarly, for the I_13^ coefficient (Eq. (<ref>)), we get I_13^/ R^2 = 1/4 /(R/a)^3 sinθ ( ^ 2 X_k^-3,2 - ^- 2 X_k^-3,-2) ^ k M . 211001d Then, for the I_23 coefficient we finally have (Eqs. (<ref>) and (<ref>)) 211001f I_23 = R^5/8 a^3sinθ [ k̂_2 (kn-) (2 X_k^-3,0 - (1+) ^ 2 X_k^-3,2 + (1-) ^- 2 X_k^-3,-2) + k̂_2 (kn+) (2 X_k^-3,0 + (1- ) ^ 2 X_k^-3,2 - (1+) ^- 2 X_k^-3,-2) ] ^ k M . Since the coefficients of the _ tensor are all I_ij^∝ r^-3 (Eqs. (<ref>)-(<ref>)), in the expression of the inertia tensor can only appear Hansen coefficients of the kind X_k^-3,m. The same occurs with the torque (Eq. (<ref>)). The force (Eq. (<ref>)) and the derivative of the Laplace vector (Eq. (<ref>)) are proportional to r^-4 and we thus expect terms in X_k^-4,m. However, it is possible to provide the entire set of equations of motion only in terms of the coefficients X_k^-3,0, X_k^-3,1, and X_k^-3,2 (Table <ref>), by using some recorrence properties of the Hansen coefficients (see appendix <ref>). §.§ Tidal models tidalmodels The tidal deformation of the central body is completely described by expression (<ref>), where (σ) is obtained from the Love number k̂_2 (σ) combined with the perturbing tensor _ (σ) (Eqs. (<ref>)-(<ref>)). 
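Since both the inertia tensor and the equations of motion end up expressed through the coefficients X_k^{-3,m}(e) introduced above, it is useful to be able to evaluate them numerically. A small sketch of the defining integral, using a rectangle rule over the mean anomaly and a Newton solver for Kepler's equation (tolerances and names are illustrative):

import numpy as np

def hansen(l, m, k, e, N=2000):
    # X_k^{l,m}(e) = (1/pi) * int_0^pi (r/a)^l * cos(m*nu - k*M) dM
    M = (np.arange(N) + 0.5) * np.pi / N          # midpoints in mean anomaly
    E = M.copy()                                  # Kepler's equation, Newton iterations
    for _ in range(50):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    r_a = 1.0 - e * np.cos(E)                     # r/a
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return np.mean(r_a**l * np.cos(m * nu - k * M))

# quick check: X_0^{-3,0}(e) should equal (1 - e^2)**(-3/2),
# e.g. hansen(-3, 0, 0, 0.3) is close to (1 - 0.3**2)**(-1.5)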
While the perturbing tensor is well determined, as it depends only on the position of the perturbing body (Eqs. (<ref>)-(<ref>)), the Love number is subject to large uncertainties, as it depends on the internal structure of the central body. Therefore, in order to compute k̂_2 (σ) one needs to adopt some rheological model for the deformation. A large variety of models exist, but the most commonly used are the constant-Q<cit.>, the linear model <cit.>, the Maxwell model <cit.>, and the Andrade model <cit.>. Some models appear to be better suited to certain situations, but there is no model that is globally accepted. However, viscoelastic rheologies are usually more realistic as they are able to reproduce the main features of tidal dissipation <cit.>. For a review of the main viscoelastic models see <cit.>. The Love number is a complex number, and so it can be decomposed in its real and imaginary parts as k̂_2 (σ) = a (σ) - b (σ) . 210924d This partition is very useful when we write the equations of motion (see Sects. <ref> and <ref>), because the imaginary part characterises the material's viscous phase lag and is thus directly related to the amount of energy dissipated by tides. This is why we write expression (<ref>) with a minus for the imaginary part, the deformation lags behind the perturbation and therefore the imaginary part is always negative. In addition, from expression (<ref>) we have k̂_2^* (σ) = k̂_2 (-σ), and so a (σ) = a (-σ) is always an even function and b (σ) = - b (-σ) is always an odd function. Here, we provide the expressions of a (σ) and b (σ) for the more frequently used tidal models. §.§.§ Constant-Q model A commonly used dimensionless measure of the tidal dissipation is the quality factor Q^-1 = 1/2 π E_0∮Ė d t , 131004b where the line integral over Ė is the energy dissipated during one period of tidal stress, and E_0 is the peak energy stored in the system during the same period. In general, Q is a function of the frequency, and can be related to the Love number through <cit.> b (σ) = |k̂_2(σ)|/Q (σ) . 211001a The constant-Q model assumes that both the quality factor and the norm of the Love number are constant for all frequencies, that is, Q(σ) = Q(0) = Q_0 and |k̂_2(σ)| = |k̂_2(0)| =. The quantity is the fluid Love number, which corresponds to the maximal deformation resulting from a permanent perturbation and depends only on the internal structure of the central body. For instance, for a homogenous sphere, we have = 3/2. Since b (σ) is an odd function we have a(σ) = , and b(σ) = /Q_0 sign (σ) . 211001bThe constant-Q model is appropriate for short evolution timescales, where the orbital and the spin frequencies do not change dramatically. §.§.§ Linear or weak friction model The linear or weak friction model assumes that the time delay, Δ t, between the maximal deformation and the perturbation is constant and small <cit.>. This is equivalent to assume that the Love distribution is a Dirac distribution <cit.> k_2 (t) = δ (t - Δ t) . 210929d As a result, we have (Eq. (<ref>)) k̂_2(σ) = ^- σΔ t ≈ (1 - σΔ t) , 210929f that is a(σ) = , and b(σ) = σΔ t . 210929c The time delay can be related to the Newton fluid relaxation time, Δ t =, with = 38 πη R^4/3 ^2 , 210929b where η is the viscosity of the fluid. The linear model is appropriate for bodies with small viscosities. It is also a good approximation of other tidal models when σ≈ 0. §.§.§ Maxwell model A material is called a Maxwell solid when it responds to stresses like a massless, damped harmonic oscillator. 
It is characterised by a rigidity μ (or shear modulus), and by a viscosity η. It is one of the simplest viscoelastic models, where the material behaves like an elastic solid over short time scales (η→∞), but flows like a fluid over long periods of time (μ→∞). The Love number for the Maxwell model is given by <cit.>: k̂_2 (σ) = 1 + σ/1 + στ , 210929a where = η / μ is the elastic or Maxwell relaxation time and τ = +. We thus have a(σ) = 1 + σ^2 τ/1 + (στ)^2 , and b(σ) = σ/1 + (στ)^2 . 210929zThe Maxwell model is appropriate for bodies with a rocky nature. For small tidal frequencies (στ≪ 1) it is similar to the linear model, but for high tidal frequencies (στ≫ 1) it becomes inversely proportional to the tidal frequency, σ. This feature gives rise to the appearance of non-synchronous spin-orbit resonances in this regime <cit.>. §.§.§ Andrade model The Andrade model is also a viscoelastic model, but more complex than the Maxwell one. It is derived from laboratory measurements on the response of materials to stress. It can be represented by three mechanical elements combined in series, a spring, a dashpot and a spring-pot <cit.>. The Love number for the Andrade model is given by <cit.>: k̂_2 (σ) = /1 + μ̂(σ) , 210930a where μ̂(σ) = /[1 - (σ)^-1 + (σ)^-αΓ (1+α) ]^-1 , 210930b is the effective rigidity. The parameter α is an empirical adjustable parameter whose value depends on the material, we usually have 0.2 ≤α≤ 0.4. The quantity is the timescale associated with the Andrade creep and may be termed as the “Andrade” or the “anelastic” time. Then, a(σ) = [ 1 - A (σ) σ/ A (σ)^2 + B (σ)^2] , and b(σ) = B (σ) σ/ A (σ)^2 + B (σ)^2 , 210930c with A (σ) = (στ) [ 1 + |στ|^-α(/τ) (τ/)^αcos(απ/2) Γ (1+α) ] , 210930d and B (σ) = 1 + |στ|^1-α(/τ) (τ/)^αsin(απ/2) Γ (1+α) . 210930e The Andrade model is also appropriate for bodies with a rocky nature, which includes the effect of transient creep response. As a result, for high tidal frequencies (στ≫ 1) it dissipates more energy than in the case of the Maxwell model <cit.>. § AVERAGE OVER THE MEAN ANOMALY singav In general, tidal effects slowly modify the spin and the orbit of the central body, in a timescale much longer than the orbital period of the system. Therefore, we can average the equations of motion (Sect. <ref>) over the mean anomaly and obtain the equations for the secular evolution of the system due to tidal effects. In order to proceed with this calculation, we first need to expand the tidal torque (Eq. (<ref>)) and the derivative of the Laplace vector (Eq. (<ref>)) in Hansen coefficients (Eq. (<ref>)). For instance, for the last term in the component of the tidal torque we have (Eq. (<ref>)) - 3 /r^3 I_23 = 3 /2 r^3 I_23sinθsin (2 +2) = 3 /4 a^3 I_23sinθ ( ^ 2 X_^-3,2 - ^- 2 X_^-3,-2) ^ M . 211001c Then, we replace the expression of I_23 also expanded in Hansen coefficients (Eq. (<ref>)), and average over the mean anomaly, M, which is equivalent to retain only the terms with = -k: 211006a⟨ - 3 /r^3 I_23⟩_M = 3 ^2 R^5/32 a^6sin^2 θ( ^ 2 X_-k^-3,2 - ^- 2 X_-k^-3,-2) × [ k̂_2 (kn-) (2 X_k^-3,0 - (1+) ^ 2 X_k^-3,2 + (1-) ^- 2 X_k^-3,-2) + k̂_2 (kn+) (2 X_k^-3,0 + (1- ) ^ 2 X_k^-3,2 - (1+) ^- 2 X_k^-3,-2) ] . Finally, we decompose the Love number in its real and imaginary parts (Eq. (<ref>)), make use of their parity properties, and use the simplification X_-k^-3,m = X_k^-3,-m (Eq. 
(<ref>)) to write 211006b⟨ - 3 /r^3 I_23⟩_M = 3 ^2 R^5/16 a^6sin^2 θ{ b (-kn) [ 2 cos 2 X_k^-3,0( X_k^-3,-2 - X_k^-3,2) - 2 cos 4 X_k^-3,-2 X_k^-3,2 + (1-x) (X_k^-3,-2)^2 + (1+x) (X_k^-3,2)^2 ] + 2 a (-kn) [ sin 2 X_k^-3,0( X_k^-3,-2 + X_k^-3,2) - sin 4 X_k^-3,-2 X_k^-3,2] } , This last arrangement of the Love number is very useful, because terms in a(σ) correspond to conservative contributions to the equations of motion, while the terms in b(σ) are responsible for the dissipation and consequent tidal evolution. In addition, we are able to combine terms in k̂_2 (± kn) in a single term k̂_2 (-kn). §.§ Tidal torque The tidal torque is responsible for the variations that occur in the angular momenta (Eqs. (<ref>) and (<ref>)). In the reference frame (, , ) its average value is obtained from expression (<ref>) as ⟨⟩_M = T_p + T_q + T_s . 211017a However, we verify there exist some interesting symmetries in the expression of the average torque, which simplifies if we introduce two additional projections ⟨⟩_M = T̂_p + T̂_q + T̂_s + T_4 + T_5 × , 211017b with T̂_p = T_p - T_4 · + T_5 · , T̂_q = T_q - T_5 · - T_4 · , T̂_s = T_s - T_4 · , 211017c or, T̂_p = T_p + T_4 - T_5 /sinθ , T̂_q = T_q + T_4 + T_5 /sinθ , T̂_s = T_s - T_4 , 211017c where we used the following notations: = · = cosθ , = · = - sinθsin , and = (×) · = - sinθcos . 211026a This rearrangement of the projections removes apparent singularities for e=0, since all terms in cos and sin are transferred to the Laplace vector (Eq. (<ref>)). Similarly, in order to remove the apparent singularities for θ=0 we finally write ⟨⟩_M = T_1 + T_2 + T_3 × + T_4 + T_5 × , 211017d with T_1 = T̂_q/sinθ , T_2 = ( T̂_s - cosθT̂_q/sinθ) , T_3 = T̂_p/sinθ . 211017e The expressions of the coefficients T_1, ... , T_5 do not present any singularities and can be obtained solely from the angular momentum and Laplace unit vectors as 211027t1 T_1 = - {332 [ 3 (1-^2) (()^2 - ()^2) - 2 (1-3 ^2-2 ^2) ( - ) ] + 316 [ 4 ^3 ()^2 + (1-)^2 (2+) ()^2 - (1+)^2 (2-) ()^2 + 4 (1-^2-^2) ( + ) - 2 (1-^2-4 ^2) ] + 316 [ 2 (1-^2) ()^2 + 12 (1-)^3 ()^2 - 12 (1+)^3 ()^2 + ((1-)^2 (1+2 )-2 (1-) ^2) - ((1+)^2 (1-2 )-2 (1+) ^2) + (1-^2-4 ^2) ] } , 211027t2 T_2 = {332 [ 3 (1-^2) (()^2 - ()^2) - 2 (1-3 ^2-6 ^2) ( - ) ] + 316 [ 4 ^2 ()^2 + (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 + 4 ^2 ( + ) + 4 (1-^2-2 ^2) ( - ) - 2 (1-^2 -4 ^2) ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 + (1+)^3 ()^2 + 4 (1-^2) ( + ) - 2 (3-^2-2 ^2) ( - ) + 2 (1-^2-4 ^2) ] } , 211027t3 T_3 = - {316 [ 2 (1-3 ^2) ()^2 + 32 (1-^2) (()^2 + ()^2) - 2 (2-3 (^2+^2)) ( + ) + 3 (1-^2-4 ^2) ] - 316 [ 4 (1-2 ^2) ()^2 - (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 + 2 ((1-) (1--4 ^2)-2 (1-2 ) ^2) - 2 ((1+) (1+-4 ^2)-2 (1+2 ) ^2) + 4 (1-^2-4 ^2) ] + 316 [ 2 (1-^2) ()^2 + 12 (1-)^3 ()^2 - 12 (1+)^3 ()^2 + ((1-)^2 (1+2 )-2 (1-) ^2) - ((1+)^2 (1-2 )-2 (1+) ^2) + (1-^2-4 ^2) ] } , 211027t4 T_4 = - {34 ( - ) + 34 [ (1-^2) ( + ) - 2 (1-^2-2 ^2) ] + 38 [ (1-)^2 + (1+)^2 + 2 (1-^2-2 ^2) ] } , 211027t5 T_5 = - {38 [ (1-3 ^2) ( + ) - 6 (1-^2-2 ^2) ] - 32 [ (1-) - (1+) - 2 (1-^2-2 ^2) ] - 38 [ (1-)^2 + (1+)^2 + 2 (1-^2-2 ^2) ] } , where = ^2 R^5/a^6 . 211029c We note that terms in = · only appear combined with the product of two different Hansen coefficients. This product is proportional to e^2 (Table <ref>) and therefore there is no problem if we are unable to accurately obtain from to compute when e ≈ 0. 
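The functions a(sigma) and b(sigma) entering the coefficients T_1 to T_5, evaluated at the relevant combinations of the mean motion and the rotation rate, are supplied by whichever rheology of the tidal models subsection is adopted. As an illustration, the simplest models translate into a few lines of code; for the Maxwell case the elided symbols of the expression above are read here as the fluid Love number k_f and the elastic relaxation time tau_e, which is an assumption on our part (it reproduces the standard Maxwell form with tau = tau_e + tau_v):

import numpy as np

def love_constant_Q(sigma, kf, Q0):
    # a(sigma) = kf, b(sigma) = (kf / Q0) * sign(sigma)
    return kf, (kf / Q0) * np.sign(sigma)

def love_linear(sigma, kf, dt):
    # a(sigma) = kf, b(sigma) = kf * sigma * dt (constant time lag dt)
    return kf, kf * sigma * dt

def love_maxwell(sigma, kf, tau_e, tau):
    # k2(sigma) = kf * (1 + i sigma tau_e) / (1 + i sigma tau), returned as (a, b)
    # with the sign convention k2 = a - i b used in the text.
    k2 = kf * (1.0 + 1j * sigma * tau_e) / (1.0 + 1j * sigma * tau)
    return k2.real, -k2.imag

Swapping the rheology then amounts to swapping which of these functions supplies a(sigma) and b(sigma) in the coefficients above.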
We also note that the coefficients T_1, T_2, and T_4 solely depend on b(σ), thus contributing to a secular evolution of the angular momenta, while the coefficients T_3 and T_5 solely depend on a(σ), thus resulting only in a precession of the angular momentum vectors. §.§ Laplace vector The Laplace vector is responsible for the variations that occur in the eccentricity and argument of the pericentre (Eq. (<ref>)). As for the torque (Eq. (<ref>)), we first need to express in the reference frame (, , ), such that we can write the expression of using Love numbers and the Hansen coefficients (Eq. (<ref>)). However, since the Laplace vector is attached to the orbital plane (Fig. <ref>), its expression becomes simpler if we project it on the frame (, , ), which can be done by inverting expression (<ref>). Moreover, two of the projections in this new frame directly give the variation of the eccentricity and the argument of the pericentre. Therefore, after averaging over the mean anomaly and rewriting the Hansen coefficients in the format X_k^-3,m (appendix <ref>), we get ⟨⟩_M = ė + + e × , 211026b with 211026z1ė = - /e{38 [ (1-3 ^2) ( + ) - 6(1-^2-2^2) ] - 32 [ (1-) - (1+) - 2 (1-^2-2^2) ] - 38 [ (1-)^2 + (1+)^2 + 2 (1-^2-2^2) ] - 132 [ 2 (1-3^2)^2 ()^2 k + 9 (1-^2)^2 ()^2 (1+ k/2) - 9 (1-^2)^2 ()^2 (1- k/2) - 6 (1-3^2) (1-^2-2^2) (1+k ) + 6 (1-3^2) (1-^2-2^2) (1-k ) + 9 ((1-^2)^2-8 (1-^2-^2)^2) k ] - 316 [ 4 (1-^2) ^2 ()^2 k + (1-^2) (1-)^2 ()^2 (2+k ) - (1-^2) (1+)^2 ()^2 (2-k ) + 4(1-) (1-^2-2 ^2) (1+k ) + 4(1+) (1-^2-2 ^2) (1-k ) - 2 ((1-^2)^2-8 (1-^2-^2) ^2) k ] - 364 [ 4 (1-^2)^2 ()^2 k + (1-)^4 ()^2 (2+k ) - (1+)^4 ()^2 (2-k ) + 4 (1-)^2 (1-^2-2 ^2) (1+k ) - 4 (1+)^2 (1-^2-2 ^2) (1-k ) + 2 ((1-^2)^2-8 (1-^2-^2) ^2) k ] } , 211026z2 = - e/{332 [ 4 (1-3 ^2) ()^2 + 3 (1-^2) (()^2 + ()^2) - 4 (2-3 (^2+^2)) ( + ) + 6 (1-^2-4 ^2) ] - 316 [ 4 (1-2 ^2) ()^2 - (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 + 2 ((1-) (1--4 ^2)-2 (1-2 ) ^2) - 2 ((1+) (1+-4 ^2)-2 (1+2 ) ^2) + 4 (1-^2-4 ^2) ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 - (1+)^3 ()^2 + 2 ((1-)^2 (1+2 )-2 (1-) ^2) - 2 ((1+)^2 (1-2 )-2 (1+) ^2) + 2 (1-^2-4 ^2) ] + 332 [ 3 (1-^2) (()^2 - ()^2) - 2 (5- 3 (^2+2 ^2)) ( - ) ] + 316 [ 4 ^2 ()^2 + (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 - 4 ((1-) (1-^2)-(1-2 ) ^2) - 4 ((1+) (1-^2)-(1+2 ) ^2) + 2 (3-3 ^2-4 ^2) ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 + (1+)^3 ()^2 + 2 ((1-)^2-2 (1-) ^2) - 2 ((1+)^2+2 (1+) ^2) - 2 (3-3^2-4 ^2) ] } , 211026z3 e _a = - /e {332 [ 2 (^2-^2-3 ^2 (^2-^2)) e^2 ()^2 + 32 ((1-^2)^2 (2+k (1-e^2)^3/2)-(^2 (1-^2)+(3-5 ^2-2 ^2) ^2) e^2) ()^2 + 32 ((1-^2)^2 (2-k (1-e^2)^3/2)-(^2 (1-^2)+(3-5 ^2-2 ^2) ^2) e^2) ()^2 - 2 ((1-^2 (4-3 ^2)-2 ^2 (1-3 ^2))+(^2-^2) (1-3 ^2) e^2) ( + ) - (1-^2 (4-3 ^2)-2 ^2 (1-3 ^2)+4 ^2 (1-^2-^2)) ( - ) k (1-e^2)^3/2 + 3 (2 (1-^2)^2-16 ^2 (1-^2-^2)+(^2 (5-6 ^2)-^2 (1-^2+3 ^2)) e^2) - 2 (1-^2-^2 (5-6 ^2-3 ^2)) ( + ) e + 3 ((1-3 ^2+2 ^4)-^2 (3-2 ^2-5 ^2)) ( + ) ( + ) e ] - 18 [ 2 (3 (1-^2+^2) e^2+(1-^2-2 ^2) k (1-e^2)^3/2) ()^2 - 32 ((1-)^2 (1-^2) (2+k (1-e^2)^3/2)-(1-2 ^2 (1-^2)-3 ^2-^2 (2-5 ^2-^2)) e^2) ()^2 - 32 ((1+)^2 (1-^2) (2-k (1-e^2)^3/2)-(1-2 ^2 (1-^2)+3 ^2-^2 (2-5 ^2-^2)) e^2) ()^2 + 2 (3 ^2 (1-^2-2 ^2)+3 ^2 (1+^2-^2) e^2-(2-2 ^2-3 ^2) k (1-e^2)^3/2) ( + ) - (6 (1-^2-2 ^2)-3 (2-2 ^2-5 ^2) e^2 -(1-4 ^2 (1-^2)+^2 (2-3 ^2-2 ^2)) k (1-e^2)^3/2) ( - ) + 3 (2 (1-^2)^2-16 ^2 (1-^2-^2)-(1-6 ^2 (1-^2)+^2 (3 ^2-^2)) e^2) + 6 ^2 (2-2 ^2-^2) ( + ) e - 3 (2 ^2 (1-^2)-(2-3 ^2)+^2 (2+2 -2 ^2-5 ^2)) ( + ) e - 3 (2 ^2 (1-^2)+(2-3 ^2)+^2 (2-2 -2 ^2-5 ^2)) ( + ) e ] + 132 [ 2 (2 
(1-^2-2 ^2) k (1-e^2)^3/2-3 (2-^2-^2 (3-^2+^2)) e^2) ()^2 + 32 ((1-)^4 (2+k (1-e^2)^3/2)-(2-^2 (1+2 ^2)-6 (1-^2)+^2 (5-5 ^2-^2)) e^2) ()^2 + 32 ((1+)^4 (2-k (1-e^2)^3/2)-(2-^2 (1+2 ^2)+6 (1-^2)+^2 (5-5 ^2-^2)) e^2) ()^2 + 2 (3 (1-2 ^2-^2 (^2+2 ^2))-3 (2-^2 (3-^2)-^2 (1+^2)) e^2 -2 (1-3 ^2-2 ^2) k (1-e^2)^3/2) ( + ) - (12 (1-^2-2 ^2)-6 (3-2 ^2-5 ^2) e^2 -(3-2 ^2 (5-2 ^2)-^2 (4+2 ^2+3 ^2)) k (1-e^2)^3/2) ( - ) + 3 (2 (1-^2)^2-16 ^2 (1-^2-^2)-(2-9 ^2+6 ^4-^2 (3+^2-3 ^2)) e^2) - 6 (1+^2-^2 (3-2 ^2-^2)) ( + ) e - 3 (1-^2 (1+2 ^2)-2 (1-3 ^2)-^2 (1-4 +2 ^2+5 ^2)) ( + ) e - 3 (1-^2 (1+2 ^2)+2 (1-3 ^2)-^2 (1+4 +2 ^2+5 ^2)) ( + ) e ] } , e _b = /e {116 [ 2 (1-3 ^2) ()^2 k (1-e^2)^3/2 - 92 (1-2 ^2-^2) e^2 (()^2 - ()^2) - 3 (2 (1-3 ^2)-(1-3 ^2) e^2) ( - ) - 6 (1-2 ^2-^2) ( + ) k (1-e^2)^3/2 + 18 (1-^2-2 ^2) k (1-e^2)^3/2 - 9 (1-2 ^2-^2) ( + ) ( - ) e ] - 18 [ 2 (3 e^2-2 k (1-e^2)^3/2) ()^2 - 32 (1+3 -4 ^2-2 ^2) e^2 ()^2 + 32 (1-3 -4 ^2-2 ^2) e^2 ()^2 - 2 (6 +3 e^2+(1-4 ^2-2 ^2) k (1-e^2)^3/2) ( + ) + 3 (4 ^2+(1-2 ^2) e^2-2 k (1-e^2)^3/2) ( - ) + 3 (4 (1-^2-2 ^2) k (1-e^2)^3/2-3 e^2) - 6 ( + ) e - 3 (1+3 -4 ^2-2 ^2) ( + ) e + 3 (1-3 -4 ^2-2 ^2) ( + ) e ] - 116 [ 2 ((1+^2) k (1-e^2)^3/2-3 e^2) ()^2 - 32 (1-3 +2 ^2+^2) e^2 ()^2 + 32 (1+3 +2 ^2+^2) e^2 ()^2 + 2 (6 +3 e^2-(1+2 ^2+^2) k (1-e^2)^3/2) ( + ) - 3 (2 (1+^2)+(1-^2) e^2-2 k (1-e^2)^3/2) ( - ) - 3 (2 (1-^2-2 ^2) k (1-e^2)^3/2-3 e^2) + 6 ( + ) e - 3 (1-3 +2 ^2+^2) ( + ) e + 3 (1+3 +2 ^2+^2) ( + ) e ] } , where = _a + _b , 211026y and = n ( /) ( R/a)^5 = /β n a^2 . We note that for the ė and e projections of the Laplace vector, all terms in the series appear combined with one of the following combinations of the Hansen coefficients: X_k^-3,m X_k^-3,m' with m m', e X_k^-3,± 1 X_k^-3,m, e^2 (X_k^-3,m)^2, k (X_k^-3,0)^2, or (2∓ k) (X_k^-3,± 2)^2. As a result, all these terms are proportional to e^2 for all values of k (Table <ref>), which suppresses the potential singularity at e=0 introduced by the factor e^-1 in their amplitudes. It also ensures that there is no problem with the terms in and if we are unable to accurately obtain from when e ≈ 0 (Eq. (<ref>)). §.§ Orbital and spin evolution oase1 The set of equations (<ref>) and (<ref>) allow us to track the evolution of the averaged system using the angular momentum and the Laplace vectors (Sect. <ref>). For a more intuitive description of the orbital and spin evolution, we can relate these vectors with the rotation and orbital elliptic elements. The eccentricity can be directly obtained from the Laplace vector, e = √(·) , 211027a the semi-major axis from the orbital angular momentum (Eq. (<ref>)), a = ·/β^2 μ (1-e^2) , 211027b and the rotation rate from the rotational angular momentum (Eq. (<ref>)), = √(·)/C . 211027c The argument of the pericentre is given by expression (<ref>), while the angle between the orbital and equatorial planes, can be obtained from both angular momentum vectors as cosθ = · = ·/√((·) (·)) . 211027z For a better comparison with previous studies, we can also obtain the evolution of all these quantities. The eccentricity and the argument of the pericentre evolution are already given by two projections of the Laplace vector, namely by ė (Eq. (<ref>)) and (Eq. (<ref>)), respectively: ė = · , and = · (×) / e . 
211027d The semi-major axis evolution is given from expressions (<ref>), (<ref>) and (<ref>), ȧ = 2 ·/β^2 μ (1-e^2) + 2 a ·/(1-e^2) = 2 (T_1 + T_2 - T_5 )/β n a √(1-e^2) + 2 a e ė/(1-e^2) , 211027e or, making use of expressions (<ref>), (<ref>), (<ref>), and (<ref>), we have 211028cȧ/a = {116 k [ 2 (1-3 ^2)^2 ()^2 + 92 (1-^2)^2 (()^2 + ()^2) - 6 (1-3 ^2) (1-^2-2 ^2) ( + ) + 9 ((1-^2)^2-8 (1-^2-^2) ^2) ] + 38 k [ 4 (1-^2) ^2 ()^2 + (1-^2) (1-)^2 ()^2 + (1-^2) (1+)^2 ()^2 + 4 (1-) (1-^2-2 ^2) - 4 (1+) (1-^2-2 ^2) - 2 ((1-^2)^2-8 (1-^2-^2) ^2) ] + 332 k [ 4 (1-^2)^2 ()^2 + (1-)^4 ()^2 + (1+)^4 ()^2 + 4 (1-)^2 (1-^2-2 ^2) + 4 (1+)^2 (1-^2-2 ^2) + 2 ((1-^2)^2-8 (1-^2-^2) ^2) ] } . For the rotation rate evolution, we have from expressions (<ref>) and (<ref>), = - ·/C = - T_1 + T_2 + T_4 /C , 211028a that is, using expressions (<ref>), (<ref>), and (<ref>), 211028b = - /C{316 [ (1-)^2 (1-^2) ()^2 + (1+)^2 (1-^2) ()^2 - 2 ((1-^2)^2-8 (1-^2-^2) ^2) + 4 ^2 (1-^2) ()^2 + 4 (1-) (1-^2-2 ^2) - 4 (1+) (1-^2-2 ^2) ] + 332 [ (1-)^4 ()^2 + (1+)^4 ()^2 + 2 ((1-^2)^2-8 (1-^2-^2) ^2) + 4 (1-^2)^2 ()^2 + 4 (1-^2-2 ^2) (1-)^2 + 4 (1-^2-2 ^2) (1+)^2 ] } . The obliquity (or inclination) evolution is given from expressions (<ref>), (<ref>) and (<ref>), 211028zθ̇ = · - · cosθ/sinθ√(·) - · - · cosθ/sinθ√(·) = T_1 sin^2 θ - T_4 - T_5 /C sinθ - T_2 sin^2 θ + T_4 + T_5 /β√(μ a (1-e^2))sinθ . We observe that there are two distinct contributions to the evolution of this angle. When | | ≪ | |, the evolution is dominated by the first term, and θ̇ is identified as a variation of the obliquity. This is, for instance, the case of a planet around a star. On the other hand, when | | ≫ | |, the evolution is dominated by the second term, and θ̇ is recognised as a variation in the orbital inclination. This is, for instance, the case of a small satellite close to its planet. Previous studies often neglect one of these two contributions, and therefore, the evolution of this angle is incomplete. Finally, we can also obtain the evolution of the precession angles, that is, the angular velocity of the longitude of the node, Ω̇, and the precession speed of the spin axis, ψ̇. The line of nodes is aligned with the vector (Fig. <ref>) and thus Ω̇= /||· = ·/|| = T_3 sin^2 θ - T_4 + T_5 /β√(μ a (1-e^2))sinθ , 211029aψ̇= /||· = - ·/|| = - T_3 sin^2 θ - T_4 + T_5 /C sinθ . 211029b §.§ Energy dissipation The total energy released inside the body due to tides is given by Ė = - (Ė_ orb + Ė_ rot ) , 211029d where E_ orb = - βμ/2 a and E_rot = ·/2211029e are the orbital energy, and the rotational energy, respectively. Then Ė_ orb = βμ/2 a^2ȧ and Ė_ rot = C , 211029f where ȧ and are given by expressions (<ref>) and (<ref>), respectively. § AVERAGE OVER THE ARGUMENT OF THE PERICENTRE doubav Although we usually have ≪ n, the argument of the pericentre can often also be considered a fast varying angle when compared to the secular tidal evolution of the remaining spin and orbital elements. This is particularly true when additional sources of precession are taken into account in the problem, such as the rotational deformation, general relativity corrections and additional perturbing bodies in the system. As a result, we can perform a second average of the equations of motion, this time over the angle . Adopting again as example the last term in the component of the tidal torque, we have (Eq. 
(<ref>)) ⟨ - 3 /r^3 I_23⟩_M, = 3 ^2 R^5/16 a^6sin^2 θ[ (1-x) (X_k^-3,-2)^2 + (1+x) (X_k^-3,2)^2 ] , 211103f which considerably simplifies the expression of the equations of motion. In this particular case, we note that there is no longer the contribution from . Another simplification is that we do not need to follow the evolution of the Laplace vector anymore. Indeed, the pericentre is no longer defined when we average over , and the only projection of interest is the one giving the evolution of the eccentricity, . Therefore, the equations of motion in this simplified case can be given by the torque together with ė. Alternatively, we prefer to use the evolution of the orbital energy, since it can be obtained directly from the potential energy (Eq. (<ref>)) and thus provides a simpler expression (from which we can later easily derive ė). §.§ Tidal torque When we average the tidal torque (Eq. (<ref>)) over , the projections that depend on the Laplace vector average to zero and thus ⟨⟩_M, = T_1 + T_2 + T_3 × , 211110a with 211110t1T_1 = - {932 [ (1-^2) (()^2 - ()^2) ] + 316 [ 4 ^3 ()^2 + (1-)^2 (2+) ()^2 - (1+)^2 (2-) ()^2 ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 - (1+)^3 ()^2 ] } , 211110t2T_2 = {932 [ (1-^2) (()^2 - ()^2) ] + 316 [ 4 ^2 ()^2 + (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 + (1+)^3 ()^2 ] } , 211110t3T_3 = - {332 [ 4 (1-3 ^2) ()^2 + 3 (1-^2) (()^2 + ()^2) ] - 316 [ 4 (1-2 ^2) ()^2 - (1-)^2 (1+2 ) ()^2 + (1+)^2 (1-2 ) ()^2 ] + 332 [ 4 (1-^2) ()^2 + (1-)^3 ()^2 - (1+)^3 ()^2 ] } . <cit.>[Note that the b(σ) functions have a slightly different definition. In <cit.> it is defined as the imaginary part of k̂_2 (σ), while in our case it is defined as the opposite of it (Eq. (<ref>)).] obtained an equivalent expression for the double averaged tidal torque (Eq.(<ref>)). However, they only consider the contribution to the spin evolution (Eq. (<ref>)), while here, we also apply the torque to the orbital evolution (Eq. (<ref>)). Moreover, to obtain the complete tidal evolution we additionally need to consider the evolution of the orbital energy (Eq. (<ref>)). §.§ Orbital energy The evolution of the orbital energy is obtained from the work of the tidal force (Eq. (<ref>)) as Ė_ orb = ⟨·⟩_M, = - ⟨∂ U/∂ t⟩_M, . 211110b The first approach provides the Hansen coefficients in the format X_k^-4,m, which can be put into the format X_k^-3,m using the relations provided in appendix <ref>. The second approach is easier to compute and already provides the Hansen coefficients in the format X_k^-3,m. Then, 211110t4Ė_ orb = n {164 k [ 4 (1-3 ^2)^2 ()^2 + 9 (1-^2)^2 (()^2 + ()^2) ] + 316 k (1-^2) [ 4 ^2 ()^2 + (1-)^2 ()^2 + (1+)^2 ()^2 ] + 364 k [ 4 (1-^2)^2 ()^2 + (1-)^4 ()^2 + (1+)^4 ()^2 ] } . §.§ Orbital and spin evolution oase2 The set of equations (<ref>) and (<ref>) allow us to track the evolution of the averaged system using the angular momentum vectors and the orbital energy. As in Sect. <ref>, we can relate these quantities with the orbital and spin parameters. The semi-major axis is directly given from the orbital energy a = - βμ/2 E_ orb , 211110f and the eccentricity from the orbital angular momentum (Eq. (<ref>)) e = √(1 - ·/β^2 μ a) . 211110g The rotation rate, , is obtained from the rotational angular momentum (Eq. (<ref>)), and the angle between the orbital and equatorial planes, θ, from both angular momentum vectors (Eq. (<ref>)). As in Sect. <ref>, we can also obtain the explicit evolution of all these quantities. 
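As a practical note, recovering the elements from the averaged state variables amounts to one line each; a minimal sketch (names are illustrative, and the rotation rate is written as spin_rate):

import numpy as np

def elements_from_state(G_vec, L_vec, E_orb, beta, mu, C):
    # a from the orbital energy, e from |G|, the rotation rate from |L|,
    # and theta from the angle between the two angular momentum vectors.
    G = np.linalg.norm(G_vec)
    L = np.linalg.norm(L_vec)
    a = -beta * mu / (2.0 * E_orb)
    e = np.sqrt(max(0.0, 1.0 - G**2 / (beta**2 * mu * a)))
    spin_rate = L / C
    cos_theta = np.dot(G_vec, L_vec) / (G * L)
    return a, e, spin_rate, np.arccos(np.clip(cos_theta, -1.0, 1.0))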
The semi-major axis evolution is given from expression (<ref>) ȧ = 2 a^2/βμ Ė_ orb , 211110c while the eccentricity evolution can be computed from expressions (<ref>), (<ref>) and (<ref>) as ė = 1-e^2/2 a e ȧ - ·/β^2 μ a e = /β n a^2 e( /n Ė_ orb - T_1 - T_2 ) , 211110d that is, using expressions (<ref>), (<ref>), and (<ref>), 211110eė = /e{164 [ 4 (1-3^2)^2 ()^2 k + 9 (1-^2)^2 ()^2 (2+ k ) - 9 (1-^2)^2 ()^2 (2- k ) ] + 316 [ 4 (1-^2) ^2 ()^2 k + (1-^2) (1-)^2 ()^2 (2+k ) - (1-^2) (1+)^2 ()^2 (2-k ) ] + 364 [ 4 (1-^2)^2 ()^2 k + (1-)^4 ()^2 (2+k ) - (1+)^4 ()^2 (2-k ) ] } . For the rotation rate evolution, we have from expressions (<ref>) and (<ref>), = - ·/C = - T_1 + T_2/C , 211110h that is, using expressions (<ref>) and (<ref>) 211110t5 = - /C{316 (1-^2) [ 4 ^2 ()^2 + (1-)^2 ()^2 + (1+)^2 ()^2 ] + 332 [ 4 (1-^2)^2 ()^2 + (1-)^4 ()^2 + (1+)^4 ()^2 ] } . The obliquity (or inclination) evolution is given from expressions (<ref>) and (<ref>), θ̇= ( T_1/C - T_2/β√(μ a (1-e^2))) sinθ . 211110i Finally, for the angular velocity of the longitude of the node and for the precession speed of the spin axis, we get from expressions (<ref>), (<ref>) and (<ref>), respectively, Ω̇= · (×)/|| sinθ = T_3 sinθ/β√(μ a (1-e^2)) , 211110jψ̇= - · (×)/|| sinθ = - T_3 sinθ/C . 211110k §.§ Energy dissipation The total energy released inside the body due to tides is given by expressions (<ref>) and (<ref>). When we average over the argument of the pericentre, by combining expressions (<ref>) and (<ref>) we obtain for the total energy dissipated 211110lĖ = {164 (-k n) [ 4 (1-3 ^2)^2 ()^2 + 9 (1-^2)^2 (()^2 + ()^2) ] + 316 (-k n) (1-^2) [ 4 ^2 ()^2 + (1-)^2 ()^2 + (1+)^2 ()^2 ] + 364 (2 -k n) [ 4 (1-^2)^2 ()^2 + (1-)^4 ()^2 + (1+)^4 ()^2 ] } . § PLANAR CASE planarcase The final outcome of tidal dissipation is the alignment of the spin axis with the normal to the orbit <cit.>. Therefore, in order to simplify the equations of motion, many works assume that this alignment is always present, that is, the motion is planar (θ = 0). Indeed, in this case we have = and thus (Eq. (<ref>)): = 1 , = 0 , and = 0 . 211028g §.§ Tidal torque Using the simplifications (<ref>) in expressions (<ref>) and (<ref>) yields T_4 = T_5 = 0. In addition, since we also have × = 0, we get for the average tidal torque (Eq. (<ref>)) ⟨⟩_M = (T_1 + T_2) = T_s , 211028h with T_s = 32 ()^2 . 211028j We note that, since = = 0, the expression of the tidal torque is the same whether we perform a single average over the mean anomaly (Eq. (<ref>)) or if we additionally average over the argument of the pericentre (Eq. (<ref>)). §.§ Laplace vector Using the simplifications (<ref>) in expression (<ref>) yields = 0. The averaged Laplace vector then becomes (Eq. (<ref>)) = ė + × , 211029z with ė = /4 e{()^2 k - 3 ()^2 (2-k ) } , 211029y and = /e^2 {316 [ 2 e^2 ()^2 + e^2 ( + ) + 2 e ( + ) ] - 116 [ ( 12 (2-k (1-e^2)^3/2)- 9 e^2) ()^2 + 3 e^2 + ( 4 k (1-e^2)^3/2 -6 e^2 ) +6 e ( + ) ] } . We note that in the planar case, ė only depends on b(σ), and only depends on a(σ). Moreover, the expression of ė does not change if we further average over the argument of the pericentre. §.§ Orbital and spin evolution For the semi-major axis, we have from expressions (<ref>), (<ref>) and (<ref>), ȧ/a = k/2[ ()^2 + 3 ()^2 ] , 211103a while for the rotation rate, we get from expressions (<ref>) and (<ref>), = - /C32 ()^2 . 211103b Again, these expressions do not change if we further average over the argument of the pericentre. 
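All the averaged rates above, for the single and the double average alike, are built from squares and products of the Hansen coefficients X_k^{-3,m}(e). As a purely numerical cross-check, these coefficients can be evaluated from their Fourier definition, (r/a)^n e^{imv} = Σ_k X_k^{n,m}(e) e^{ikM}, by solving Kepler's equation and averaging over the mean anomaly. The Python sketch below is illustrative only (function names and grid size are arbitrary choices); it verifies the classical value X_0^{-3,0} = (1-e^2)^{-3/2} and the symmetry X_k^{ℓ,-m} = X_{-k}^{ℓ,m} used in appendix A.

import numpy as np

def kepler_E(M, e, tol=1e-12):
    # solve Kepler's equation E - e*sin(E) = M by Newton iteration
    E = np.array(M, dtype=float)
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def hansen(n, m, k, e, nM=4096):
    # X_k^{n,m}(e) = (1/2pi) * int_0^{2pi} (r/a)^n cos(m*v - k*M) dM
    M = 2.0 * np.pi * np.arange(nM) / nM
    E = kepler_E(M, e)
    r_a = 1.0 - e * np.cos(E)                                   # r/a
    v = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                         np.sqrt(1.0 - e) * np.cos(E / 2.0))    # true anomaly
    return np.mean(r_a**n * np.cos(m * v - k * M))

e = 0.2
print(hansen(-3, 0, 0, e), (1.0 - e**2)**-1.5)      # X_0^{-3,0} = (1-e^2)^{-3/2}
print(hansen(-3, 2, 1, e), hansen(-3, -2, -1, e))   # symmetry X_k^{l,-m} = X_{-k}^{l,m}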
As expected, the evolution of the obliquity (or inclination) is simply given by θ̇=0 (Eq. (<ref>)), since T_4 = T_5 = 0 and sinθ = 0, that is, the motion remains planar. §.§ Energy dissipation The orbital and rotational energy variations can be obtained from expression (<ref>) together with expressions (<ref>) and (<ref>), respectively Ė_ orb = kn/4[ ()^2 + 3 ()^2 ] , 211103cĖ_ rot = - 3 2 ()^2 . 211103d The total energy released inside the body due to tides is then (Eq. (<ref>)) Ė = 1/4[ (-kn) ()^2 + 3 (2 - kn) ()^2 ] . 211103e § LINEAR MODEL linearapprox In Sect. <ref>, we present the tidal models most commonly used in the literature, and how they can be expressed in terms of the second Love number, k̂_2 (σ). The linear or weak friction model is widely used because it is an approximation of any viscoelastic model for small relaxation times (στ≪ 1) and provides simple expressions for the equations of motion that are valid for any eccentricity value. Indeed, since a(σ) is constant and b(σ) ∝σ (Eq. (<ref>)), the sum of the Hansen coefficients products can be evaluated as (see appendix <ref>): X_k^-3,m X_k^-3,n = X_0^-6,m-n , k X_k^-3,m X_k^-3,n = /2 (m+n) X_0^-8,m-n , k^2 X_k^-3,m X_k^-3,n = 9 ( 2 X_0^-9,m-n - X_0^-8,m-n) + ( 3/8 (m-n)^2 + m n - 9 ) (1-e^2) X_0^-10,m-n , where the coefficients X_0^-ℓ,m can be obtained as exact functions of the eccentricity (Eq. (<ref>)) 211128b X_0^-ℓ,m (e) = 1/2 π∫_-π^π( a/r)^ℓ^ m d M = 1/π (1-e^2)^ℓ-3/2∫_0^π( 1+e cos)^ℓ-2cos (m ) d = 1/(1-e^2)^ℓ-3/2∑_k=0^(ℓ-m-2)/2(ℓ-2)!/k! (m+k)!(ℓ-2-m-2k)!( e/2)^m+2k . We provide the explicit expression of all needed X_0^-ℓ,m (e) coefficients in Table <ref>. The equations of motion that we obtain here are in perfect agreement we those derived in <cit.> and <cit.> using a different approach, and so for clearness reasons we keep the exact same notations for the eccentricity functions. §.§ Average over the mean anomaly §.§.§ Tidal torque The averaged tidal torque is given by expression (<ref>). In the linear approximation, the coefficients T_3 = T_5 = 0, and thus, we obtain 211130a⟨⟩_M = [ ( f_4(e) /2ncosθ - f_2(e) ) + ( f_1(e) - 1/2 f_4(e) ) /n + ( f_4(e) - f_1(e) ) /n (·) ] , with f_1(e) = X_0^-6,0 (e) = 1 + 3 e^2 + 3/8 e^4/(1-e^2)^9/2 , 211130e f_2(e) = X_0^-8,0 (e) = 1 + 15/2 e^2 + 45/8 e^4 + 5/16 e^6/(1-e^2)^6 , 211130f f_4 (e) = X_0^-6,0 (e) - X_0^-6,2 (e)/ = 1 + 3/2 e^2 + 1/8 e^4/(1-e^2)^5 , 211130g and = 3 n Δ t = 3 ^2 R^5/a^6 n Δ t . 211130h §.§.§ Laplace vector The averaged Laplace vector is given by expression (<ref>). In the linear approximation, we obtain 211201a⟨⟩_M = [ ( 11 f_4(e) /2 ncosθ - 9 f_5(e) ) - f_4(e) /2n (·) ] + 15/2 f_4(e) × , with f_5 (e) = X_0^-8,0 (e) - X_0^-8,2 (e) = 1 + 15/4 e^2 + 15/8 e^4 + 5/64 e^6/(1-e^2)^13/2 , 211201b and = 3 n Δ t = /β n a^2 = 3 ^2 R^5/β a^8 Δ t . 211201c §.§.§ Orbital evolution The evolution of the eccentricity and the argument of the pericentre in the linear approximation are already given by the first and third terms of the Laplace vector (Eq. (<ref>)), respectively. For the semi-major axis, we have from expression (<ref>) ȧ/a = 2 [ f_2 (e) /ncosθ - f_3 (e) ] , 211201d with f_3 (e) = 6 X_0^-9,0 (e) - 3 X_0^-8,0 (e) - 2 (1-e^2) X_0^-10,0 (e) = 1 + 31/2 e^2 + 255/8 e^4 + 185/16 e^6 + 25/64 e^8/(1-e^2)^15/2 . 211201e §.§ Average over the argument of the pericentre The eccentricity evolution (Eq. (<ref>), first term) and the semi-major axis evolution (Eq. (<ref>)) do not depend on the pericentre. In addition, the evolution of the pericentre (Eq. 
(<ref>), last term) does not depend on the dissipation (Δ t), and therefore evolves on a shorter timescale than the orbit. As a result, we can further average the equations of motion over the argument of the pericentre in order to get simpler expressions for the torque and spin evolution. §.§.§ Tidal torque The averaged tidal torque is now given by expression (<ref>). In the linear approximation, it becomes ⟨⟩_M, = [ f_1(e) /2n( + cosθ ) - f_2(e) ] . 211201f §.§.§ Spin evolution For the rotation rate, we get from expression (<ref>) = - /C[ f_1(e) /2n( 1 + cos^2 θ) - f_2(e) cosθ] , 211201f and for the obliquity (or inclination) evolution we have from expression (<ref>), θ̇= /C [f_1(e) /2ncosθ - f_2(e) ] sinθ - /√(1-e^2) f_1(e) /2nsinθ . 211206a §.§ Energy dissipation The orbital and rotational energy variations can be obtained from expression (<ref>) together with expressions (<ref>) and (<ref>), respectively. The total energy released inside the body due to tides is then (Eq. (<ref>)) Ė = n [ 1/2 f_1(e) ( /n)^2 ( 1 + cos^2 θ) - 2 f_2(e) /ncosθ + f_3(e) ] . 211206d § CONCLUSION sectconc In this paper, we revisit the tidal evolution of a body disturbed by a point mass companion. We derive the equations of motion in a vectorial formalism, where the basis depend only on the unit vectors of the spin and orbital angular momenta and on the Laplace unit vector. These vectors are related to the spin and orbital quantities, thus easy to obtain and independent of the chosen frame. We provide the expressions of the equations of motion for a single average over the mean anomaly, and also for an additional average over the argument of the pericentre. We show that in both cases, the equations depend only on series of Hansen coefficients, that are widely used in celestial mechanics. Our method is valid for any rheological model, which appears in the equations of motion through the second Love number. In our model, we use the quadrupolar approximation to obtain the tidal potential (Eq. (<ref>)), that is, we neglect terms in (R/r)^3. This approximation is usually suitable to study the long-term evolution of a large variety of systems, such as planet-satellite, planet-star or stellar binaries. However, for extremely close-in bodies with very asymmetric shapes, such as Phobos (the main satellite of Mars) or binary asteroids, a high precision model of its tidal dynamics may require to include octupole or higher order terms in the tidal potential, as well as the knowledge of higher degree Love numbers <cit.>. In a more general two body problem, both bodies are expected to undergo tidal evolution. As long as we keep the quadrupolar approximation, the cross terms of interaction can be neglected <cit.>. As a result, we only need to take into account a second contribution to the potential energy (Eq. (<ref>)), where is replaced by , and is replaced by _0 (which pertains to the body with mass ). Therefore, we get additional similar expressions for the tidal force (Eq. (<ref>)) and for the torque (Eq. (<ref>)), where (, , ) are replaced by (_0, _0, _0) as they now also correspond to the body with mass . The equations of motion for the spin of are thus analogous to those for (Eq. (<ref>)), while for the equations of motion for the orbit (Eqs. (<ref>) and (<ref>)) we only need to add the contributions from the two bodies. The vectorial formalism presented in this paper is well suited to study the long-term evolution of celestial bodies. 
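For reference, the eccentricity functions f_1(e)-f_5(e) of the linear model quoted above can be written down directly. The short Python sketch below implements their closed rational forms and cross-checks f_1(e) = X_0^{-6,0}(e) = ⟨(a/r)^6⟩_M by numerical quadrature over the eccentric anomaly, using dM = (1 - e cos E) dE; the helper names are illustrative.

import numpy as np
from scipy.integrate import quad

def f1(e): return (1 + 3*e**2 + (3/8)*e**4) / (1 - e**2)**4.5
def f2(e): return (1 + (15/2)*e**2 + (45/8)*e**4 + (5/16)*e**6) / (1 - e**2)**6
def f3(e): return (1 + (31/2)*e**2 + (255/8)*e**4 + (185/16)*e**6 + (25/64)*e**8) / (1 - e**2)**7.5
def f4(e): return (1 + (3/2)*e**2 + (1/8)*e**4) / (1 - e**2)**5
def f5(e): return (1 + (15/4)*e**2 + (15/8)*e**4 + (5/64)*e**6) / (1 - e**2)**6.5

def x0_60(e):
    # <(a/r)^6>_M = (1/2pi) int_0^{2pi} (1 - e*cos E)^{-5} dE, since (a/r)^6 dM = (1 - e*cos E)^{-5} dE
    return quad(lambda E: (1.0 - e*np.cos(E))**-5, 0.0, 2.0*np.pi)[0] / (2.0*np.pi)

e = 0.4
print(f1(e), x0_60(e))                              # the two values should agree
print(f1(0.0), f2(0.0), f3(0.0), f4(0.0), f5(0.0))  # all reduce to 1 for a circular orbit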
In addition to tidal effects, we usually need to consider the rotational deformation and general relativity corrections <cit.>. For multi-body systems, the secular interactions can be obtained either by developing the perturbing functions in terms of Legendre polynomials, suited for hierarchical systems <cit.>, or in terms of Laplace coefficients, suited for non-resonant compact systems <cit.>. We thank G. Boué for discussions. We are grateful to two anonymous referees for their insightful comments. This work was supported by CFisUC (UIDB/04564/2020 and UIDP/04564/2020), GRAVITY (PTDC/FIS-AST/7002/2020), PHOBOS (POCI-01-0145-FEDER-029932), and ENGAGE SKA (POCI-01-0145-FEDER-022217), funded by COMPETE 2020 and FCT, Portugal. § HANSEN COEFFICIENTS RELATIONS hcr From the definition of the Hansen coefficients (Eq. (<ref>)), a number of recurrence relations can be obtained. In this work we use the following ones <cit.>: X_k^ℓ,-m = X_-k^ℓ,m , 210915b (1-e^2) X_k^ℓ,m = X_k^ℓ+1,m + e/2[ X_k^ℓ+1,m-1 + X_k^ℓ+1,m+1] , 210915d 210915c k X_k^ℓ,m = m (1-e^2) X_k^ℓ-2,m + e ℓ/2 [ X_k^ℓ-1,m-1 - X_k^ℓ-1,m+1] = m X_k^ℓ-1,m + e/2[ (m+ℓ) X_k^ℓ-1,m-1 + (m-ℓ) X_k^ℓ-1,m+1] . The first relation (Eq. (<ref>)) allow us to obtain the coefficients X_k^-3,-1 and X_k^-3,-2 from the coefficients X_k^-3,1 and X_k^-3,2, respectively (Table <ref>). It also allow us to obtain any other coefficient with m<0 from X_k^ℓ,m with m>0. The second relation (Eq. (<ref>)) with ℓ = -4 provides 210924a m=0 ⇒ (1-e^2) X_k^-4,0 = e/2 X_k^-3,-1 + X_k^-3,0 + e/2 X_k^-3,1 , m=1 ⇒ (1-e^2) X_k^-4,1 = e/2 X_k^-3,0 + X_k^-3,1 + e/2 X_k^-3,2 , m=2 ⇒ (1-e^2) X_k^-4,2 = e/2 X_k^-3,1 + X_k^-3,2 + e/2 X_k^-3,3 . Finally, from the last relation (Eq. (<ref>)) with ℓ = -3 we get 210924b m=0 ⇒ √(1-e^2) k X_k^-3,0 = 3/2 e ( X_k^-4,1 - X_k^-4,-1) , m=1 ⇒ √(1-e^2) k X_k^-3,1 = 2 e X_k^-4,2 + X_k^-4,1 - e X_k^-4,0 , m=2 ⇒ √(1-e^2) k X_k^-3,2 = 5/2 e X_k^-4,3 + 2 X_k^-4,2 - 1/2 e X_k^-4,1 . Using these sets of relations, it is possible to express all Hansen coefficients appearing in this work solely as functions of X_k^-3,0, X_k^-3,1, and X_k^-3,2, using the following sequence: 210924c X_k^-4,3 = 1/5e[ e X_k^-4,1 - 4 X_k^-4,2 + 2 √(1-e^2) k X_k^-3,2] , X_k^-3,3 = 1/e[ 2 (1-e^2) X_k^-4,2 - 2 X_k^-3,2 - e X_k^-3,1] , X_k^-4,2 = 1/2e[ e X_k^-4,0 - X_k^-4,1 + √(1-e^2) k X_k^-3,1] , X_k^-4,1 = 1/1-e^2[ e/2 X_k^-3,2 +X_k^-3,1+e/2 X_k^-3,0] , X_k^-4,0 = 1/1-e^2[ e/2 X_k^-3,-1 +X_k^-3,0 +e/2 X_k^-3,1] . § HANSEN COEFFICIENTS COMBINATIONS hcsum We let = ( r/a)^ℓ^ m , and = ( r/a)^ℓ^- n , 211124d with derivatives, respectively, ' = - d /d M = m ( r/a)^ℓ-2^ m v - e ℓ( r/a)^ℓ-1^ (m+1) - ^ (m-1) /2 , 211124e' = d /d M = n ( r/a)^ℓ-2^- n - e ℓ( r/a)^ℓ-1^- (n+1) - ^- (n-1) /2 . 211124f Using the definition of the Hansen coefficients (Eq. 
(<ref>)) we have 211124g⟨⟩_M = X_k^ℓ,m X_k^ℓ,n = ⟨( r/a)^2ℓ^ (m-n) ⟩_M = X_0^2ℓ,m-n , 211124h⟨' ⟩_M = k X_k^ℓ,m X_k^ℓ,n = ⟨ m ( r/a)^2ℓ-2^ (m-n) - e ℓ/2 ( r/a)^2ℓ-1( ^ (m-n+1) - ^ (m-n-1) ) ⟩_M = m X_0^2ℓ-2,m-n - e ℓ/2 ( X_0^2ℓ-1,m-n+1 - X_0^2ℓ-1,m-n-1) = m+n/2 X_0^2ℓ-2,m-n , 211124j⟨' ' ⟩_M = k^2 X_k^ℓ,m X_k^ℓ,n = ⟨ℓ^2 e^2/4 (1-e^2)( r/a)^2ℓ-2( 2 ^ (m-n) - ^ (m-n+2) - ^ (m-n-2) ) ⟩_M + ⟨ (m-n) ℓ e/2( r/a)^2ℓ-3( ^ (m-n+1) - ^ (m-n-1) ) + m n (1-e^2) ( r/a)^2ℓ-4^ (m-n) ⟩_M = ℓ^2 e^2/4 (1-e^2)( 2 X_0^2l-2,m-n - X_0^2l-2,m-n+2 - X_0^2l-2,m-n-2) + (m-n) ℓ e/2( X_0^2l-3,m-n+1 - X_0^2l-3,m-n-1) + m n (1-e^2) X_0^2l-4,m-n = ℓ^2 ( 2 X_0^2l-3,m-n - X_0^2l-2,m-n) + ( ℓ (m-n)^2/2ℓ-2 + m n - ℓ^2 ) (1-e^2) X_0^2l-4,m-n , where to simplify expression (<ref>) we used equation (<ref>), and to simplify expression (<ref>) we used equations (<ref>) and (<ref>) together with <cit.>: (1-e^2)^2 X_k^ℓ,m = (1+e^2/2) X_k^ℓ+2,m + e [ X_k^ℓ+2,m+1 + X_k^ℓ+2,m-1] + e^2/4[ X_k^ℓ+2,m+2 + X_k^ℓ+2,m-2] . 211128a spbasic Gravitational tides are raised on a body by the star because of the gravitational gradient across the central body. Tidal effects arise from differential and inelastic deformation of a body by an external companion. Tidal effects result from a differential attraction of the gravitational force acting in accordance with laws of mechanics. The constant time lag model has been widely used because of its intuitive physical interpretation and also because the analytical expressions of the tidal force and torque expanded in first order in Δ t are very compact and not truncated in eccentricity <cit.>. the computation of secular equations. Let F(r,r) be a function of a position vector r and velocity r, its averaged expression over the mean anomaly (M) is given by F_M = 1/2π∫_0^2π F(r, r) d M . Depending on the case, this integral is computing using the eccentric anomaly (E), or the true anomaly (v) as an intermediate variable. The basic formulae are d M = r/ad E = r^2/a^2√(1-e^2)d v , r = a(cos E-e) e + a√(1-e^2)(sin E) k×e , r = rcos v e + rsin v k×e , r = na/√(1-e^2) k× (r + e) , r = a (1-ecos E) = a(1-e^2)/1+ecos v , where k is the unit vector of the orbital angular momentum, and e the Laplace-Runge-Lenz vector (Eq. <ref>). We have then 1/r^3 = 1/a^3(1-e^2)^3/2 , and rr/r^5 = 1/2a^3(1-e^2)^3/2(1-kk) , where u denotes the transpose of any vector u. This leads to 1/r^3P_2(r·u) = -1/2a^3(1-e^2)^3/2 P_2(k·u) , for any unit vector u. In the same way, r^2 = a^2(1+3/2e^2) , and rr = a^21-e^2/2(1-kk) + 5/2 a^2 ee , give r^2 P_2(r·u) = -a^2/2((1-e^2)P_2(k·u) - 5 e^2 P_2(e·u)) . The other useful formulae are 1/r^6 = 1/a^6 f_1(e) , 1/r^8 = 1/a^8√(1-e^2) f_2(e) , rr/r^8 = √(1-e^2)/2a^6 f_4(e) (1-kk) +6+e^2/4a^6(1-e^2)^9/2ee , r/r^8 = 5/21/a^7√(1-e^2) f_4(e) e , r/r^10 = 7/21/a^9(1-e^2) f_5(e) e , (r·r) r/r^10 = n/2a^7 √(1-e^2) f_5(e) k×e , where the f_i (e) functions are given by expressions (<ref>) to (<ref>). Finally, for the average over the argument of the pericentre (w), we can proceed in an identical manner: ee_w = 1/2 π∫_0^2 πee d w = e^2/2( 1 - kk) , which gives ( e·u) e_w = e^2/2( u - ( k·u ) k) . (, , ) = - 3 /r^3×[ (I_22-I_11) + (I_33-I_11) + I_12 ( + ) + I_13 ( + ) + I_23 ( + ) ] , 151028d or = - 3 /r^3[[ (I_33-I_22) - I_12 + I_13 + I_23 (^2 - ^2) (I_11-I_33) + I_12 + I_13 (^2 - ^2) - I_23(I_22-I_11) + I_12 (^2 - ^2) - I_13 + I_23 ]] . 
http://arxiv.org/abs/2306.05711v1
20230609071148
Deformation-induced homogenization of the multi-phase senary high-entropy alloy MoNbTaTiVZr processed by high-pressure torsion
[ "Chuyi Duan", "Aleksander Kostka", "Xiaohu Li", "Zirong Peng", "Peter Kutlesa", "Reinhard Pippan", "Ewald Werner" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
affiliation_a]Chuyi Duancor1 [cor1]Corresponding author. [email protected] affiliation_b]Aleksander Kostka affiliation_c]Xiaohu Li affiliation_d]Zirong Peng affiliation_e]Peter Kutlesa affiliation_e]Reinhard Pippan affiliation_a]Ewald Werner [affiliation_a]organization=Institute of Materials Science, Technical University of Munich, addressline=Boltzmannstr. 15, city=Garching, postcode=85748, country=Germany [affiliation_b]organization=Center for Interface-Dominated High Performance Materials (ZGH), addressline=Universitätsstr. 150, city=Bochum, postcode=44780, country=Germany [affiliation_c]organization=Heinz Maier-Leibnitz Zentrum (MLZ), Technical University of Munich, addressline=Lichtenbergstr. 1, city=Garching, postcode=85748, country=Germany [affiliation_d]organization=Carl Zeiss Microscopy GmbH, addressline=Kistlerhofstr. 75, city=München, postcode=81379, country=Germany [affiliation_e]organization=Erich Schmid Institute of Materials Science, Austrian Academy of Sciences, addressline=Jahnstr. 12, city=Leoben, postcode=8700, country=Austria Dendritic microstructures are frequently observed in as-solidified refractory high-entropy alloys (RHEAs), and their homogenization typically requires a long-term heat treatment at extremely high temperatures. High-pressure torsion (HPT) has been shown to be capable of mixing immiscible systems at room temperature, and therefore represents a promising technique for homogenizing dendritic RHEAs. In this work, the as-solidified RHEA MoNbTaTiVZr was processed up to 40 revolutions by HPT. It was found that the dendritic microstructure was eliminated, resulting in a chemical homogeneity at a von Mises equivalent shear strain of about 400. The study of deformation mechanism showed an initial strain localization, followed by a co-deformation of the dendritic and interdendritic regions. In the co-deformation step, the Zr-rich interdendritic region gradually disappeared. The deformation-induced mixing also led to the formation of an ultra-fine grained (UFG) microstructure, exhibiting a grain size of approximately 50 nm. The microhardness increased from 500 HV in the as-solidified to 675 HV in the homogenized UFG state. The underlying mechanisms responsible for the microhardness enhancement, such as grain refinement and solid solution strengthening, were also discussed. High-entropy alloy Severe plastic deformation Chemical homogenization Published in Mater. Sci. Eng. A 871 (2023) 144923 DOI: <https://doi.org/10.1016/j.msea.2023.144923> § INTRODUCTION High entropy alloys (HEAs) which consist of multiple principal elements have attracted tremendous research attention due to their superior mechanical properties <cit.>. The high configurational entropy in HEAs makes it thermodynamically preferable to form single-phase alloys <cit.>. However, with the addition of alloying elements, not only the configurational entropy is increased, but also the interactions between elements in the alloy become more complex, which may lead to the formation of additional phases <cit.>. The rapidly developing aerospace industry requires increasingly high performance of materials at high temperatures, and these requirements are already difficult to meet with conventional Ni-based superalloys <cit.>. Refractory HEAs (RHEAs) are gradually coming into attention as high temperature materials, with their main elements being predominantly refractory elements. 
The complexity of the elemental composition in RHEAs frequently leads to chemical segregation during solidification from the liquid-state <cit.>. This is often due to the large differences in the melting points of the different elements and their respective effects on liquid phase stabilization <cit.>. As a result, as-solidified RHEAs are often dendritic. There are different opinions on the phase composition of RHEAs containing dendrites. On the one hand, because only one phase peak is often seen in XRD, dendritic RHEAs are often considered to be single-phase as well, such as the original RHEA MoNbTaW proposed by Senkov <cit.>. On the other hand, Zhang et al. suggest that this chemical segregation is actually due to the existence of two phases <cit.>. High-pressure torsion (HPT) is a well-established technique of severe plastic deformation (SPD). It has been widely employed to study the strain hardening and grain refinement behavior of metallic materials, including single-phase HEAs <cit.>. Furthermore, dual-phase HEAs have also been investigated extensively due to their unique mechanical and physical properties. For example, some dual-phase HEAs consist of both body-centered cubic (BCC) and face-centered cubic (FCC) phases <cit.>, while others comprise a soft solid solution matrix and hard intermetallic phases <cit.>. In addition, the use of HPT has been extended for the fabrication of HEAs by mechanical alloying pure elemental powders <cit.>. Moreover, HPT has recently been used to study the forced formation of supersaturated solid solutions in immiscible systems, which has attracted considerable attention due to the potential for discovering new materials with improved properties <cit.>. The possibility of the forced formation of solid solution in immiscible systems using HPT brings up an important inquiry regarding the effectiveness of HPT-processing in eliminating the elemental segregation and achieving chemical homogeneity in RHEAs. Hence, a thorough investigation is necessary to shed light on this topic and advance our understanding of the properties and behavior of these materials. This work focuses on the elemental mixing mechanism during high-pressure torsion of a multi-phase, dendritic as-solidified, senary refractory high entropy alloy of MoNbTaTiVZr. The evolution of the microstructure, hardness and composition during the HPT enables to identify mechanisms governing the elemental mixing. § EXPERIMENTAL The equimolar RHEA MoNbTaTiVZr was prepared by arc melting of pure elements (purity higher than 99.5 wt.%) on a water-cooled copper hearth in an arc-melter (MAM-1, Edmund Bühler). The ingot was flipped and remelted at least 10 times to ensure chemical homogeneity. Before melting, the chamber of the arc-melter was evacuated and backfilled with high purity Argon (5N) for 5 times. A Ti-getter was used to further remove the oxygen. Disk-shaped samples with a diameter of 6 mm and a thickness of 0.8 mm were sliced from the ingot using electric discharge machining and a wire diamond saw. More details of the sample preparation can be found in <cit.>. The disks were subjected to HPT-processing using conventional Bridgman-anvils with a cavity height of 0.6 mm, under a quasi-hydrostatic stress of 7 GPa, for varied numbers of revolution N: 0.25, 2, 4, 10, 20, 30 and 40. The samples after HPT-processing will be denoted as N0.25, N2, N4, N10, N20, N30 and N40, respectively. The rate of processing was chosen as 0.2 revolutions per minute to avoid excessive generation of heat. 
A disk which was compressed without introducing shear strains, denoted as N0, was used for comparison. The HPT-disks were divided in half, and the surface defined by the axial and radial directions was examined using microscopic characterization, as shown in Fig. <ref>. Scanning electron microscopy (SEM) was conducted in the microscope JEOL JSM-6490 equipped with an energy-dispersive X-ray spectrometer (EDS) operated at 20 kV. The HPT-disks as well as as-solidified specimens were embedded, mechanically ground and finally polished with OPS (oxide polishing suspension, 0.25 µm). Transmission electron microscopy (TEM) was carried out in an aberration corrected microscope JEOL JEM-ARM200F operated at 200 kV. The TEM specimens were prepared using the focused ion beam (FIB) method in an FEI Helios G4 CX operated at 30 kV. Lamella specimens with a dimension of about 8×8×1 µm were extracted from the radial direction of HPT-disks using Ga^+ ion milling. A carbon protective deposition of about 3 µm was sputtered prior to thinning process. During the final step of sample thinning, the accelerating voltage was reduced to 8 kV in order to minimize the beam damage. Synchrotron high-energy X-ray diffraction (HEXRD) as well as laboratory XRD analysis were used to examine the phase composition. HEXRD was performed in transmission mode with a wavelength λ of 0.14235 Å (photon energy about 87 keV) and a wavelength of 0.124 Å (photon energy about 100 keV) at the High Energy Materials Science (HEMS) beamline at PETRA III at DESY in Hamburg, Germany. The beam size was set as 0.7×0.7 mm by adjusting the slit aperture. A PerkinElmer XRD 1622 flat panel detector was placed at a distance to sample of about 1400 mm to record the diffracted beam. Standard LaB_6 powder was used to calibrate the results. The recorded 2D diffraction patterns were integrated in the software FIT2D <cit.>. Rietveld refinement was conducted in the software MAUD <cit.>. Laboratory XRD was performed in a Brucker D8 diffractometer using Cu as anode material (wavelength λ 1.5406 Å) with a beam size of 0.5×0.5 mm. Due to the different wavelength of XRD applied in this paper, diffraction intensity was plotted against the wave vector Q=4πsinθ/λ instead of the diffraction angle θ for easier comparison. Vickers microhardness measurements were carried out using a Reichert-Jung micro-DUROMAT 4000E with a load of 100 g and a dwell time of 10 s. The results were averaged from five measurements along the axial (thickness) direction taken from the polished surface defined by the axial and radial directions. § RESULTS §.§ Initial microstructure prior to HPT-processing Fig. <ref> shows the XRD pattern and SEM micrograph of the dendritic microstructure of the as-solidified equimolar RHEA MoNbTaTiVZr. Two groups of BCC peaks with a pronounced asymmetry to the peaks of the major BCC phase are also visible. According to our previous study <cit.>, the RHEA has a grain size of approximately 100 µm and consists of dendritic regions depleted of Zr and interdendritic regions enriched in Zr. The SEM micrograph at large magnification (<ref>a in the appendix) shows the existence of V-rich inclusions with a size of about 500 nm to 1 µm distributed in the interdendritic region. An additional investigation on the influence of the alloying element V was conducted. By analyzing their diffraction peaks obtained from HEXRD (Fig. <ref>b in the appendix), the peaks of the minor phases left to the major BCC phase in the diffractogram are proved to belong to V-rich inclusions. 
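Since two synchrotron photon energies and Cu Kα radiation are combined here, plotting against the wave vector Q = 4π sin θ/λ puts all patterns on a common axis. The short Python sketch below illustrates the conversion; the photon-energy-to-wavelength constant hc ≈ 12.398 keV·Å is textbook, and the example Q value is purely illustrative, not a measured peak position.

import numpy as np

HC_KEV_ANGSTROM = 12.398   # hc, approximate

def wavelength_from_energy(E_keV):
    # lambda [Angstrom] = hc / E
    return HC_KEV_ANGSTROM / E_keV

def q_from_two_theta(two_theta_deg, lam):
    # Q = 4*pi*sin(theta)/lambda, theta being half the scattering angle
    theta = np.radians(two_theta_deg) / 2.0
    return 4.0 * np.pi * np.sin(theta) / lam

print(wavelength_from_energy(87.0), wavelength_from_energy(100.0))   # ~0.1425 and 0.124 Angstrom

# the same Q is reached at very different angles with synchrotron and Cu K-alpha radiation
Q_example = 2.7   # illustrative value in 1/Angstrom
for lam in (0.14235, 1.5406):
    two_theta = 2.0 * np.degrees(np.arcsin(Q_example * lam / (4.0 * np.pi)))
    print(lam, two_theta)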
Due to the low peak intensity of the minor phase, the diffraction intensity of RHEA was plotted logarithmically (Fig <ref>a) to allow an examination of the minor phase. Note that the synchrotron X-ray beam contains a second wavelength of about 10% total intensity. The resulting peaks are marked with "o" in the diffractogram. The plot shows that the V-rich inclusions also have a BCC crystal structure. In addition, an asymmetry on the left side of the major BCC phase peak was observed. The asymmetry is more obvious at large wave vectors Q (indicated by an arrow), which may suggest the existence of an additional group of peaks. By Rietveld refinement (Fig. <ref>, appendix), the additional group of peaks could significantly reduce the difference between the calculated profile and the experimental result. The additional peaks are assigned to the interdendrtic region and are also indexed as BCC phase as the asymmetry lies always between the peaks of the major and the minor BCC phases. It is assumed that the peaks of the dendritic and interdendritic regions are so close to each other that they cannot be distinguished in the diffractogram. Despite the influence of the V-rich inclusions distributed in the interdendritic region, the SEM-EDS result in Tab. <ref> can still provide a rough estimate of the chemical composition. Note that results of the inderdendritic region are obtained from positions containing as few V-rich inclusions as possible. §.§ Characterization of the HPT-processed RHEA §.§.§ Microhardness evolution The von Mises equivalent shear strain at a position r of HPT-disks can be stated as ε = 2π N r/√(3)t, 0≤ r ≤ R, where N is the number of revolutions, R is the radius of the disk and t is its thickness after HPT <cit.>. Due to the uncertainties of the sample dimension, e.g. inexact r after grinding and polishing in metallographic preparation and small variations of t along the radius, the strain value will be shown as an approximate value along with "∼" in this study. Fig. <ref> shows the microhardness evolution with increasing applied strain. Note that the height of the cavity inside the HPT-anvils is 0.6 mm, while the initial thickness of the samples is 0.8 mm. For comparison, the HPT-disk N0 which has only undergone compression and was not twisted in HPT-processing, is shown at ε = 0. Clearly, there are two saturation stages for the increase of microhardness. When the strain is small, the hardness increases rapidly to about 525 HV. This might be due to the deformation by the compression in HPT to allow for the required shear strain. Subsequent shearing leads to a gradual increase to the first saturation of about 550 HV at a strain of around 70. Thereafter, the hardness shows a drastic increase to about 625 HV at a strain of about 100, followed by a slow increase and a tendency to a saturation at about 675 HV at a strain of ε∼ 400. The first saturation stage is more clear when plotted over the logarithm of the applied strain as shown in Fig. <ref> in the appendix. §.§.§ Microstructural evolution The secondary electron (SE) images showing the microstructure at varied strains are presented in Fig. <ref>. In our configuration, the SE detector in the SEM also receives signals from back scattered electrons, and thus the Z-contrast is shown as well. With increasing shear strain, a slightly deformed dendritic microstructure (Fig. <ref>a) evolves to a lamellar microstructure containing elongated dendritic and interdendritic regions (Fig. <ref>b) at ε∼ 18. 
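The strain values quoted in the following figures can be reproduced from Eq. (1). The minimal Python sketch below evaluates ε at r = 1.5 mm for the revolution numbers used in this work, taking the 0.6 mm cavity height as the disk thickness after HPT; this is an assumption, and the actual local thickness and radius vary slightly, which is why the text reports only approximate "∼" values.

import numpy as np

def von_mises_strain(N, r_mm, t_mm):
    # epsilon = 2*pi*N*r / (sqrt(3)*t)
    return 2.0 * np.pi * N * r_mm / (np.sqrt(3.0) * t_mm)

# strains at r = 1.5 mm, assuming t = 0.6 mm (the cavity height of the anvils)
for N in (0.25, 2, 4, 10, 20, 40):
    print(N, round(von_mises_strain(N, r_mm=1.5, t_mm=0.6)))   # roughly 2, 18, 36, 91, 181, 363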
Such a lamellar microstructure fragments and becomes thinner with further increasing strain. The thickness of the lamella reduced from ∼10 µm at the strain of ε∼ 18 to about 1 µm at the strain of ε∼ 92, leading to the formation of fragmented vortex microstructure, as is shown in Figs. <ref>b-d. The vortex microstructure evolves to a continuous stripe-like microstructure when the strain is further increased to 192 (Fig. <ref>e). A homogeneous microstructure is observed at a strain of ε∼ 396 (Fig. <ref>f). Fig. <ref>g shows an overview of the evolution from the fragmented vortex to the stripe-like microstructure at ε∼ 120 to 200. Note that the micrographs of Figs. <ref>a-f are taken at r = 1.5 mm for N0.25, N2, N4, N10, N20 and N40, respectively. Fig. <ref>g is combined from 5 micrographs along the radius of specimen N20 at about r= 1 - 1.5 mm. The microhardness values corresponding to the characteristic microstructure of Figs. <ref>a-f are listed in Tab. <ref>. It is reasonable to conclude that the microhardness increases by 50 HV, while the microstructure changes from dendritic to elongated lamella. The microhardness remains almost the same during the elongation process of the dendritic microstructure until it begins to fragment. The fragmentation leads to an increase in hardness, and the hardness of the stripe-like microstructure at ε∼ 192 is about 3% higher compared to that at ε∼ 92. When the microstructure becomes chemically homogeneous, microhardness saturates. In order to check the phase evolution during HPT-processing, the X-ray diffractograms measured at different applied strain ε are depicted in Fig. <ref>a. The ε values in the figure are calculated with the position of the beam center taken as r in Eq. <ref>. The diffractogram for ε= 0 refers to the result obtained from specimen N0, which is merely compressed, but not twisted in the processing. It can be seen that the major BCC peaks are present throughout the HPT-processing, while the minor peaks representing V-rich inclusions gradually disappear. In addition, it can be found that the major BCC peaks broaden at the beginning of shearing when ε increases from 0 to 2. The broadening is not obvious from ε∼ 2 to ε∼ 36. However, a severe broadening of the peak of the V-inlusions is observed in this strain range, which indicates the large deformation of the V-rich inclusions, as is proved by the micrographs taken at high magnification, see Figs. <ref>b-d. The peak of the V-rich inclusions almost disappears after ε increases to 72, indicating a dissolution of V-rich inclusions. A significant peak broadening of the major BCC peaks and the disappearance of V-rich inclusions can be observed when ε increases from 72 to 120. Thereafter, a further pronounced broadening is not observed up to ε∼ 650. Note that the XRD patterns from ε∼ 120 onward were measured using CuKα radiation due to limited synchrotron X-ray beam time. Despite of that we found in a calibration experiment that the effect of instrumental broadening was negligible compared to that introduced by HPT-processing. The inset of Fig. <ref>a shows the shift of the (110)-peak of the major phase. A slight peak shift to larger wave vectors Q can be found up to ε∼ 18, and then to smaller Q up to ε∼ 650. The peak shift suggests a decrease of the lattice parameter up to ε∼ 18 and a subsequent increase up to ε∼ 396. 
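The qualitative reading of the (110)-peak shift follows directly from Bragg's law: combining λ = 2d sin θ with Q = 4π sin θ/λ gives Q = 2π/d, and for a cubic lattice d_110 = a/√2, so a shift to larger Q corresponds to a smaller lattice parameter. The Python lines below make this explicit; the peak positions used are hypothetical and only illustrate the direction and magnitude of the effect.

import numpy as np

def a_from_q110(Q110):
    # cubic (110): d = a/sqrt(2) and Q = 2*pi/d  =>  a = 2*pi*sqrt(2)/Q
    return 2.0 * np.pi * np.sqrt(2.0) / Q110

for Q in (2.68, 2.70, 2.72):   # hypothetical (110) positions in 1/Angstrom
    print(Q, a_from_q110(Q))   # the lattice parameter decreases as Q increases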
Interestingly, the (110)-peak shift is observed after the applied strain ε reaches 120 and 192, at which the V-rich inclusions dissolve and only fragmented dendritic and interdendritic regions exist. Results of TEM investigations at ε∼ 2, 18, 36, 92 and 396 are depicted in Fig. <ref>. The FIB lift-outs were extracted at r=1.5 mm of the specimens N0.25, N2, N4, N10 and N40, respectively. All bright field (BF) TEM micrographs shown in the Fig. <ref> reveal very strong contrast related with the high density of defects like dislocations and grain boundaries. The pronounced dark contrast in Fig. <ref>a_1 is additionally enhanced by the strong Bragg crystallographic orientation contrast. The corresponding selected area electron diffraction (SAED) (Fig. <ref>a_2) shows a typical BCC spot pattern. The scanning transmission electron microscopy (STEM) BF image (Fig. <ref>a_4) and the angular-dark-field (ADF) image (Fig. <ref>a_5) clearly show the dendritic/interdendritic regions. After strain increases to ε∼ 18, the diffraction spots become elongated and diffused (Fig. <ref>b_2) which is driven by the accumulated distortion of the crystal lattice. This effect is even more pronounced for ε∼ 36 (Fig. <ref>c_1). The STEM BF and ADF micrographs for ε∼ 18 (Figs. <ref>b_4 and b_5) show the presence of V-rich inclusions. Only after applying higher strains the V-rich inclusions dissolve (Fig. <ref>c_4, c_5). The BF TEM micrograph for ε∼ 92 (Fig. <ref>d_1) reveal the presence of an ultra-fine grained (UFG) microstructure. The corresponding SAED pattern (Fig. <ref>d_2) exhibits rings (identical selected area aperture is used in acquisition of all diffraction patterns). At this applied strain, the coarse dendritic and interdendritic regions are not distinguishable. Few remaining Zr-rich particles with a size of about 190 nm still can be found (Fig. <ref>d_5). A further increase of the HPT strain to ε∼ 396 results in further structure refinement, as shown in Fig. <ref>e_1 and e_4. However, the identical SAED patterns at ε∼ 92 and ∼396 (<ref>d_2 and e_2) indicate such a refinement is not intensive. Please note that in Fig. <ref>d_1 and e_1, the scale bar is the same, as is the case for Fig. <ref>d_4 and e_4. An enlarged view of Fig. <ref>e_1 indicates that the size of the grains is about 50 nm. Similar to ε∼ 92, few small Zr-rich particles can be found undissolved, yet with a smaller size of about 45 nm. § DISCUSSION Fig. <ref> provides a summary of the evolution of the previously described microstructure and the corresponding microhardness. In the paragraphs that follow, we will discuss the deformation mechanisms and explain the reasons for the enhanced hardness. §.§ Deformation mechanism §.§.§ Stage I (ε∼ 0 to ∼ 70) In XRD peak broadening, the effect of strain can be mathematically separated from that of crystallite/diffracting domain size, assuming that the broadening due to strain is Gaussian and the effect of size is Lorentzian <cit.>. By Rietveld refinement of the synchrotron XRD results up to ε∼ 72, we obtain the evolution of the root mean square (RMS) microstrain and lattice parameter during the first deformation stage, as shown in Figs. <ref>a and b, respectively. It is obvious that the RMS strains in the interdendritic region (blue) and in the V-rich inclusions (red) are much larger than that in the dendritic region (black), indicating a strain localization in the interdendritic region. Such a strain localization might be due to the large deformation of V-rich inclusions. 
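The Gaussian/Lorentzian separation invoked at the beginning of this stage-I analysis can be illustrated with the classical single-line approach: the Lorentzian integral breadth of a reflection is attributed to the finite domain size (Scherrer-type) and the Gaussian breadth to microstrain. The Python sketch below uses the common textbook prefactors and hypothetical breadths; it only conveys the idea and does not reproduce the size-strain model applied in MAUD for the Rietveld refinement.

import numpy as np

def size_strain_single_line(beta_L, beta_G, two_theta_deg, lam):
    # beta_L, beta_G: Lorentzian and Gaussian integral breadths in radians on the 2-theta scale
    theta = np.radians(two_theta_deg) / 2.0
    size = lam / (beta_L * np.cos(theta))      # volume-weighted domain size (Scherrer-type)
    strain = beta_G / (4.0 * np.tan(theta))    # upper-limit microstrain from the Gaussian part
    return size, strain

# hypothetical breadths for a (110) reflection measured with Cu K-alpha radiation
size, strain = size_strain_single_line(beta_L=2.5e-3, beta_G=4.0e-3,
                                        two_theta_deg=38.5, lam=1.5406)
print(size, strain)   # domain size in Angstrom (~65 nm here), dimensionless microstrain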
Since in our previous study <cit.> no pronounced enrichment of vanadium is found in the dendritic region, it can be assumed that the V-rich inclusions are mainly dissolved in the interdendritic region. This assumption could be confirmed by the lattice parameter of the interdendritic region in Fig. <ref>b. Since V has a smaller atomic radius than the other elements in the RHEA, the lattice parameter of the interdendritic region should become smaller when V is dissolved in its lattice. It is also noted that the lattice parameter of the V-rich inclusions shows an increase between ε∼ 36 and ε∼ 72. Since multiple phases are present in the RHEA and since HPT is a mechanical process that introduces high shear strains, the change in lattice constants (increase in the V-rich inclusions and decrease in the interdendritic region) might suggest the presence of intergranular stresses between different phases. Nevertheless, the relationship between the formation of residual stress and HPT in the RHEA has not been studied in depth, so no conclusive inference can be made about such phenomenon. The deformation mechanism of multi-phase materials in HPT-processing is usually much more complex than that of single-phase materials. In the study of binary immiscible composites processed by HPT, the strain localization is a consequence of the hardness difference between the phases <cit.>. This is due to the fact that the soft phase is more prone to deformation compared to the hard phase in HPT-processing. In our previous study <cit.>, a strain softening behavior was observed by probing the average ultra-microhardness of the interdendritic region. As shown in Figs. <ref>b-d, the deformed V-inclusions are distributed in the interdendritic region. It is reasonable to conclude that the V-inclusions are softer and lead to a strain localization in the interdendritic region. This strain localization and the dissolution of the V-rich inclusion could explain the moderate hardness increase during deformation stage I. A similar phenomenon was observed in the study of the supersaturated solid solution of the Cu-Cr system processed by HPT <cit.>. Bachmaier et al. attributed this to the refinement and saturation of hardness within the respective phases, analogous to the behavior of pure metals <cit.>. This is consistent with the observation in our work, as we did not find significant grain refinement in the strain range from ε∼ 18 to 36 (Figs. <ref>b and c). §.§.§ Stage II (ε∼ 72 to ∼ 300) After the dissolution of the V-rich inclusions, the main deformation mechanism is the co-deformation of the dendritic and interdendritic regions. In deformation stage II, fragmentation as well as the recovery of the phases compete with each other to finally reach an equilibrium <cit.>. As shown in Fig. <ref>d, the elongated dendritic region is fragmented at ε∼ 92. The (110)-peak is still asymmetric (Fig. <ref>a), which disappears after the microstructure is fully homogenized, e.g. at ε∼ 396 (Fig. <ref>b). Again, this demonstrates that the interdendritic phase exists. The peaks of the two phases are so close to each other that the errors in the results obtained from Rietveld refinement are too large to be reliable. Although the fragmented phases are already too small at this deformation stage to be measured accurately by EDS in the SEM, we can still roughly estimate the chemical composition by line-scans as shown in Fig. <ref>a. 
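A rough, reproducible way to quantify what such line-scans show is to compare the local concentration of each element with the nominal equimolar value of 100/6 ≈ 16.7 at.%. The Python lines below do this for a hypothetical Zr profile; the numbers are invented for illustration and are not the measured data.

import numpy as np

NOMINAL_AT_PCT = 100.0 / 6.0   # equimolar senary alloy, ~16.7 at.%

def segregation_measures(profile_at_pct):
    # peak-to-peak amplitude and mean absolute deviation from the nominal composition
    c = np.asarray(profile_at_pct, dtype=float)
    return c.max() - c.min(), np.mean(np.abs(c - NOMINAL_AT_PCT))

zr_profile = [10.0, 12.0, 31.0, 28.0, 14.0, 11.0, 25.0, 15.0]   # hypothetical at.% values
print(segregation_measures(zr_profile))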
Apparently, there is a significant segregation of Zr (brown) and Ta (blue) compared to the theoretical value of 16.67% for the senary HEA. The segregation of Zr is more pronounced than that of Ta. It can be seen that the Zr-concentration exceeds 30 at.% in some Zr-rich areas. Note that the observed segregation should be less than the actual one due to the large interaction volume of SEM-EDS. Up to ε∼ 92, it can be assumed that the shear-induced mechanical mixing between the dendritic and the interdendritic regions might be negligible. At ε∼ 192, the width of the recombined stripe is larger than 5 µm. Hence, the quantitative EDS-analysis on the microstructure is more accurate (Fig. <ref>b). We find that the elemental segregation is already very small at this applied strain, indicating that the mixing of elements is much faster during shearing from ε∼ 100-200 compared to ε∼ 0-100. Various mechanisms have been proposed to explain the deformation-induced mechanical mixing in immiscible system. Bellon and Averback suggested a kinetic roughening of interfaces leading to the forced formation of a solid solution <cit.>. In addition, Yavari et al. proposed the Gibbs-Thomson effect of the fragmented phases after reaching a critical size (below 1 nm), they could then be dissolved into the other phase due to capillary pressure <cit.>. More recently, a dislocations shuffling mechanism was proposed, where dislocations glide across the heterophase interfaces and carry atoms from one phase to the other <cit.>. It was also suggested that one phase would eventually dissolve into the other by a process comparable to erosion or abrasion <cit.>. For the mechanical mixing of the dendritic and interdendritic regions of the HEA in the present study, it is more resonable to conjecture that the erosion and abrasion process proposed in <cit.> could explain this phenomenon. After all, we do observe fragmented Zr-rich particles in the TEM analysis. §.§ On the "single-phase" dendritic RHEA and its homogenization The determination of the single-phase composition in an HEA is usually based on the assumption that peaks of only one phase are found in the XRD pattern. Hence, although some HEAs showed deviations from chemical homogeneity, such as segregation or dendrites, they were still considered to be single-phase <cit.>. A recent study has shown that the "single-phase" HEAs containing elemental segregation are actually composed of two phases with the same crystal structure and very close lattice constants <cit.>. The results obtained in this work clearly support the latter opinion. HEAs are usually composed of elements with similar physical properties such as atomic radii and electronic concentration. When multiple elements are grouped together in an HEA, the interactions between them are complex, such as a positive enthalpy of mixing that leads to a miscibility gap. It is conceivable that they have similar lattice constants after phase decomposition such that they cannot be distinguished from each other in XRD. The common dendritic microstructure in RHEAs was reported to have a high thermal stability <cit.>. It is assumed that the solution annealing temperature of RHEAs is extremely high and the homogenization heat treatment could be impractical and uneconomical. In particular, RHEAs containing both Ta and Zr are susceptible to segregation and are forming dendrites during solidification <cit.>. Zyka et al. attributed this to the entropy effect from their studies of ternary or quaternary RHEAs <cit.>. 
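The entropy argument can be made quantitative with the ideal configurational entropy, ΔS_conf = -R Σ x_i ln x_i. For the equimolar senary alloy this gives R ln 6 ≈ 1.79 R, and any deviation from equimolarity, such as a Zr-enriched interdendritic composition, lowers it. The short Python sketch below illustrates this; the enriched composition used is hypothetical.

import numpy as np

R = 8.314   # J/(mol K)

def config_entropy(fractions):
    # ideal configurational entropy: -R * sum(x_i * ln(x_i))
    x = np.asarray(fractions, dtype=float)
    x = x / x.sum()
    return -R * np.sum(x * np.log(x))

print(config_entropy([1] * 6) / R)   # equimolar senary: ln(6) ~ 1.79
print(config_entropy([1] * 5) / R)   # equimolar quinary: ln(5) ~ 1.61
print(config_entropy([0.30, 0.14, 0.14, 0.14, 0.14, 0.14]) / R)   # hypothetical Zr-enriched region, ~1.74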
In the present study, dendrites are also present with six equimolar elements. The interdendritic region is likely to be rich in Zr even if there exists an element with a lower melting point, such as Ti <cit.>. This is likely due to the fact that the liquidus temperature of Ta_0.5Zr_0.5 is 800 ^∘C lower than that of Ta_0.5Ti_0.5, and the liquid phase would be stabilized by Zr during solidification <cit.>. Whitfield et al. found that the dendritic microstructure in the ternary Ta-Ti-Zr system was still observable after annealing at 1200 ^∘C for 100 hours followed by water quenching, and that the dendrites could only be eliminated by an additional heat treatment at 1000 ^∘C for 1000 hours, again followed by water quenching <cit.>. For the RHEA in the present study, an annealing of the as-solidified sample at 1400 ^∘C for 12 hours followed by cooling in the furnace did not result in chemical homogeneity. A subsequent annealing at 1000 ^∘C for 12 hours followed by quenching did not homogenize the RHEA either (see Fig. <ref> in the appendix). However, the results of our study show that the homogenization of the RHEA can be achieved by mechanical processing. Moreover, while achieving chemical homogeneity, the grains are also refined and enhanced mechanical properties are obtained. §.§ The enhanced microhardness The yield strength σ_y of a metallic material can be expressed as the sum of the intrinsic lattice friction stress (σ_fr) and the contributions of different strengthening mechanisms <cit.>: σ_y = σ_fr + Δσ_ρi + Δσ_ss + Δσ_ppt + Δσ_gb, where the terms on the right are the contributions of the initial dislocation density Δσ_ρi, solid solution hardening Δσ_ss, precipitation hardening Δσ_ppt and grain boundaries Δσ_gb. Obviously, in the present study, there is no contribution of precipitation hardening Δσ_ppt. The first increase in hardness, up to ε∼ 10, is due to the increase of the dislocation density Δσ_ρi (Fig. 5a_1) and the grain refinement within the dendritic and the interdendritic regions Δσ_gb. The Peierls stress in BCC metals is much higher compared to that in FCC metals. A recent study has shown that the compositional randomness in BCC high entropy alloys could additionally increase the Peierls stress, resulting in an extremely high intrinsic strength σ_fr <cit.>. This might explain why the increase of hardness in stage I is not significant compared to the initial hardness. The dramatic increase in hardness at about ε∼ 92 (Fig. 6) should be attributed to the grain refinement by intense fragmentation. As suggested by Bachmaier et al. <cit.>, the decrease of the mean inter-particle distance (referred to as "phase spacing" in their study) in the Cu-Cr system could further refine grains that are already saturated. Since we did not find intense grain refinement from ε∼ 92 to ε∼ 396 in the TEM results, the subsequent increase in hardness can only be explained by the additional solid solution hardening Δσ_ss induced by the dissolution of the large Zr atoms. Interestingly, the increase in hardness due to grain refinement (∼ 75 HV) in stage II is not much different from that due to solid solution hardening (∼ 50 HV). Severe lattice distortion is considered to be one of the core effects in HEAs <cit.>. Among the various elements that form HEAs, zirconium, which possesses a large atomic radius, has been suggested to particularly cause local chemical fluctuations and significant lattice distortion <cit.>.
Although the results of the present investigation also support this opinion, the fourteen RHEAs studied in <cit.> were all dendritic, so some doubt remains as to whether better mechanical properties can be achieved once the microstructure is homogenized. § CONCLUSIONS The dendritic senary RHEA MoNbTaTiVZr was processed by HPT up to 40 revolutions. The microstructural evolution was comprehensively studied by SEM, XRD and TEM. The correlation between the enhanced hardness and the microstructure, as well as the deformation and mixing mechanisms, was discussed. In summary, the following conclusions can be drawn: * The dendritic RHEA is found to achieve chemical homogenization with simultaneously enhanced hardness by HPT-processing. * The deformation and mixing mechanisms are analogous to the mechanical mixing in binary immiscible systems. * The hardness enhancement at low strains (up to ε∼ 10) is mainly due to the increase in dislocation density and grain refinement within the respective phase. This increase is unusually insignificant (50 HV) compared to the initial hardness (500 HV). The reason might be the high Peierls stress in BCC HEAs. * At intermediate strains (ε∼ 100), the dramatic hardness increase (75 HV) is attributed to further grain refinement caused by fragmentation. A UFG microstructure is formed. * A UFG microstructure is present at ε∼ 400, where the Zr-depleted dendritic and the Zr-rich interdendritic regions are mixed homogeneously. The hardness increase (50 HV) could be explained by the solid solution strengthening resulting from the dissolution of the large Zr atoms. § ACKNOWLEDGEMENT The authors express their gratitude to DESY (Hamburg, Germany) for providing support and granting access to HEXRD facilities. Part of the experiments was conducted at PETRA III. The authors would also like to acknowledge the Materials Science Lab, which is operated by FRM II and Helmholtz-Zentrum Hereon at the Heinz Maier-Leibnitz Zentrum (MLZ), Garching, Germany, for preparing the samples used in this study. CD is grateful to Prof. Jan Torgersen for the financial support and insightful discussions during the writing of this work. CD thanks Mr. Patrick Hegele and Dr. Zhonghua Wang for their help with the XRD and SEM experiments. § INFLUENCE OF V ON THE MICROSTRUCTURE OF MoNbTaTiV_xZr § ASYMMETRY OF THE (110)-PEAK OF THE MAJOR PHASE AS OBTAINED BY RIETVELD REFINEMENT § PLOT OF MICROHARDNESS AGAINST THE LOGARITHM OF THE APPLIED STRAIN § LIGHT OPTICAL MICROSCOPY IMAGE OF THE HEAT-TREATED RHEA
http://arxiv.org/abs/2306.08916v1
20230615074036
Counterfactuals Modulo Temporal Logics
[ "Bernd Finkbeiner", "Julian Siber" ]
cs.LO
[ "cs.LO", "cs.AI" ]
Counterfactuals Modulo Temporal Logics
Bernd Finkbeiner, Julian Siber
======================================================================
Lewis' theory of counterfactuals is the foundation of many contemporary notions of causality. In this paper, we extend this theory in the temporal direction to enable symbolic counterfactual reasoning on infinite sequences, such as counterexamples found by a model checker and trajectories produced by a reinforcement learning agent. In particular, our extension considers a more relaxed notion of similarity between worlds and proposes two additional counterfactual operators that close a semantic gap between the previous two in this more general setting. Further, we consider versions of counterfactuals that minimize the distance to the witnessing counterfactual worlds, a common requirement in causal analysis. To automate counterfactual reasoning in the temporal domain, we introduce a logic that combines temporal and counterfactual operators, and outline decision procedures for the satisfiability and trace-checking problems of this logic. § INTRODUCTION Evaluating counterfactual statements is a fundamental problem for many approaches to causal reasoning <cit.>. Such reasoning can for instance be used to explain erroneous system behavior with a counterfactual statement such as `If the input i at the first position of the observed computation π had not been enabled then the system would not have reached an error e.', which can be formalized using the counterfactual operator □→ and the temporal operator G: π ⊨ (¬ i) □→ G (¬ e) . Since the foundational work by Lewis <cit.> on the formal semantics of counterfactual conditionals, many applications for counterfactuals <cit.> and some theoretical results on the decidability of the original theory <cit.> and related notions <cit.> have been discovered. Still, certain domains have proven elusive for a long time, for instance, theories involving higher-order reasoning and an infinite number of variables. In this paper, we consider a domain that combines both of these aspects: temporal reasoning over infinite sequences. In particular, we consider counterfactual conditionals that relate two properties expressed in temporal logics, such as the temporal property G ¬ e from the introductory example. Temporal logics are used ubiquitously as high-level specifications for verification <cit.> and synthesis <cit.>, and recently have also found use in specifying reinforcement learning tasks <cit.>. Our work lifts the language of counterfactual reasoning to similar high-level expressions. We consider Quantified Propositional Temporal Logic (QPTL) because it can characterize the full class of ω-regular properties and in this way subsumes popular specification languages. This results in our logic, which mixes QPTL with counterfactual conditionals and can be used to check counterfactual dependencies between ω-regular properties. We abstract away from any concrete causal models in this paper but refer to recent works by Halpern and Peters <cit.>, and Coenen et al. <cit.> on extending these models to infinitely many variables. Focusing on the core counterfactual reasoning inherent to causality allows us to study several key problems arising in the temporal domain.
We believe the main reason higher-order reasoning and an infinite number of variables induce difficulties for counterfactual reasoning is ultimately tied to Lewis' rejection of the Limit Assumption, which stipulates that for any world W and property φ, there is a unique set of worlds minimally close to W that satisfy φ. If the assumption holds in some domain, counterfactual reasoning is simple: A statement such as `If φ had held then ψ would have held, too.', formally expressed by the formula φψ, then would only need to compute the set of worlds minimally close to W that satisfy φ and check whether all of them satisfy ψ, too. However, as Lewis points out, the assumption is generally not true in any continuous domain, and as we will see later, it is also not generally true in the temporal domain. Previous works on defining notions of causality in settings with infinitely many variables sidestep the issue by restricting to similarity relations <cit.> or logics for cause and effect <cit.> that satisfy the Limit Assumption, but this imposes a significant toll on the precision of the inferred causes, since this requires coarsely overapproximating the set of closest traces. One of the key insights of this work is that it is possible to reason about counterfactuals in the temporal domain even when rejecting the Limit Assumption. Without the assumption, evaluating counterfactual conditionals requires complex quantification over the possibly infinite chains of worlds ever closer to W. To solve these quantified statements in our domain, we use recent advances in the study of hyperproperties <cit.> and their corresponding temporal logics <cit.>, which originate in the verification of information-flow policies and allow to relate multiple traces of a system to another. This pushes their decidability to the edge, for instance, the satisfiability problem of , which extends with quantification over traces, is undecidable. While our counterfactual-temporal logic has inherently relational semantics, its models are still traces, in contrast to the sets of traces modeled by . Further, in the trace-quantification is guarded by the counterfactual conditionals. We show that together this yields decidability for the satisfiability, model-checking and trace-checking problems of by encoding them into the decidable model-checking problem of . We address several limitations of Lewis' original theory of counterfactuals, with the goal that our logic can be practically used to specify notions of temporal causality. For instance, the original theory requires a similarity relation between worlds that is total, i.e., for any two worlds W_1 and W_2 it needs to be possible to assess which one is closer to the reference world W. This turns out to be far too restrictive for reasonable similarity relations between infinite traces, and hence we extend Lewis' theory to non-total similarity relations. However, since in such relations, there may be several incomparable sets of worlds that may count as the worlds minimally close to W, this opens a semantic gap between Lewis' two proposed counterfactual operators. The crux is that, for instance, a naive extension of the `Would' counterfactual quantifies existentially over these sets of closest worlds, meaning its enough if there is one path from W to worlds satisfying φ where the closest worlds then also satisfy ψ. But there may be other paths to φ where the closest worlds do not satisfy ψ. A similar problem exists with Lewis' semantics of the `Might' counterfactual. 
We argue that for both operators, the naive extensions to non-total similarity relations do not match the intended semantics of the counterfactual statements they are supposed to formalize, and propose fixed semantics for non-total similarity relations. Another common requirement for causes is a notion of minimality <cit.>, such that negating the cause describes the minimal changes necessary to avoid the effect. This notion is not covered by Lewis' original theory. To enable our logic to express the minimality condition, we introduce minimal counterfactual operators. The intuition behind a minimal counterfactual such as φψ is that it is meant to minimize the path from the reference world to the counterfactual worlds satisfying ψ. This boils down to a second-order requirement that quantifies over properties φ' to see whether some of them characterize a superset of φ and still qualify in the counterfactual. Since the second-order quantification is guarded by the minimal counterfactual operators, we can eliminate it by giving equisatisfiable first-order formulas that only quantify over traces. We show that with an appropriate choice of underlying universe, can express several notions of causality proposed in previous literature, and use it to forge an interesting link between event-based actual causation and property-based counterfactual causation. Contributions. In summary, our contributions are as follows: * We extend Lewis' theory of counterfactual conditionals to non-total similarity relations (Section <ref>) by proposing two additional counterfactual conditionals that capture the intended semantics of `Would' and `Might' on these relations (Section <ref>). * We study a minimality criterion for counterfactuals that captures necessary reasoning for causal analysis and introduce minimal counterfactual operators (Section <ref>). * We build a logic that mixes the classic counterfactuals due to Lewis and our newly proposed counterfactual conditionals with temporal properties expressed in , and show that the corresponding satisfiability and trace-checking problems are decidable (Section <ref>). Necessary preliminaries on temporal logics are introduced in the following section (Section <ref>), and Lewis' original counterfactual conditionals are discussed in Section <ref>. Related Work. Our theory is an extension of Lewis' theory of counterfactuals <cit.> both in terms of the language of the antecedents and consequents, as well as for more general similarity relations and reasoning about minimality. Previous works in the context of axiomatizing causal modeling have extended the language of consequents to arbitrary Boolean formulas <cit.> as well as to counterfactual consequents <cit.>. Our work lifts the language of cause and effect to restricted first- and second-order reasoning in an infinite domain and is in this way related to recent efforts by Halpern and Peters <cit.> on causal reasoning with infinitely many variables. While to our knowledge there has been no previous work extending Lewis' logic of counterfactuals to temporal reasoning, there are works that have proposed some notion of temporal causality in, e.g., Markov Decision Processes <cit.> and reactive systems <cit.>. Several previous works have made a connection between causality and hyperproperties <cit.>. § TEMPORAL LOGICS We consider temporal logics whose models are infinite traces t = t[0] t[1] …∈ (2^𝐴𝑃)^ω over some finite set of atomic propositions 𝐴𝑃. As a basic temporal logic we consider Linear-time Temporal Logic () <cit.>. 
LTL formulas are built with the following grammar, where a ∈ 𝐴𝑃: φ ::= a | ¬φ | φ ∧ φ | X φ | φ U φ . The semantics of LTL are given by the following satisfaction relation, which recurses over the positions i of the trace. [ t,i ⊨ a iff a ∈ t[i]; t,i ⊨ ¬φ iff t,i ⊭ φ; t,i ⊨ φ ∧ ψ iff t,i ⊨ φ and t,i ⊨ ψ; t,i ⊨ X φ iff t,i+1 ⊨ φ; t,i ⊨ φ U ψ iff ∃ j ≥ i. t,j ⊨ ψ and ∀ i ≤ k < j. t,k ⊨ φ ] We say a trace t satisfies a formula φ iff the formula holds at the first position: t,0 ⊨ φ; we also write t ⊨ φ to denote this. We also consider the usual derived Boolean (∨, →, ↔) and temporal operators (φ R ψ ≡ ¬(¬φ U ¬ψ), F φ ≡ ⊤ U φ, G φ ≡ ¬ F ¬φ). To illustrate how LTL can specify the dynamics of a system, consider an elevator that moves up (u) and down (d) between three floors bottom (b), middle (m), and top (t). We have a set of atomic propositions 𝐴𝑃 = A ∪ S composed of two subsets A = {u,d} for actions and S = {b,m,t} for states. The dynamics of the system starting at the lowest floor can be specified in an LTL formula: φ_𝑒𝑙𝑒𝑣𝑎𝑡𝑜𝑟 ≡ b ∧ G( (b ∧ u → X m) ∧ (b ∧ d → X b) ∧ (m ∧ u → X t) ∧ (m ∧ d → X b) ∧ (t ∧ u → X t) ∧ (t ∧ d → X m) ∧ (t ↮ b) ∧ (b ↮ m) ∧ (m ↮ t)) . The `Globally' operator G universally quantifies over all time points in one sequence, requiring that all of the conjuncts in its body hold. The conjuncts themselves encode the dynamics, e.g., b ∧ u → X m ensures that when the elevator moves up from the bottom floor, it reaches the middle floor in the next state (which the `Next' operator X accesses). The formulas in the last line encode that the elevator can only be on one floor at the same time; together with the others they also ensure that only one action can be done at any time point. The traces that satisfy the formula then describe all the valid dynamics of the system, e.g.: t = {b,u}{m,d}({b,u}{m,d})^ω is a trace where the elevator cycles between the bottom and the middle floor. The ω-superscript symbolizes that this part of the trace is repeated infinitely often. LTL is of practical significance because its corresponding decision procedures are of comparatively low complexity. However, this comes at a cost in expressivity, such that it cannot specify that, e.g., an atomic proposition eventually holds at an odd position. To make our results applicable to as many properties as possible, we therefore consider Quantified Propositional Temporal Logic (QPTL), introduced by Sistla <cit.>, throughout the technical sections of this paper. QPTL extends LTL by quantification over atomic propositions. Its syntax is built atop of LTL as follows, where q ∉ 𝐴𝑃 is a fresh atomic proposition, and φ is an LTL formula: ψ ::= ∃ q. ψ | ∀ q. ψ | φ . As presented by Finkbeiner et al. <cit.>, the semantics of the formulas quantifying over propositions can be stated using a replacement function t[q ↦ t_q] that, given a trace t ∈ (2^𝐴𝑃)^ω and a trace t_q ∈ (2^{q})^ω, sets the occurrences of q in t to the ones in t_q, i.e., t[q ↦ t_q] =_{q} t_q and t[q ↦ t_q] =_𝐴𝑃∖{q} t, where t =_E t' means the truth value of the subset E ⊆ 𝐴𝑃 agrees on all positions of the two traces. [ t,i ⊨ ∃ q. ψ iff ∃ t_q ∈ (2^{q})^ω. t[q ↦ t_q],i ⊨ ψ; t,i ⊨ ∀ q. ψ iff ∀ t_q ∈ (2^{q})^ω. t[q ↦ t_q],i ⊨ ψ; ] For a formula φ in QPTL (and hence also in its fragment LTL), we denote by ℒ(φ) the set of traces that satisfy it. Quantification over propositions allows limited forms of counting in QPTL, such as in the following formula φ_odd that tracks the parity of positions with q and hence can express that the atomic proposition b eventually holds at an odd position: φ_odd ≡ ∃ q. ¬ q ∧ G(¬ q ↔ X q) ∧ F(q ∧ b) . Since on trace t from Example <ref> b only holds at even positions, we have that t ⊭ φ_odd.
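On lasso-shaped traces such as t = ({b,u}{m,d})^ω, the LTL satisfaction relation above can be evaluated mechanically, since the truth value of every subformula is eventually periodic. The following Python sketch illustrates this for the fragment without propositional quantification; it is only meant to make the semantics concrete and is not one of the decision procedures discussed later. The tuple-based formula encoding, the representation of a trace as a (prefix, loop) pair, and the helper names (positions, models, and the derived-operator constructors) are our own assumptions.

# A lasso trace u·v^ω is given as (prefix, loop), each a list of sets of APs.
ELEVATOR = ([], [{"b", "u"}, {"m", "d"}])        # the trace t = ({b,u}{m,d})^ω

def letters(trace):
    prefix, loop = trace
    return prefix + loop                         # the finitely many distinct positions

def successor(i, trace):
    prefix, loop = trace
    n = len(prefix) + len(loop)
    return i + 1 if i + 1 < n else len(prefix)   # wrap back to the start of the loop

def positions(phi, trace):
    # Set of positions of the lasso at which phi holds, computed bottom-up.
    all_pos = set(range(len(letters(trace))))
    kind = phi[0]
    if kind == "true":
        return all_pos
    if kind == "ap":
        return {i for i in all_pos if phi[1] in letters(trace)[i]}
    if kind == "not":
        return all_pos - positions(phi[1], trace)
    if kind == "and":
        return positions(phi[1], trace) & positions(phi[2], trace)
    if kind == "next":
        sub = positions(phi[1], trace)
        return {i for i in all_pos if successor(i, trace) in sub}
    if kind == "until":                          # least fixpoint on the lasso graph
        left, right = positions(phi[1], trace), positions(phi[2], trace)
        sat = set(right)
        while True:
            new = {i for i in left if successor(i, trace) in sat}
            if new <= sat:
                return sat
            sat |= new
    raise ValueError(f"unknown operator {kind}")

def models(trace, phi):
    return 0 in positions(phi, trace)            # t |= phi iff phi holds at position 0

# Constructors for the syntax, including the derived operators F and G.
def ap(a): return ("ap", a)
def neg(p): return ("not", p)
def land(p, q): return ("and", p, q)
def nxt(p): return ("next", p)
def until(p, q): return ("until", p, q)
def finally_(p): return until(("true",), p)
def globally(p): return neg(finally_(neg(p)))
def implies(p, q): return neg(land(p, neg(q)))

# One conjunct of the elevator dynamics, G(b ∧ u → X m), holds on t ...
print(models(ELEVATOR, globally(implies(land(ap("b"), ap("u")), nxt(ap("m"))))))  # True
# ... whereas F(u ∧ X u), "eventually the elevator moves up twice in a row", does not.
print(models(ELEVATOR, finally_(land(ap("u"), nxt(ap("u"))))))                    # False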
The semantics of the counterfactual conditionals we consider in this work require quantification over the worlds described by their antecedents and consequents. In our case, the worlds are traces. To express quantification over traces, we make use of hyperlogics <cit.>, temporal logics that originated in information-flow security and relate multiple traces to one another. The hyper-counterpart to is  <cit.> and extends the syntax with trace quantifiers over a set of trace variables 𝒱, where π∈𝒱. Since is only used for decidability proofs, and all proofs are found in the appendix due to space reasons, we also state the syntax and semantics of only in Appendix <ref>. § COUNTERFACTUALS We now outline our extended version of Lewis' theory of counterfactual conditionals. First, we extend Lewis' notion of a similarity relation to non-total orders (Section <ref>). Then, we introduce the classic counterfactuals (Section <ref>) and two new operators (Section <ref>), and lastly, we consider a minimality criterion for counterfactual antecedents (Section <ref>). Since the concepts discussed in this section are not only applicable to the traces and temporal logics discussed in the previous section, we will adopt Lewis' modal nomenclature and speak of worlds and properties. We assume a set of worlds 𝒰 called universe. If not stated explicitly otherwise, all quantifiers in this section will quantify over 𝒰. Further, we assume some logic ℒ and a satisfaction relation that tells us for any world W ∈𝒰, whether it satisfies some property φ∈ℒ, which we denote with W φ. We will make the connection to the previous section clear by using traces as worlds and formulas as properties in our concrete examples. §.§ The Distance Between Worlds The semantics of counterfactuals rely on reasoning about the relative similarity of worlds with respect to the reference world in which the counterfactual is evaluated. Directly mapping Lewis' counterfactuals to our setting would necessitate a total preorder over the set of traces to tell us which of any two given traces is closer to our original trace. However, in practice, such a total order is unrealistic not just in our trace-based context, as changes between two worlds may simply be incomparable. For instance, consider that changing atomic proposition u at position 0 has the same quantitative distance as changing it at position 1, but since the identity of these changes differs, the direction in the space of worlds is different. Many instances of counterfactual reasoning, therefore, base their notion of distance on subset relationships between changes, i.e., some world is further away than another if the changes necessary to obtain the former are a superset of the changes necessary to obtain the latter <cit.>. Then, if no subset relationship in either directions holds between the changes manifesting in two worlds, their distance is incomparable. The underlying spatial structure of such a similarity relation is a lattice over the equivalence classes of some preorder. We generalize Lewis' counterfactual reasoning to preorders to account for these more general similarity relations. Formally, we require a comparative similarity relation ≤_W, which is a preorder on 𝒰 such that W is a minimum: ∀ W' W' ≰_W W.[Lewis refines his similarity relation based on a notion of accessibility. As our similarity relation is not necessarily total, accessibility can be easily encoded by not relating in ≤_W the models inaccessible from W. 
We could then express accessibility of a world W' from W by requiring W ≤_W W', but we will abstract away from this concept for simplicity and assume that W is the unique minimum: ∀ W' W ≤_W W'.] In the context of trace logics as outlined in the previous section, worlds correspond to infinite traces such as t in Example <ref>. Our universe may be given by the language of some formula such as the one describing the dynamics of the elevator system in Example <ref>, so we have 𝒰 = ℒ(φ_𝑒𝑙𝑒𝑣𝑎𝑡𝑜𝑟). A similarity relation may track the changes with respect to the reference trace t over a subset X of atomic propositions and can be formalized as follows: ≤_t(𝑋) = { (t_1, t_2) | ∀ n ∈ℕ∀ x ∈𝑋 t[n] ≠_{x} t_1[n] ⇒ t[n] ≠_{x} t_2[n] } . To illustrate with the elevator system from Example <ref>, the similarity relation ≤_t(𝐴) orders a trace t” = {b,d}{b,d}({b,u}{m,d})^ω, that changes the first two actions in trace t but keeps the other actions the same, as closer to t than trace t' = ({b,d})^ω, which changes them on the whole sequence. We have t”≤_t(𝐴) t'. Note that the above similarity relation is not total and would hence not be covered by Lewis' original theory. §.§ Classic Counterfactuals We start this section by recalling the semantics of Lewis' counterfactual operators and , based on the reformulation for similarity relations <cit.>. In Lewis' theory, the operator is a formalization of `Would' counterfactual statements such as `If the elevator had eventually moved up two times in a row, then it would have reached the top floor.' The operator is a formalization of `Might' counterfactual statements such as `If the elevator had eventually moved up two times in a row, then it might have reached the top floor two steps after the start.' Intuitively, both of the statements should be true on trace t fro Example <ref>. No matter where we change the trace such that the elevator moves upwards twice, it will in all cases end up at the top floor. And there is one instance, i.e., when changing the first two actions appropriately, that it will be at the top floor at the third position. Formally speaking, the distinction between the two statements stems from different quantification over the closest worlds. `Would' counterfactuals are statements that quantify universally over all closest worlds that satisfy the antecedent, while `Might' counterfactuals quantify existentially. There are further subtle differences in the vacuous case which we will discuss after giving the formal semantics. A world W satisfies φψ iff: ∀ W_1 W_1 φ (1lefteq:would) , or ∃ W_1 W_1 φ∀ W_2 W_2 ≤_W W_1 ⇒ W_2 φ→ψ (2righteq:would) . It is worth pointing out that the complex expression in Condition righteq:would of Definition <ref> mainly stems from Lewis' rejection of the Limit Assumption, which poses that for any antecedent φ and world W there exists a unique set of (equally) closest worlds satisfying φ. Then it would be easy to simply quantify over this set universally and require the consequent to hold in all of the worlds. However, in many scenarios, this assumption, unfortunately, does not hold. Instead, there may in fact be an infinite chain of ever closer worlds that satisfy φ. In these instances, what we are rather interested in is finding a `threshold world' after which all closer worlds satisfying the antecedent φ also satisfy the consequent ψ. 
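The comparative similarity relation ≤_t(X) from the elevator example above can likewise be made concrete on lasso-shaped traces. The following Python sketch compares traces by the set of positions and propositions at which they deviate from the reference trace; restricting the comparison to a bounded horizon is our own simplification (it is exact whenever the compared traces agree with the reference beyond that horizon), and the names Lasso, changes and leq are likewise our own assumptions.

from typing import List, Set, Tuple

Lasso = Tuple[List[Set[str]], List[Set[str]]]    # (prefix, loop) over sets of APs

def letter(trace: Lasso, n: int) -> Set[str]:
    prefix, loop = trace
    return prefix[n] if n < len(prefix) else loop[(n - len(prefix)) % len(loop)]

def changes(ref: Lasso, other: Lasso, props: Set[str], horizon: int) -> Set[Tuple[int, str]]:
    # Positions and propositions (restricted to props) at which `other` deviates
    # from `ref`, truncated to the first `horizon` positions.
    return {(n, x) for n in range(horizon) for x in props
            if (x in letter(ref, n)) != (x in letter(other, n))}

def leq(ref: Lasso, t1: Lasso, t2: Lasso, props: Set[str], horizon: int = 50) -> bool:
    # t1 <=_ref(props) t2: every deviation of t1 from ref is also a deviation of t2.
    return changes(ref, t1, props, horizon) <= changes(ref, t2, props, horizon)

ACTIONS = {"u", "d"}
t    = ([], [{"b", "u"}, {"m", "d"}])                        # reference trace t
t_pp = ([{"b", "d"}, {"b", "d"}], [{"b", "u"}, {"m", "d"}])  # t'': deviates only at the start
t_p  = ([], [{"b", "d"}])                                    # t': moves down forever
t_q  = ([{"b", "u"}, {"m", "d"}], [{"b", "d"}])              # deviates only from position 2 on

print(leq(t, t_pp, t_p, ACTIONS))      # True:  t'' is at least as close to t as t'
print(leq(t, t_p, t_pp, ACTIONS))      # False: hence t'' is strictly closer than t'
print(leq(t, t_pp, t_q, ACTIONS), leq(t, t_q, t_pp, ACTIONS))  # False False: incomparable,
                                       # i.e., the preorder is indeed not total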
over formulas allows us to express the mix of counterfactual and temporal statements that is `If the elevator had eventually moved up two times in a row, then it would have reached the top floor.' It yields the following formula: φ_top≡ (u u) t . Interpreted in the universe given by ℒ(φ_𝑒𝑙𝑒𝑣𝑎𝑡𝑜𝑟) and with the similarity relation ≤_t(𝐴), we have that trace t = {b,u}{m,d}({b,u}{m,d})^ω from Example <ref> satisfies φ_top, because no matter where we change actions to ensure two moves upward in a row, we always end up in the top floor (this in fact holds for any trace in the universe). For a more complex formula illustrating that the Limit Assumption does not hold in this setting, consider the statement `If the elevator would eventually only move downwards, then it would eventually stay on the bottom floor.' This corresponds to the formula: φ_bottom≡ ( d) ( b) . Trace t' = ({b,d})^ω satisfies ( d), but there is an infinite chain of traces closer to t that also satisfy ( d): t” = {b,u}{m,d}({b,d})^ω, t”' = {b,u}{m,d}{b,u}{m,d}({b,d})^ω, … and so on. Hence, we cannot avoid Lewis' complex quantification over traces to evaluate temporal counterfactuals in the general case. As one can see from Condition lefteq:would in Definition <ref>, a `Would' counterfactual is vacuously satisfied by a world if there are no related worlds that satisfy the antecedent. In contrast, a `Might' counterfactual strictly requires a world that satisfies the antecedent, mainly because Lewis bases the semantics of the two operators on the following duality law: φψ≡¬(φ¬ψ) <cit.>. This yields the following semantics for the `Might' counterfactual. A world W satisfies φψ iff all of the following holds: ∃ W_1 W_1 φ (1lefteq:might) , and ∀ W_1 W_1 φ⇒∃ W_2 W_2 ≤_W W_1 W_1 φψ (2righteq:might) . Again, significant complexity is introduced into the definition based on the rejection of the Limit Assumption. Here, however, the idea is not to find a `threshold world', but instead to find for any world in the chain of ever closer worlds another one that is closer (or equally close) such that both antecedent and consequent are true. §.§ Counterfactuals Over Non-total Similarity Relations The semantics proposed by Lewis' work well if the similarity relation is a total order. However, as we can see in the following example, the semantics do not match the intuitive meaning of the operators when the similarity relation is not total, as in our setting. Consider the statement `If the elevator had eventually moved up two times in a row, then it would have reached the top floor two steps after the start.', which corresponds to the following formula: φ_top”≡ (u u) ( t) . Intuitively, this statement should not be satisfied by trace t from the previous examples. After all, there is trace t'_1 = {b,u}{m,d}{b,u}{m,u}{t,d}{m,d}t[6]t[7]… that is a closest trace satisfying (u u) and it does not satisfy ( t). Yet, we can simply instantiate the existential quantifier in Condition righteq:would of Definition <ref> with t'_2 = {b,u}{m,u}{t,d}{m,d}t[4]t[5]… . Since there are no traces between t'_2 and t, the condition is satisfied, and we obtain that t φ_top”. Now, consider φ_top”' ≡ (u u) ( t) . Intuitively, this statement should be satisfied, as there are indeed closest worlds that satisfy the consequent, namely t_2'. However, since the semantics of the operator quantifies over all worlds satisfying the antecedent and requires a smaller one that satisfies the consequent for each, the existence of t_1' means that t φ_top”'. 
Example <ref> shows that on preorders the semantics of is too weak and that of too strong to account for the intended meaning of the operators. Intuitively, `Would' should express that in the set of closest possible worlds, all worlds satisfy the consequent. However, in preorders there may exist multiple incomparable classes of worlds on different chains along the similarity relation. The focal question is how to quantify over these classes, existentially such that one class is enough (like in the naive extension) or universally such that in all classes all models have to satisfy the consequent. Figure <ref> abstractly illustrates the problem with the `Would' counterfactual in Subfigure <ref> and the problem with the `Might' counterfactual in Subfigure <ref>. The areas colored in the dark colors represent the complement of the set characterized by an antecedent that satisfies the corresponding counterfactual in this universe and similarity relation, where the worlds colored in teal satisfy ψ. For example W φ_Wouldψ if the worlds satisfying ¬φ_Would are the ones included in the dark brown area in Subfigure <ref>. Here we can see that there are paths out of this area that lead to closest φ_Would-worlds that do not satisfy ψ, which clashes with the intended semantics of `If φ_Would were true, then ψ would be true as well.' What we instead want to capture is something like the bright brown area: No matter where we leave this area along the similarity relation, we always end up in a ψ-world. We capture this intention with the `Universal Would' counterfactual defined in the following, which requires all closest φ-worlds on any chain in the similarity relation to satisfy ψ. A world W satisfies φψ iff either of the following holds: ∀ W_1 W_1 φ, or ∀ W_1 W_1 φ⇒∃ W_2 W_2 ≤_W W_1 W_2 φ∀ W_3 W_3 ≤_W W_2 ⇒ W_3 φ→ψ . To ground our definition, we can show that it is equivalent to Lewis' classic `Would' counterfactual on total orders. If ≤_W is a total order, then `Universal Would' and `Would' are equivalent, i.e., for all pairs of properties φ and ψ we have that φψ≡φψ. We now introduce a similar counterpart to the `Might' counterfactual. We derive the semantics from a duality law similar to the original one: φψ≡¬ (φ¬ψ). A world W satisfies φψ iff: ∃ W_1 W_1 φ∀ W_2 W_2 ≤_W W_1 W_2 φ⇒∃ W_3 W_3 ≤_W W_2 W_3 φψ . Since we derived the `Existential Might' counterfactual from a similar duality law as Lewis, we can deduce from Proposition <ref> that our `Existential Might' and the original `Might' are also equivalent for total similarity relations. If ≤_W is a total order, then `Existential Might' and `Might' are equivalent, i.e., for all pairs of properties φ and ψ we have that φψ≡φψ. Subfigure <ref> illustrates how this definition captures the intended meaning of `Might' counterfactuals on non-total similarity relations. To illustrate the problem with Lewis' original semantics for `Might': The complement of some satisfying antecedent φ_Might for a classic `Might' counterfactual is colored in dark violet. For satisfaction it is required that no matter where we leave this area, for any world there exists an equally close world satisfying ψ. This means for example, that we can include the worlds furthest to the west and north-west in φ_Might. This is quite strict, our `Existential Might' counterfactual illustrated in bright violet instead allows the closest worlds on a chain to not satisfy ψ, as long as on some chain there exists a closest world satisfying ψ. 
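The quantifier structure of the four conditionals can be checked mechanically on small finite universes, which also makes the divergence discussed above easy to reproduce. The following Python sketch encodes an abstract universe with the shape of the example: a reference world W and two incomparable worlds, a1 satisfying only the antecedent and b1 satisfying antecedent and consequent. The world names, the encoding of properties as sets of worlds, and the function names are our own assumptions; the four functions transcribe the respective definitions literally.

# Abstract universe: reference world W and two incomparable worlds a1, b1.
U   = {"W", "a1", "b1"}
PHI = {"a1", "b1"}      # extension of the antecedent
PSI = {"b1"}            # extension of the consequent

# The comparative similarity preorder <=_W as a set of pairs (x, y), read as
# "x is at least as close to W as y"; a1 and b1 are incomparable.
LEQ = {("W", "W"), ("a1", "a1"), ("b1", "b1"), ("W", "a1"), ("W", "b1")}
def leq(x, y): return (x, y) in LEQ

def would(phi, psi):                 # Lewis' `Would': vacuous case or threshold world
    if not (U & phi):
        return True
    return any(w1 in phi and
               all(w2 not in phi or w2 in psi for w2 in U if leq(w2, w1))
               for w1 in U)

def might(phi, psi):                 # Lewis' `Might'
    return bool(U & phi) and all(
        any(leq(w2, w1) and w2 in phi and w2 in psi for w2 in U)
        for w1 in U if w1 in phi)

def universal_would(phi, psi):       # `Universal Would': threshold world on every chain
    if not (U & phi):
        return True
    return all(
        any(leq(w2, w1) and w2 in phi and
            all(w3 not in phi or w3 in psi for w3 in U if leq(w3, w2))
            for w2 in U)
        for w1 in U if w1 in phi)

def existential_might(phi, psi):     # `Existential Might': witness on some chain
    return any(
        w1 in phi and all(
            w2 not in phi or
            any(leq(w3, w2) and w3 in phi and w3 in psi for w3 in U)
            for w2 in U if leq(w2, w1))
        for w1 in U)

print(would(PHI, PSI), universal_would(PHI, PSI))       # True  False
print(might(PHI, PSI), existential_might(PHI, PSI))     # False True

The first line of output reproduces the weakness of the naive `Would' extension (it is satisfied although a closest antecedent-world, a1, violates the consequent), while the second line shows that `Might' is too strong on this universe and that `Existential Might' recovers the intended verdict.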
§.§ Minimal Counterfactuals When reasoning about causation, we are most often interested in some notion of minimality to characterize the minimal changes necessary to avoid a given effect <cit.>. From a counterfactual point of view, minimality formulates an additional condition on the antecedent φ such that the property defines the largest set possible. The question of whether to term this notion minimality or maximality is a matter of perspective. On the one hand, the criterion maximizes the language of φ, but on the other hand, this in fact minimizes the amount of changes necessary to ensure φ, since more worlds in φ mean more opportunities to move into φ earlier when moving along the similarity relation. Since for causation this abstract criterion is usually called minimality, we adopt the same name here. Given a world W and a counterfactual conditional ⇝ ∈{,,,}, the minimal counterfactual conditional φ⇝__𝑚𝑖𝑛ψ is true iff all of the following holds: W φ⇝ψ (1lefteq:mincf) , and ∄ φ' (φ→φ') (φ' ↛φ) W φ' ⇝ψ (2righteq:mincf) . Minimal counterfactuals ensure that the antecedent does not overspecify the changes necessary to get to the consequent. For instance, consider that both of the following statements hold: t u u ( t), and t u ( t) . While the antecedent in the lower statement is more concise because t already has u at the first position, neither is a minimal antecedent, because minimality additionally ensures that all possible antecedents are included, we have: t ( u) ( t) ( t) . For a short argument, consider why there cannot be a trace in the minimal antecedent that does not satisfy ( u) ( t). Such a trace would have no upwards movement at the second position, and all traces closer to t do neither. Since all traces start at the bottom floor, we know that none of the traces in between satisfy ( t). While it may seem odd that the consequent can be a necessary part of the antecedent like in the above example, we note that it is common for the effect to be in the set of its counterfactual causes <cit.>. We believe there is a quite direct connection between minimal counterfactuals, and conjunctive and disjunctive causes from the causal modeling literature. We expand on this later in Section <ref>. As one can see in Definition <ref>, minimality is a second-order statement that quantifies over properties. Hence, the minimal counterfactual conditionals , , , and are essentially guarded second-order quantifiers. However, one of our main insights is that this second-order quantification inherent to these operators can be eliminated such that for any minimal counterfactual conditional there exists an equisatisfiable formula without quantification over properties. We prove this in the following lemma and will use this in the following section for concrete decidability results. For a minimal counterfactual statement φ⇝__𝑚𝑖𝑛ψ, with ⇝ ∈{,,,}, there exists a parameterized formula φ_𝐹𝑂(W), with W ∈𝒰, that quantifies only over worlds and not properties such that for all worlds W' ∈𝒰: W' φ⇝__𝑚𝑖𝑛ψ iff φ_𝐹𝑂(W') is valid. To illustrate minimal counterfactuals consider the abstract scenario in Figure <ref>. In Subfigure <ref> colored areas correspond to the complements of a satisfying antecedent of a `Universal Would' counterfactual with reference world W. However, this antecedent is not minimal, because the complement of the dashed area is equally part of a `Universal Would' counterfactual satisfied by W. 
The illustration sums up the following intuition: The worlds encountered by leaving the dashed area that satisfy ψ are now overall closer to W than before. The same idea holds for the more minimal antecedent of the `Existential Might' counterfactual illustrated with the dashed area in Subfigure <ref>. However, since for the `Existential Might' counterfactual it is enough to find just one closest antecedent-world satisfying ψ, we can actually remove all worlds except W, because there is a ψ-world directly adjacent to it. § TEMPORAL COUNTERFACTUALS We now leverage the semantic insights garnered on counterfactuals in the previous section to design a logic for expressing notions of counterfactual causality in the temporal domain. For this, it is not enough to solely evaluate the truth values of counterfactual conditionals or even their minimal versions, since causality commonly places additional conditions for a causal relationship to hold. For instance, cause and effect may have to be satisfied in the reference world <cit.>. The syntax and semantics of our logic are introduced in the following section, where we also establish decidability of satisfiability and trace checking. In Section <ref>, we close by illustrating how our logic can express definitions of causality proposed in previous literature. §.§ QPTL With Counterfactuals We consider a logic that builds Boolean combinations of formulas and the proposed counterfactual conditionals relating statements. Nesting of counterfactuals is not allowed, although may be interesting to explore in future work to study temporally structured interventions. We call our logic , for with counterfactuals, and its formulas are built according to the following grammar: ξ  ψψ|ψψ|ψψ|ψψ|ψψ| ψψ|ψψ|ψψ|ξξ|¬ξ|ψ , where ψ is a formula. The semantics of the counterfactual conditionals are as discussed in the previous section where worlds are now infinite traces and the satisfaction relation is as for , and the semantics of the -formulas are as described in Section <ref>. Now that we have fixed the syntax and semantics of our logic , we show that important decision problems can be automatically decided. Note that previous results on Lewis' original two counterfactuals and related works on causal analysis either consider theories with a finite set of models <cit.>, endorse the problematic limit assumption <cit.>, or both <cit.>. In this sense, our results extend previous work in several orthogonal directions. We start by considering the satisfiability problem of . Satisfiability denotes the problem of deciding whether there exists a trace that satisfies a given formula. Deciding the satisfiability of counterfactuals of course depends on whether the similarity relation can be expressed in a decidable logic. We, therefore, show satisfiability of minimal counterfactuals with respect to the following family of -expressible similarity relations: ≤^ = { R ⊆((2^𝐴𝑃)^ω)^3 | ∃-formula φ_R over 𝐴𝑃_{π_1,π_2,π_3}∀ t_1,t_2,t_3 ∈ (2^𝐴𝑃)^ω R(t_1,t_2,t_3) iff t_1 t_2 t_3 φ_R }, where 𝐴𝑃_P = {a_π| a ∈𝐴𝑃π∈ P} indexes the atomic propositions with trace variables and t_1 t_2 t_3 denotes the disjoint fusion of the three traces: We have for all positions n ∈ℕ that the following holds: (t_1 t_2 t_3)[n] = {a_π_1| a ∈ t_1[n] }∪{a_π_2| a ∈ t_2[n] }∪{a_π_3| a ∈ t_3[n] }. Note that the similarity relations ≤^_t(X) introduced in Example <ref> are subsumed by this family, and also the distance metric considered by Coenen et al. <cit.>. 
We consider three-place relations here because the first place models the W-subscript of the relation ≤_W. Satisfiability of formulas in can then be decided with an idea roughly similar to the reduction of satisfiability to model checking proposed by Rozier and Vardi <cit.>. However, since there is no corresponding trace-based logic that can express the semantics of counterfactual conditionals, we instead encode the problem as a model-checking problem over a model containing all possible traces over our set of atomic propositions 𝐴𝑃. In the end, since model checking is decidable <cit.>, so is the satisfiability of -formulas. This approach is interesting from a theoretical point of view, because satisfiability checking itself is undecidable. We leverage the fact that models of formulas are traces and not sets of traces. While the logic does have relational semantics, these are effectively guarded through the counterfactual conditionals, allowing us to encode into model checking and avoid the undecidable satisfiability problem. For any formula φ, it is decidable to check whether φ is satisfiable when the similarity relation is from the family ≤^ and the universe is defined by a formula. Since our logic contains negation, this also covers the problem of checking validity of a formula in the usual way, as a formula is valid if and only if its negation is not satisfiable. Similarly, because our logic subsumes , we can also model-check -properties on any system that can be expressed in , which covers many practically relevant finite-state systems. This can be done by checking satisfiability of the conjunction of the system formula and the negation of the property formula (and possibly choosing the system as the universe). Lastly, we consider the problem of checking whether some -formula holds on a given trace. Since we need a finite representation of the infinite trace to feasibly compute this truth value, we consider lasso-shaped traces, i.e., traces of the form t = t_0 … t_i(t_i+1… t_j)^ω that ultimately repeat some loop in the infinite ω-part. The proof of the theorem then follows a similar idea as the satisfiability proof, except that we do not search for any satisfying trace and instead fix the corresponding trace variable to the lasso-shaped trace. For a lasso-shaped trace t and a formula φ, it is decidable to check whether t φ when the similarity relation is from the family ≤^ and the universe is defined by a formula. Theorem <ref> and Theorem <ref> show that it is possible to build tools that automatically check whether causal relationships hold in a given finite-state system, or whether causal relationships expressed in our logic are present on a given trace. While an exact complexity analysis is out of scope of this paper, has non-elementary complexity that scales in the number of propositional quantifier alternations. The complexity of our decision procedure for as of now additionally scales in the number of counterfactual conditionals, since these introduce trace quantifier alternations and the complexity of model checking scales with this number. We believe there may be a more efficient encoding, as there is no dependence between the trace variables resulting from different counterfactual conditionals and, hence, quantifiers can be rearranged to avoid alternations. However, the purpose of this work is to explore how expressive our temporal logic with counterfactuals can be while retaining decidability. 
In practice, it may in fact be more feasible to consider fragments of that have a more practical complexity, such as . There exist efficient model checkers <cit.> for its counterpart  <cit.> and, since recently, also for  <cit.>, such that implementing a model checker for or a fragment modulo using our outlined encoding is feasible. §.§ Examples We close by illustrating how can be used to express several notions of counterfactual causality proposed in previous literature. We start with Lewis' account of counterfactual causation to illustrate how a basic, but limited, notion of causation can be expressed solely with counterfactuals. We then outline at the example of Halpern's (non-temporal) actual causality the important role of proper modeling in causal analysis, and lastly we show how previous work of Coenen et al. on extending Halpern's actual causality to temporal properties is subsumed by our work on temporal-counterfactual reasoning with . Lewis' Counterfactual Causation <cit.>. Based on counterfactual conditionals, Lewis introduced a definition of causality in order to capture Hume's informal account that “we may define a cause to be an object followed by another, where, if the first object had not been, the second never had existed” <cit.>. Lewis only draws causal relationships between two events, which are rather informally defined objects, but which in previous literature on traces are commonly interpreted to mean the value of an atomic proposition at a certain position <cit.>. Lewis definition stipulates that an event c is the cause for an event e if the following condition holds: (c e) (¬ c ¬ e). The intuition behind the formula is that either the cause c and e appear together (and hence the left conjunct is also satisfied), and then moving to any closest ¬ c-world is sufficient to avoid the effect e (corresponding to the left conjunct), or, neither cause and effect appear, and then moving to a c-world is sufficient to bring about the effect (the mirrored case). Consequently, this causal relation holds even when cause and effect are not present in a world, and hence may be model checked on a system to infer whether it holds globally. With , we can improve on Lewis' original formulation in several ways. First, we can use the `Universal Would' counterfactual and hence need not assume a total similarity relation. Further, Lewis' original logic lacks facilities for temporal reasoning and, hence, cannot express that a cause is “an object followed by another”. In , it is possible to express that the cause has to preceed the effect, and the remaining temporal requirements, in the following way: (( c e) → (¬ e c)) ( c e) (¬ c ¬ e) . We can deduce causal relationships by checking this formula for validity in a universe of interest. It ensures that if the cause and effect appear on some trace, then, particularly, the cause happens before the effect. While this solves the lacking temporal expressivity in Lewis' logic, it still shares the idiosyncratic features of this definition of causality which have been raised in the literature since. For instance, if a cause has two effects, the earlier effect is also considered a cause for the later effect. These peculiarities cannot be solved through counterfactual reasoning alone and over the years several solutions in the causal modeling literature have emerged, for instance interventions and contingencies. 
In the following paragraph, we show that our framework can emulate these concepts by modeling their mechanics in the universe and the similarity relation. Halpern's Actual Causality <cit.>. We outline a direct correspondence of our approach to Halpern's modified version <cit.> of actual causation <cit.>, for binary variables. The exact definition of this notion of causality is given in Appendix <ref> due to space reasons. The general idea is that the dynamics of the universe are defined by a structural equation f_x for every variable x, which may depend on other variables and a set of external variables which are defined by a context. We assume these to be Boolean functions in the case of binary variables. Halpern restricts the analysis to acyclic dependencies. Based on some model ℳ = (𝒮,ℱ) and context c we may assume an evaluation function U^ℳ_c(x) which tells us for any variable x ∈ V whether it evaluates to 1 or 0. To establish a correspondence to , we construct a universe for a specific context under analysis based on basic equivalences for every variable: x ↔ ((f_x ¬ i_x) c_x) if U^ℳ_c(x) = 1, else x ↔ ((f_x i_x) ¬ c_x). If some f_x depends on external variables, we substitute them by the appropriate Boolean constant depending on the context. Then, 𝒰(ℳ,c) denotes the conjunction of all of these equivalences together with (⋀_v ∈ V¬ v ¬ i_v ¬ c_v), i.e., all traces have only empty sets after the first position. This suffices because actual causation has no particular temporal aspect, so we may effectively model all outcomes of the structural equations on a finite trace prefix of length one. Let t^ℳ_c ∈ℒ(𝒰(ℳ,c)) be the unique trace where for all n > 0: t^ℳ_c[n] = ∅ and for all x ∈ V: {i_x,c_x}∩ t^ℳ_c[0] = ∅, i.e., no interventions or contingencies are present. It may seem problematic that we have to construct the universe based on given observations, but note that even in structural equations, certain equations have to be manipulated for modeling interventions and contingencies. X_0 = x^0_0 … x^k_0 ,…, X_n = x^0_n … x^j_n are actual causes of φ in (ℳ,u⃗), iff t^ℳ_c φ_X φ¬φ_X ¬φ in the universe 𝒰(ℳ,c) with respect to the similarity relation ≤_t({i_x | x ∈ V}) that tracks only the active interventions between traces, where ¬φ_X = (¬ x^0_0 …¬ x^k_0) … (¬ x^0_n …¬ x^j_n) is in Blake canonical form <cit.>. The intuition behind the proof is that the closest traces to t^ℳ_c that satisfy ¬φ_X require negating one of the disjuncts, and hence require flipping the values of all of the variables appearing there. This simulates interventions on these variables. Existential search for contingencies is taken care of by the `Might' counterfactual modality, since all possible contingencies for some intervention form an equivalence class under ≤_t({i_x | x ∈ V}). The minimality of the counterfactual ensures that ¬φ_X cannot be enlarged. This means neither can a conjunct be added to φ_X, and hence indeed all causes are described, nor can a disjunct be removed, and hence all causes are minimal. We give an illustrative example of this encoding in the following. Consider the classic example of determining whether lightning (l) or a dropped match (m) caused a forest fire (f). Assume that only the match was dropped and that this suffices to cause the fire because the equation for the fire is f := l m and the other two variables are determined by outside factors. 
We construct our universe based on the concrete observations f = 1, l = 0, and m=1 and model interventions and contingencies accordingly: 𝒰_𝑓𝑖𝑟𝑒(ℳ,c) = (l ↔ (i_l ¬ c_l)) (m ↔ (¬ i_m c_m)) (f ↔ (((l m) ¬ i_f) c_f)) . Note that the actual causes according to Halpern's definition in this scenario are f = 1, i.e., the effect itself, and m = 1. We have that the formula f m qualifies as a cause for the effect f on the trace t_𝑓𝑖𝑟𝑒 = {f,m}∅^ω describing the above scenario. This is because we have t_𝑓𝑖𝑟𝑒 f m (¬ f ¬ m) ¬ f in 𝒰_𝑓𝑖𝑟𝑒(ℳ,c). All conjunctions appearing in the Blake canonical form in Theorem <ref> are prime implicants. Dubslaff et al. <cit.> have used these before to compute feature causes, which are counterfactual in nature, in configurable software systems. We use prime implicants here to establish an intriguing formal connection between property-based causes and event-based causes in structural equation models. This result is valuable because event-based causes have a restrictive and explicit syntax. The formula of Theorem <ref> allows characterizing the same counterfactual reasoning with more expressive languages for causes and provides a formal basis for property-based extensions of actual causality. We will discuss such an extension by Coenen et al. in the following paragraph. Coenen et al.'s Temporal Causality <cit.>. Coenen et al. extend interventions and contingencies to reactive systems described by Moore automata, and in this way lift Halpern's actual causality to temporal properties on traces. The key idea is that, if the reference trace t is given in a lasso-shape, then the behavior of contingencies can be modeled in a finite-state machine called counterfactual automaton 𝒞_t^𝒯, where 𝒯 is the original Moore automaton. This corresponds to the construction for finite settings in the above paragraph. Interventions need not be modeled explicitly because Coenen et al.'s causality only characterizes causes on the input sequence of the traces and as there are no dependencies between inputs, just changing them outright suffices to model interventions. Therefore, the concrete distance metric used in that work is a modification of ≤_t(ℐ), where ℐ is the set of inputs. The modification is done in order to satisfy the Limit Assumption, but results in a coarse over-approximation of the set of closest traces. For instance, for t = {a}^ω, the closest traces satisfying ¬ a with Coenen et al.'s similarity relation are the whole set ℒ(¬ a), while our work does not require the Limit Assumption to hold and can use the unmodified ≤_t(ℐ), which has an infinite chain of ever closer traces: {}^ω,{a}{}^ω,{a}{a}{}^ω, and so on. It turns out that when we characterize Coenen et al.'s definition of causality in , we see a minor divergence from Halpern's actual causality. We can encode Coenen et al.'s causality in this way: φ is a temporal cause for ψ on t, iff t φψ((¬φ¬ψ) (¬φ¬ψ)) . in the universe defined by the traces of 𝒞_t^𝒯. We include the `Universal Would' counterfactual because we need to make use of its vacuous satisfaction mechanics, i.e., ⊤ may qualify as a temporal cause according to Coenen et al.'s definition by virtue of quantification over an empty set, but will never work as an antecedent in , since there needs to be a trace satisfying it. We can use the compositional nature of to emulate this by using , because it is, except for the vacuous case, a stronger condition, i.e., whenever ¬φ¬ψ holds on a trace t in a universe and a ¬φ-trace exists, also ¬φ¬ψ holds on trace t. 
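The encoding of actual causality can also be explored by brute force on the forest-fire scenario above, since the universe 𝒰_fire(ℳ,c) has only finitely many worlds. The following Python sketch is our own illustration and makes two assumptions explicit: the equivalences are read as l ↔ (i_l ∧ ¬ c_l), m ↔ (¬ i_m ∨ c_m) and f ↔ (((l ∨ m) ∧ ¬ i_f) ∨ c_f), following the general construction, and only the counterfactual core of the cause f ∧ m is checked, namely that some closest world falsifying it also avoids the fire; the minimality condition of the theorem is not checked here.

from itertools import product

VARS  = ["l", "m", "f"]                         # lightning, match, fire
FLAGS = [f"i_{v}" for v in VARS] + [f"c_{v}" for v in VARS]
NAMES = VARS + FLAGS

def in_universe(w):
    # Equivalences of U_fire for the observation l=0, m=1, f=1: interventions
    # i_x can flip a variable, contingencies c_x can hold it at its observed value.
    return (w["l"] == (w["i_l"] and not w["c_l"]) and
            w["m"] == ((not w["i_m"]) or w["c_m"]) and
            w["f"] == (((w["l"] or w["m"]) and not w["i_f"]) or w["c_f"]))

UNIVERSE = [w for bits in product([False, True], repeat=len(NAMES))
            for w in [dict(zip(NAMES, bits))] if in_universe(w)]

def interventions(w):
    # The similarity relation <=_t({i_x}) only counts active interventions.
    return {v for v in VARS if w[f"i_{v}"]}

def closest(worlds):
    # Worlds whose intervention set is minimal with respect to set inclusion.
    return [w for w in worlds
            if not any(interventions(o) < interventions(w) for o in worlds)]

not_cause = [w for w in UNIVERSE if not (w["f"] and w["m"])]   # worlds falsifying f ∧ m
witnesses = [w for w in closest(not_cause) if not w["f"]]      # ... that also avoid the fire

print(len(UNIVERSE))                                           # 64 worlds in U_fire
print({tuple(sorted(interventions(w))) for w in closest(not_cause)})  # minimal interventions
print(bool(witnesses))                                         # True

Intervening on m (not dropping the match) or on f (suppressing the fire directly) are exactly the minimal ways to falsify f ∧ m in this universe, and both admit a closest world without fire, mirroring the two actual causes m = 1 and f = 1 identified above.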
We believe that the compositionality of a logic like can be a useful tool for comparing different definitions of causality, as demonstrated here between Halpern's actual causality and Coenen et al.'s temporal causality. This process can even be automated with the outlined decision procedures. § CONCLUSION In this paper, we study a fusion of two prominent flavors of modal logic: counterfactual and temporal reasoning. Our theoretical results are a step towards the automatic evaluation of temporal counterfactual conditionals on infinite sequences, such as counterexample traces returned by a model checker or trajectories of a reinforcement learning agent. Further, our extension of Lewis' theory of counterfactual conditionals to non-total similarity relations and our minimal counterfactual operators are relevant to the theory of counterfactuals beyond the temporal reasoning considered in this work. In the future, we plan on using our logic to define system-level and trace-level causation in reactive systems, which correspond to the notions of global and actual causation. An interesting question here is, whether system-level semantics of counterfactuals should be a universal application of the trace semantics, or should counterfactually relate different system models. The latter approach may utilize previous work on system mutations <cit.> studied in the area of coverage <cit.>, which has a tight relationship to counterfactual causality <cit.>. We are also interested in automating the discovery of causal relationships between temporal properties on infinite sequences. This problem can be framed as synthesizing satisfying antecedents of counterfactuals. abbrv § ADDITIONAL PRELIMINARIES §.§ HyperQPTL The syntax of  <cit.> is given as follows, where ψ is a formula. For sake of simplicity, we only present a fragment of without alternation of propositional and trace quantification. This fragment suffices for the results presented in this work. χ∃πχ|∀πχ|ψ . The semantics of are not defined over single traces, but sets of traces. The alphabet of atomic propositions 𝐴𝑃_ = {a_π| a ∈𝐴𝑃π∈𝒱}, therefore, is indexed by trace variables. The satisfaction relation is defined with respect to a time point i, a set of traces  and a trace assignment : → that maps trace variables to traces. To update the trace assignment so that it maps trace variable π to trace t, we write [π↦ t]. We further lift our replacement function t[q ↦ t_q] to sets of traces such that T[q ↦ t_q] = {t[q ↦ t_q] | t ∈ T }. [ ,i _ a_π iff a ∈(π)[i]; ,i _ q iff ∀ t ∈ q ∈ t[i]; ,i _φ iff ,i _φ; ,i _φψ iff ,i _φ and ,i _ψ; ,i _φ iff ,i+1 _φ; ,i _φψ iff ∃ j ≥ i , j _ψ ∀ i ≤ k < j ,k _φ; ,i _∀πφ iff ∀ t ∈[π↦ t],i _φ; ,i _∃πφ iff ∃ t ∈[π↦ t],i _φ; ,i _∀ q φ iff ∀ t_q ∈ (2^{q})^ω,i _[q ↦ t_q] φ; ,i _∃ q φ iff ∃ t_q ∈ (2^{q})^ω,i _[q ↦ t_q] φ; ] A set of traces T satisfies a formula φ iff ∅,0 _T φ, which we also denote by T φ. With , we can relate traces to one another. For instance, the following formula characterizes sets where all pairs of traces (and hence all traces) have the same action sequence: ∀π∀π' (u_π↔ u_π') (d_π↔ d_π') . It is satisfied by the singleton set {t} with trace t from Example 1, but not by {t,t'} where t' = ({b,d})^ω, since, e.g., the actions at the first position of the traces differ. §.§ Actual Causality Actual causality was originally proposed by Halpern and Pearl <cit.>. Several improvements have been appeared since, we consider the latest of these proposed by Halpern <cit.>. 
A causal model ℳ = (𝒮,ℱ) is defined by a signature 𝒮 and set of structural equations ℱ. A signature 𝒮 is a tuple (E,V,R), where E is a set of exogenous variables, V is a set of endogenous variables, and R defines the range of possible values R(Y) for all variables Y ∈ E∪ V. For some context u⃗, the value of an exogenous variable is determined by factors outside of the model, while the value of some endogenous variable X is defined by the associated structural equation f_X ∈ℱ. X⃗ = x⃗ is an actual cause of φ in (ℳ,u⃗), if the following holds. * AC1: (ℳ,u⃗) X⃗ = x⃗ and (ℳ,u⃗) φ, i.e., cause and effect are true in the actual world, and * AC2: There is a set W⃗ of variables in V and a setting x⃗' of the variables in X⃗ such that if (ℳ,u⃗) W⃗ = w⃗, then (ℳ,u⃗) [X⃗←x⃗',W⃗←w⃗ ] ¬φ, and * AC3: X⃗ is minimal, i.e. no subset of X⃗ satisfies AC1 and AC2. In the case of binary variables, we may denote a cause X⃗ = x⃗ with X = x^0 … x^n, where for 0 ≤ i ≤ n a literal x^i is positive if it evaluates to 1 in x⃗, and negative if not. We denote by V(x^i) the variable v ∈ V associated with literal x_i. § PROOFS If ≤_W is a total order, then `Universal Would' and `Would' are equivalent, i.e., for all pairs of properties φ and ψ we have that φψ≡φψ. We prove the equivalence by proving the entailment in each direction separately. So first assume that there exists a world W that satisfies φψ. We show that this world also satisfies φψ. Consider two cases: First let us assume φψ is vacuously satisfied by W such that there exists no world W_1 with W_1 φ. Then W also vacuously satisfies φψ. As the second case, consider the non-vacuous case such that there exists at least one world W' that satisfies φ. Then, φψ is satisfied because, for all worlds W_1 that satisfy φ, there exists an at least equally close world W_2 that also satisfies φ such that all closer worlds W_3 satisfy φ→ψ. Hence, such a W_2 world exists in particular for W', and can serve as a witness for the existential quantifier in the semantics of (Condition 2 of Definition 5). Hence, W φψ. For the entailment in the other direction, assume that there is a world W satisfying φψ. We show that this world also satisfies φψ. Again, if W satisfies φψ vacuously it also satisfies φψ vacuously. We, therefore, assume that there is a world W_1 such that W_1 φ and all at least equally close worlds W_2 satisfy φ→ψ. We now show that this is enough to satisfy Condition 2 of Definition 9. Pick any world W' as an instantiation of the outermost universal quantifier in that condition. If W' φ it trivially satisfies the implication in the quantifiers body, so assume W' φ. Since ≤_W is a total order, we know that either W' ≤_W W_1 or W_1 ≤_W W'. If W' ≤_W W_1, we know that all smaller worlds W”≤_W W' satisfy φ→ψ due to transitivity of ≤_W. Hence, W' is a witness for the existential quantifier in Condition 2 of Definition 9. Lastly, assume W_1 ≤_W W'. Then W_1 is the witness for the existential quantifier. In any case, we have W φψ which closes this direction. counter1 If ≤_W is a total order, then `Existential Might' and `Might' are equivalent, i.e., for all pairs of properties φ and ψ we have that φψ≡φψ. The result follows directly from the two duality laws φψ≡¬ (φ¬ψ) and φψ≡¬ (φ¬ψ), and Proposition 10, through substitution. counter2 For a minimal counterfactual statement φ⇝__𝑚𝑖𝑛ψ, with ⇝ ∈{,,,}, there exists a parameterized formula φ_𝐹𝑂(W), with W ∈𝒰, that quantifies only over worlds and not properties such that for all worlds W' ∈𝒰: W' φ⇝__𝑚𝑖𝑛ψ iff φ_𝐹𝑂(W') is valid. 
The high-level idea of the proof is that, iff φ is not the minimal antecedent and there, therefore, exists a formula φ' characterizing a superset of the worlds characterized by φ, then there exists a set of worlds S that can be added to φ without changing the truth value of the counterfactual conditional. Such worlds fall into two categories: they are further away from the reference world W than the closest φ-worlds and can be added without considering whether they satisfy ψ, or they are at least equally close to W than the closest φ-world but also satisfy ψ so they could be added without harm and would constitute new closest φ-worlds. The exact relationships these worlds have to satisfy with respect to the closest φ-worlds differ between the operators, further, a difference is whether the reasoning has to be extended to incomparable worlds on other chains in the similarity relation. Therefore, we split the argumentation for the four minimal operators from now. We mark the world-variables that correspond to the closest φ-worlds of interest with W^c in each, and else use variables W' for W_1 to establish a close correspondence to definitions of the semantics. All other world variables are specific to the minimality reasoning. If not stated otherwise, quantifiers quantify over worlds from the universe 𝒰. We have that W satisfies φψ iff ( (∀ W' W' ψ) (∀ W^h ∈𝒰∀ W' ∈𝒰 W^h φ W' φ)) (∃ W^c W^c φ∀ W” (W”≤_W W^c ⇒ W”φ↔ψ (W”ψ⇒∀ W^i W^i ≰_W W”⇒ W^i φ) ) ( W”≰_W W^c W”ψ (∃ W^n W^n ≤_W W^c W^n ≰_W W” W^n ¬φ) ⇒ ∃ W^p W^p ≤_W W” W^p φ) ) is valid. The first disjunct in Line <ref> encodes the fact that iff ψ is unsatisfiable with respect to the universe, then φ should characterize exactly the complement of the universe. The right disjunct encodes that we can enlarge the set characterized by φ based on the following conditions in relation to the counterfactual world W^c (corresponds to W_1 in Condition 2 of Definition 5): * There is an at least equally close or closer world W” such that ψ holds but this world is not included in φ yet (strengthens the semantic by W”ψ→φ as in Line <ref>), or in W” ψ holds but further away or incomparable worlds are not included in φ but could be (Rest of Line <ref>). * There is a world at least as close as W^c which could take its place in a more minimal antecedent, i.e., W” satisfies ψ, all its at-least-as-close worlds are a proper subset of the ones of W^c (Line <ref>) and are not included in φ (Line <ref>). Next, we have that W satisfies φψ iff ( ∃ W'. W' φ) (∀ W' W' φ⇒∃ W^c . W^c ≤_W W' (W^c φψ) ∀ W”' (W^c ≰_W W”' ⇒ W”' ¬φ→¬ψ) (W^c ≤_W W”' ⇒ W”' φ)) is valid. Here, we have a fairly direct strengthening of the semantics of regular `Might' (Line <ref>): We now additionally require for φ-world W' on an infinite chain not only an at least equally closest world W^c (this time corresponds to W_2 in Condition 2 of Definition 7) that satisfies φψ, we also require that φ includes all smaller ψ-worlds and hence place ¬φ→ψ as a requirement on them. If this did not hold, we could include them in the property. Additionally, we could include any world on a different chain (and which is hence not comparable to W^c) if they are not yet included in φ but satisfy ψ. Hence, such worlds also have to satisfy ¬φ→ψ. Both is covered by the left conjunct in Line <ref>. Additionally, we again require that worlds further away than W^c are included in the property (right disjunct in Line <ref>). If they were not we could include them while still retaining W^c as a witness for all of them. 
Next, we have that W satisfies φψ iff ( (∀ W' W' ψ) (∀ W^h ∈𝒰∀ W' ∈𝒰 W^h φ W' φ)) ( ∃ W^h W^h φ∀ W' W' φ⇒∃ W^c W^c ≤_W W' W^c φ∀ W”' (W”' ≤_W W^c ⇒ W”' φ↔ψ) (W^c ≤_W W”' ⇒ W”' φ) (W”' ≰_W W^c W^c ≰_W W”' ⇒ W”' ¬φ→¬ψ )) is valid. This is now because of a combination from the reasoning of the previous two operators. Like for , we again have that if ψ is unsatisfiable φ has to characterize the complement of the universe (Line <ref>). If this is not the case, then there has to be at least one W^h world satisfying φ (Line <ref>). We have to ensure this because φ may only characterize an effectively empty set if there exists no ψ-world. Further, for any φ-world there exists a φ-world W^c closest to W such that on any closer worlds φ→ψ. Again, like for , we add the inverse direction ψ→φ to ensure that φ cannot be enlarged in this direction (left in Line <ref>). Further, any world further away than W^c has to be included in φ as they can all use W^c as a closest φ-world (right in Line <ref>). Lastly, like for , if some world incomparable to W^c which hence is on a different chain satisfies ψ, it also has to be included in φ which we ensure by requiring ¬φ→¬ψ (Line <ref>). Lastly, we have that W satisfies φψ iff ∃ W' W' φ∀ W” W”≤_W W' W”φ⇒∃ W^c W^c ≤_W W” (W^c φψ) ∀ W^h (W^h ≤_W W^c ⇒ W^h ¬φ→¬ψ )( W^h ≰_W W^c ⇒ W^h φ W^h ψ (∃ W^n W^n ≤_W W^c W^n ≰_W W^h W^n ¬φ) ⇒∃ W^p W^p ≤_W W^h W^p φ) is valid. In Line <ref>, we have the usual semantics of `Existential Might'. We now additionally require that worlds at least as close as the closest φ-worlds W^c that satisfy ψ, which correspond to W_3 in Definition <ref>, are included in φ if they satisfy ψ (left conjunct in Line <ref>). Further, we require all worlds farther away or incomparable to W^c to be included in φ (right in conjunct in Line <ref>). Similar to the reasoning for , we have to ensure that these worlds W^h do not induce a more minimal antecedent by qualifying as a W^c by satisfying ψ (end of Line <ref>) and having a proper subset of at-least-equally-close worlds (Line <ref>). For any formula φ, it is decidable to check whether φ is satisfiable when the similarity relation is from the family ≤^ and the universe is defined by a formula. We sketch how to encode the satisfiability of our -formulas in a model-checking problem over the most general model ℳ that contains all traces over our alphabet 2^𝐴𝑃. Let φ^𝒰 be the formula encoding the universe. The semantics of the non-minimal operators contain quantification over all possible traces over our alphabet 2^𝐴𝑃 and hence over the traces from the model ℳ. In Lemma 14 we showed that the semantics of the minimal operators can similarly be expressed by quantification over traces from ℳ and in φ^𝒰 as well. We now illustrate at the example of how such a formula can be encoded into a -formula φ^_𝐹𝑂(π) in prenex normal form, as even if 𝐴𝑃 = ∅, there exists at least one trace. Let φ_R bet the -formula for the similarity metric R ∈≤^. For some formula φ and trace variable π∈𝒱, φ_π denotes the same formula where all atomic propositions are indexed with trace variable π. Then φ^_𝐹𝑂(π) for some φψ is of the following form: φ^_𝐹𝑂(π) = ∃π_1 ∀π_2 ∃π_3 ∀π_4 φ^𝒰_π_1φ_π_1(φ^𝒰_π_2→(φ_π_2→φ^𝒰_π_3φ_R(π,π_3,π_2) φ_π_3ψ_π_3 (φ^𝒰_π_4→ (¬φ_R(π,π_3,π_4) → (¬φ_π_4→¬ψ_π_4)) (φ_R(π,π_3,π_4) →φ_π_4)))) . We restrict quantification to the universe with conjunctions and implications containing φ^𝒰. Quantification over the complement of the universe uses ¬φ^𝒰 instead. 
For the original -formula φ, we apply this transformation to any counterfactual conditional appearing in it and again transform the result into prenex normal form. This is possible while retaining equisatisfiability because temporal operators appear only on the lowest level, i.e., no quantifier appears in the scope of a temporal operator. We denote the resulting formula with φ^𝑓𝑢𝑙𝑙_𝐹𝑂(π) and put a π-subscript on any top-level formula (i.e., that was not in the body of a counterfactual operator). We can then model check the formula -formula ∃πφ^𝑓𝑢𝑙𝑙_𝐹𝑂(π) on the model ℳ to check whether there exists a trace satisfying our original -formula. Since model checking is decidable, so is the satisfiability checking of . This approach has similarities to the idea of reducing the satisfiability problem of linear-time temporal logic () to model checking of the same logic <cit.>. Note that, crucially, our approach is possible because the models of our logic are traces and not sets of traces as in , which allows an encoding into the decidable model checking problem while avoiding the undecidable problem of satisfiability checking. For a lasso-shaped trace t and a formula φ, it is decidable to check whether t φ when the similarity relation is from the family ≤^ and the universe is defined by a formula. (Sketch) Since the trace t is lasso-shaped, we encode it in a formula φ^t that characterizes the set {t}. We can then use the ideas of the satisfiability proof above to solve the trace checking problem. However, we now model check the formula ∀πφ^t_π→φ^𝑓𝑢𝑙𝑙_𝐹𝑂(π) on the model ℳ, not searching for any trace satisfying the formula but instead fixing the reference trace to t. X_0 = x^0_0 … x^k_0 ,…, X_n = x^0_n … x^j_n are actual causes of φ in (ℳ,u⃗), iff t^ℳ_c φ_X φ¬φ_X ¬φ in the universe 𝒰(ℳ,c) with respect to the similarity relation ≤_t({i_x | x ∈ V}) that tracks only the active interventions between traces, where ¬φ_X = (¬ x^0_0 …¬ x^k_0) … (¬ x^0_n …¬ x^j_n) is in Blake canonical form <cit.>. This is effectively a finite setting modeled on the first position of a trace, such that the Limit Assumption holds and for any formula ¬φ_X without temporal operators, there exists a unique (finite) set of closest traces from t^ℳ_c that satisfy it. Let I = { i_x | x ∈ V} denote the set of intervention variables. We will use the following auxiliary results throughout the proof: * For any actual cause X_0 = x^0_0 … x^k_0 for effect φ, there is a contingency W such that (ℳ,u⃗) [X←¬ x^0_i …¬ x^k_i,W ← w^0_p … w^q_p ] ¬φ, i.e., the intervention that flips all the variables is a witnessing intervention for AC2. With Boolean variables, this follows directly from minimality, as any variable in an intervention that is not negated could be moved to the contingency W instead, yielding a more minimal cause. * There is a direct mapping from interventions that flip all values to intervention variables on traces: By induction over the number of structural equations we can show that for all traces t ∈𝒰(ℳ,c) and variables v ∈ V, we have (ℳ,u⃗) [X← x^0_i … x^k_i,W ← w^0_p … w^q_p ] v where for all x^r_i ∈ X: (ℳ,u⃗)¬ x^r_i, iff v ∈ t[0], where t is the unique trace that has all the contingency variables of W enabled at the first position, i.e., for all 0 ≤ r ≤ q: V(w^r_p) ∈ t[0], and all intervention variables corresponding to literals in X are also enabled: for all 0 ≤ r ≤ k: (x^k_p ↔ V(x^k_p) ∈ t^ℳ_c) → i_V(x^k_p)∈ t[0]. * Let t_𝑐𝑓 be a closest ¬φ_X-trace to t^ℳ_c such that t_𝑐𝑓 (¬ x^0_z …¬ x^j_z) for some conjunction and t_𝑐𝑓¬φ. 
We can use induction over the length of the conjunction to show that for all i ∈ I: i ∈ t_𝑐𝑓 [0] iff there is some 0 ≤ r ≤ z: i_V(x^r_z)= i c_V(x^r_z)∉t_𝑐𝑓 [0], i.e., exactly the intervention variables corresponding to literals in the conjunction are enabled on t_𝑐𝑓, while all corresponding contingency variables are disabled. We now proceed to proof the two directions of the equivalence separately. “⇒”: We know that no intervention or contingency variables are true on t^ℳ_c by construction. With Result <ref> and the fact that X_0,…, X_n are actual causes of φ in (ℳ,u⃗), it follows from AC1 that t^ℳ_c φ_X φ. To show that ¬φ_X ¬φ is satisfied, consider that from Result <ref> and Result <ref>, it follows the closest traces from satisfying ¬φ_X have to differ in all of the values in at least one of the conjuncts of φ_X, w.l.o.g. assume this to be (x^0_0 … x^n_0). With Result <ref> we have (ℳ,u⃗) [X_0←¬ x^0_i …¬ x^k_i,W_0 ← w^0_p … w^q_p ] ¬φ, and with Result <ref> it then follows that there is a trace t such that t ¬φ, and from Result <ref> it follows that t is a closest ¬φ_X-trace. Hence, We have t^ℳ_c ¬φ_X φ. To show that ¬φ_X is also a minimal antecedent, consider what would happen if a trace was added to ℒ(¬φ_X). Such a trace t” has to satisfy φ_X, and hence satisfies one of the literals in each of that formula's conjuncts. That means that the changes between t^ℳ_c and t” are either incomparable to any other closest ¬φ_X-trace, or a proper subset. In the former case, this induces another actual cause, in the latter case, this would mean one of the causes is not minimal. In both cases, we have a contradiction. Note that ¬φ_X is in Blake canonical form because all of the causes are minimal, and hence prime implicants, i.e., no X_v implies another X_w, and this also holds for their negations in ¬φ_X. This closes this direction “⇐”: We first show AC1 holds for all of the causes. We know that (ℳ,u⃗) φ from t^ℳ_c φ and Result <ref>. To show that t^ℳ_c (x^0_i … x^u_i) for all X_i, consider t_𝑐𝑓 as defined in Result <ref>. Since we know from the proof of Result <ref> that on t_𝑐𝑓 exactly the intervention variables corresponding to the literals in the conjunction are set, and intervention variables flip the value of variables with respect to t^ℳ_c, we can deduce that all the literals occurring in the conjunction have the inverted value in t^ℳ_c, which proves AC1 for all of the causes using Result <ref>. AC2 follows directly from t^ℳ_c ¬φ_X φ, Result <ref> which states on some closest trace t_𝑐𝑓¬φ all literals x^0_z … x^j_z are intervened upon, and Result <ref> which relates the intervention and contingency variables w^0_z … w^q_z of t_𝑐𝑓 to the causal model (ℳ,u⃗). Then, we can deduce that (ℳ,u⃗) [X ←¬ x^0_z …¬ x^j_z, W ← w^0_z … w^q_z]¬φ. Lastly, we can show AC3 by contradiction: If one of the causes was not minimal, then Result <ref> and Result <ref> imply there is closer ¬φ-trace, and therefore ¬φ_X is not a minimal antecedent, a contradiction.
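To make the notions exercised in this proof concrete, the following Python sketch (ours, not part of the formal development) brute-forces Halpern's conditions AC1-AC3 on the binary forest-fire model used at the start of this section, where f = l ∨ m and the context fixes l = 0 and m = 1; it recovers exactly the causes m = 1 and f = 1 quoted there. All helper names are illustrative, and the exhaustive search over interventions and contingencies is only feasible because the model is tiny:

from itertools import product, combinations

# Endogenous variables of the forest-fire model; the context sets l = 0, m = 1.
ENDO = ["l", "m", "f"]

def evaluate(context, interventions):
    """Solve the structural equations under a dict of interventions {var: value}."""
    vals = {}
    vals["l"] = interventions.get("l", context["l"])
    vals["m"] = interventions.get("m", context["m"])
    vals["f"] = interventions.get("f", vals["l"] or vals["m"])
    return vals

def is_actual_cause(cause, effect_var, effect_val, context):
    actual = evaluate(context, {})
    # AC1: cause and effect both hold in the actual world.
    if actual[effect_var] != effect_val or any(actual[v] != x for v, x in cause.items()):
        return False
    others = [v for v in ENDO if v not in cause and v != effect_var]
    # AC2: some setting of the cause variables, with a contingency W frozen at its
    # actual values, flips the effect.
    for flip in product([0, 1], repeat=len(cause)):
        new = dict(zip(cause, flip))
        for k in range(len(others) + 1):
            for W in combinations(others, k):
                iv = dict(new)
                iv.update({w: actual[w] for w in W})
                if evaluate(context, iv)[effect_var] != effect_val:
                    # AC3: no strict subset of the cause already qualifies.
                    smaller = [dict(s) for r in range(1, len(cause))
                               for s in combinations(cause.items(), r)]
                    return not any(is_actual_cause(c2, effect_var, effect_val, context)
                                   for c2 in smaller)
    return False

context = {"l": 0, "m": 1}
actual = evaluate(context, {})
for var in ENDO:
    singleton = {var: actual[var]}
    print(singleton, "->", is_actual_cause(singleton, "f", 1, context))
# Only {'m': 1} and {'f': 1} are reported as actual causes of f = 1, matching the text.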
http://arxiv.org/abs/2306.03735v1
20230606145230
A Scotogenic explanation for the 95 GeV excesses
[ "Pablo Escribano", "Victor Martin Lozano", "Avelino Vicente" ]
hep-ph
[ "hep-ph" ]
empty footnote IFIC/23-20 A Scotogenic explanation for the 95 GeV excesses Pablo Escribano^a [mailto:[email protected]@ific.uv.es] , Víctor Martín Lozano^a,b [mailto:[email protected]@ific.uv.es] , Avelino Vicente^a,b [mailto:[email protected]@ific.uv.es] (a) Instituto de Física Corpuscular, CSIC-Universitat de València, 46980 Paterna, Spain (b) Departament de Física Teòrica, Universitat de València, 46100 Burjassot, Spain Several hints of the presence of a new state at about 95 GeV have been observed recently. The CMS and ATLAS collaborations have reported excesses in the diphoton channel at about this diphoton invariant mass with local statistical significances of 2.9 σ and 1.7 σ, respectively. Furthermore, a 2 σ excess in the b b̅ final state was also observed at LEP, again pointing at a similar mass value. We interpret these intriguing hints of new physics in a variant of the Scotogenic model, an economical scenario that induces Majorana neutrino masses at the loop level and includes a viable dark matter candidate. We show that our model can naturally explain the 95 GeV excesses while respecting all experimental constraints and discuss other phenomenological predictions of our scenario. footnote § INTRODUCTION The Standard Model (SM) of particle physics has been an incredibly successful framework for understanding the fundamental particles and forces that make up our Universe. However, it faces significant challenges when it comes to explaining two crucial phenomena: neutrino masses and dark matter (DM). Neutrinos were long thought to be massless, as suggested by the original formulation of the SM. However, experimental evidence has now firmly established that neutrinos do have masses, albeit very small ones. Similarly, the existence of DM, which is inferred from its gravitational effects on visible matter, poses another major challenge and various theoretical extensions of the SM have been proposed to account for it. Explaining the origins and properties of neutrino masses and DM continues to be an active area of research. After the discovery of the Higgs boson, which proved the existence of a scalar field and provided important insights into the mechanism of mass generation, the LHC has continued to search for additional scalar particles. In fact, many Beyond the Standard Model (BSM) scenarios include new scalar states. This is also the case of models addressing the neutrino and DM problems, which typically require extended scalar sectors. If these new states have masses and couplings within the reach of the LHC, their signals may be hidden in the currently existing searches or show up in the near future, and may appear at lower or higher masses than the scalar found at the LHC at a mass of 125 GeV. If lighter states exist, they may be produced given the high energies available at colliders. In fact, different experiments have performed searches for low mass scalars in different channels <cit.>. The diphoton channel plays a crucial role in the search for new scalar particles at the LHC. This final state allows for precise measurements and clean experimental signatures, making it easier to isolate potential signals of new scalar particles amidst background noise. Interestingly, the CMS collaboration has been consistently finding an excess over the SM prediction in this channel at a diphoton invariant mass of ∼ 95 GeV <cit.>. The statistical support for this excess has been reinforced by recent results obtained after the analysis of the full Run 2 dataset <cit.>. 
The excess is maximal for a mass of 95.4 GeV and has a local (global) significance of 2.9 σ (1.3 σ) and can be interpreted as the production via gluon fusion and subsequent decay of a new scalar state, h_95. It can be parametrized numerically in terms of the μ_γγ signal strength, which normalizes the cross section of the process to the analogous cross section for a Higgs-like state H at the same mass. The latest CMS result points to <cit.> μ^ CMS_γγ = σ^ CMS(gg→ h_95→γγ)/σ^ SM(gg→ H→γγ) = 0.33^+0.19_-0.12 . The ATLAS collaboration has also performed searches in the diphoton channel, although with a lower sensitivity. A very mild excess in a mass region compatible with that hinted by CMS was found in their Run 1 analysis <cit.>. Their update including 140 fb^-1 of Run 2 data appeared recently <cit.>. Intriguingly, the statistical significance of the excess increases in the new ATLAS results. This can be attributed to the addition of more statistics as well as to several improvements in the analysis. In particular, the model-dependent analysis presented in this update hints at an excess, curiously at 95.4 GeV too, with a local significance of 1.7 σ. We note that this result is compatible with that of CMS, which ATLAS cannot exclude. The 95 GeV region is particularly interesting due to the existence of other excesses hinting at similar mass values. LEP has reported an excess in bb̅ production at about 95 GeV with a local significance of 2 σ <cit.>. This excess can be interpreted in terms of a new scalar state contributing to the process e^+e^- → Z h_95→ Z bb̅, with a signal strength given by <cit.> μ^ LEP_bb=σ^ LEP(e^+e^-→ Z h_95→ Z bb̅)/σ^ SM(e^+e^-→ Z H→ Z bb̅)=0.117 ± 0.057 . Other searches for light scalars in CMS also gave a small excess in the ditau channel μ_ττ^ CMS=1.2± 0.5 <cit.>. ATLAS has not published any ditau search in this mass region, but has only provided results for scalar masses above 200 GeV <cit.>. The 95 GeV excesses have received some attention recently <cit.>. We interpret them in a variant of the Scotogenic model <cit.>, a well-motivated and economical BSM scenario that incorporates a mechanism for the generation of neutrino masses and provides a testable DM candidate. We thus consider the possibility that these excesses are the first collider hints of a new BSM sector addressing some of the most important open questions in particle physics. The rest of the manuscript is organized as follows. Section <ref> introduces our model, whereas Section <ref> interprets the 95 GeV excesses in terms of a new scalar state in the particle spectrum. The most relevant experimental constraints are discussed in Section <ref> and our numerical results, which prove that our setup can accommodate the excesses, are presented in Section <ref>. Other aspects of our scenario, such as neutrino masses, DM and additional collider signatures, are discussed in Section <ref>. Finally, we summarize our work in Section <ref>. An Appendix is also included with some technical details. § THE MODEL We consider a variant of the Scotogenic model <cit.> that extends the SM particle content with n_N generations of singlet fermions N_n, n = 1, …, n_N, and n_η doublet scalars η_a, a = 1, …, n_η. These fields are assumed to be odd under a new 2 symmetry, under which all the SM states are even. This generalizes the original Scotogenic model <cit.>, which corresponds to n_N=3 and n_η=1. The η_a doublets can be decomposed in terms of their SU(2)_L components as η_a = ( [ η_a^+; η_a^0 ]) . 
In addition, we include a real singlet scalar S. The lepton and scalar particle content of the model is summarized in Tab. <ref>. The Yukawa Lagrangian of the model includes the terms ℒ⊃ y_n a α N_n η_a^†ℓ_L^α - κ_nm S N^c_n N_m - 1/2(M_N)_nnN^c_n N_n + , where n,m=1,…,n_N, a=1,…,n_η and α=1,2,3 are generation indices. The Yukawa coupling y is an n_N × n_η× 3 object while κ and M_N are n_N × n_N symmetric matrices. M_N has been chosen diagonal without loss of generality. Finally, we define η = i σ_2 η^*. The scalar potential of the model can be written as 𝒱 = 𝒱_H + 𝒱_η + 𝒱_S + 𝒱_ mix , with 𝒱_H = m_H^2 H^† H + 1/2λ_1 (H^† H)^2 , 𝒱_η = (m_η^2)_a aη_a^†η_a + 1/2λ_2^a b c d(η_a^†η_b)(η_c^†η_d ) , 𝒱_S = 1/2 m_S^2 S^2 + 1/3μ_S S^3 + 1/4λ_S S^4 , 𝒱_ mix = λ_3^a b(H^† H)(η_a^†η_b)+λ_4^a b(H^†η_a)(η_b^† H) +1/2[λ_5^a b(H^†η_a)(H^†η_b)+ ] + μ_H H^† H S + 1/2λ_HS H^† H S^2 + μ_η^a b η_a^†η_b S + 1/2λ_η S^a b η_a^†η_b S^2 . Here all the indices are η generation indices and then m_η^2, λ_3,4,5, μ_η and λ_η S are n_η× n_η matrices, while λ_2 is an n_η× n_η× n_η× n_η object. We also note that λ_5 must be symmetric whereas λ_3,4, μ_η and λ_η S are Hermitian. Again, m_η^2 will be assumed to be diagonal without any loss of generality. §.§ Symmetry breaking, scalar masses and mixings We will assume that the vacuum of our model is given by ⟨ H^0 ⟩ = v/√(2) , ⟨η^0_a ⟩ = 0 , ⟨ S ⟩ = v_S . The vacuum expectation value (VEV) v breaks the electroweak symmetry in the usual way. In contrast, the 2 Scotogenic parity remains exactly conserved due to ⟨η_a^0 ⟩ = 0. The VEV configuration in Eq. (<ref>) imposes some conditions on the scalar potential parameters due to the minimization equations .∂ V/∂ H|_⟨ H ⟩ = v/√(2) , ⟨ S ⟩ = v_S = m_H^2 v + μ_H v v_S + 1/2λ_1 v^3 + 1/2λ_HS v_S^2 v =0 , .∂ V/∂ S|_⟨ H ⟩ = v/√(2) , ⟨ S ⟩ = v_S = m_S^2 v_S + μ_S v_S^2 + 1/2μ_H v^2 + λ_S v_S^3 + 1/2λ_HS v^2 v_S = 0 . After symmetry breaking, the real component of the neutral H^0 field mixes with the real S field. In the basis ℋ = { S , Re(H^0) } their mass matrix reads ℳ_ℋ^2 = ( m_S^2 + 2 μ_S v_S + 1/2λ_HS v^2 + 3 λ_S v_S^2 μ_H v + λ_HS v v_S μ_H v + λ_HS v v_S m_H^2 + μ_H v_S + 3/2λ_1 v^2 + 1/2λ_HS v_S^2 ) . After application of Eqs. (<ref>) and (<ref>), solved for m_H^2 and m_S^2, this matrix becomes ℳ_ℋ^2 = ( v_S ( μ_S + 2 λ_S v_S ) - μ_H v^2/2 v_S μ_H v + λ_HS v v_S μ_H v + λ_HS v v_S λ_1 v^2 ) . It can be brought to diagonal form as V_ℋ ℳ^2_ℋ V_ℋ^T = ℳ^2_ℋ = diag(m^2_h_1 , m^2_h_2), where h_1 and h_2 are mass eigenstates and V_ℋ = ( cosα sinα - sinα cosα) , with tan 2α = 2 ( ℳ^2_ℋ)_12/( ℳ^2_ℋ)_11-( ℳ^2_ℋ)_22 . This mixing angle between the singlet and doublet scalars plays a central role in the phenomenology of our model, as we will explain in the following Subsection. We focus now on the 2-odd scalars η^+_a and η^0_a. We decompose the neutral components of the η_a doublets as η_a^0 = 1/√(2) (η_R_a + i η_I_a) , and they do not mix if we assume that CP is conserved in the scalar sector. This can be easily achieved if all the parameters in the scalar potential are real. Again, after electroweak symmetry breaking, the n_η× n_η mass matrices are given by ( ℳ_η_R^2 )_ab = (m_η^2)_ab + ( λ_3^ab + λ_4^ab + λ_5^ab) v^2/2 + v_S^2/2λ_η S^ab + μ_η^ab v_S , ( ℳ_η_I^2 )_ab = (m_η^2)_ab + ( λ_3^ab + λ_4^ab - λ_5^ab) v^2/2 + v_S^2/2λ_η S^ab + μ_η^ab v_S , ( ℳ_η^+^2 )_ab = (m_η^2)_ab + λ_3^abv^2/2 + v_S^2/2λ_η S^ab + μ_η^ab v_S . 
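Since the matrices above are easier to digest numerically, the following numpy sketch spells out the CP-even mass matrix element by element, diagonalizes it, and evaluates the η masses for a single generation with real, diagonal couplings. The inputs for λ_3, μ_η and λ_η S v_S anticipate the benchmark values used in the numerical analysis below, while the remaining numbers are placeholder choices of ours, tuned only so that the two CP-even eigenvalues land near 95 and 125 GeV with a small mixing angle; they are not taken from the paper's scans:

import numpy as np

# Placeholder inputs (ours): only lam3, mu_eta and lam_etaS*vS follow the benchmarks
# quoted later in the paper.
v, vS = 246.0, 500.0                          # GeV
lam1, lamS, lamHS = 0.258, 0.10, 0.001
muS, muH = -82.0, -2.0                        # GeV
lam3, lam4, lam5 = 0.60, 0.05, 1e-9
mu_eta, lam_etaS = 1500.0, 1.0                # GeV, dimensionless (lam_etaS*vS = 500 GeV)

# CP-even mass matrix in the basis {S, Re(H^0)}, after using the two tadpole
# equations to eliminate m_H^2 and m_S^2.
M2 = np.array([
    [vS * (muS + 2.0 * lamS * vS) - muH * v**2 / (2.0 * vS), muH * v + lamHS * v * vS],
    [muH * v + lamHS * v * vS,                               lam1 * v**2],
])
m2_h1, m2_h2 = np.linalg.eigvalsh(M2)         # ascending: h1 ~ 95 GeV, h2 ~ 125 GeV
alpha = 0.5 * np.arctan(2.0 * M2[0, 1] / (M2[0, 0] - M2[1, 1]))
print("m_h1, m_h2 [GeV]:", np.sqrt(m2_h1), np.sqrt(m2_h2))
print("sin(alpha):", np.sin(alpha))

# Z2-odd masses: fix the Lagrangian parameter m_eta^2 so the charged state sits at
# 200 GeV, then read off the neutral/charged splitting driven by lam4 (and lam5).
m_etaP = 200.0                                # GeV, target charged-scalar mass
m_eta2 = m_etaP**2 - 0.5 * lam3 * v**2 - 0.5 * lam_etaS * vS**2 - mu_eta * vS
common = m_eta2 + 0.5 * lam_etaS * vS**2 + mu_eta * vS
m_etaR = np.sqrt(common + 0.5 * (lam3 + lam4 + lam5) * v**2)
m_etaI = np.sqrt(common + 0.5 * (lam3 + lam4 - lam5) * v**2)
print("m_etaR, m_etaI, m_eta+ [GeV]:", m_etaR, m_etaI, m_etaP)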
Notice that the mass matrices for the real and imaginary components are the same in the limit in which all the elements of λ_5 vanish. §.§ Scalar couplings The couplings of the h_1 and h_2 scalars to the SM fermions are determined by the α mixing angle which, as discussed below, will be constrained to be small. We note that the singlet S does not have a Yukawa term with the SM fermions. Hence, it can only couple to them via mixing with the SM Higgs H. As a result of this, one of the mass eigenstates (h_1, the mostly-singlet one) couples to the SM fermions proportional to sinα while the other mass eigenstate (h_2, the SM-like one) couples proportional to cosα. The same happens for the coupling to WW and ZZ bosons and the loop coupling to gluons. However, the 1-loop couplings to γγ and γ Z will be affected by the new particle content present in our model, in particular the η doublets. In Fig. <ref> we can see the different contributions to the loop decays h_i →γγ. Similar diagrams can be drawn for the γ Z final state. On the one hand the Higgses will couple to γγ/Z through the SM loops with W bosons and fermions, with the largest contribution among the fermions given by the top quark. On the other hand, the charged η states can also run into the loop and contribute to the decay through the g_h_iηη coupling, given by g_h_iηη = (V_ℋ)_i1 (λ_η S v_S + μ_η) + (V_ℋ)_i2 λ_3 v , or, equivalently, g_h_1ηη = cosα(λ_η S v_S + μ_η) + sinα λ_3 v , g_h_2ηη = - sinα(λ_η S v_S + μ_η) + cosα λ_3 v . §.§ Neutrino masses After symmetry breaking, the n_N × n_N mass matrix of the singlet fermions is given by ℳ_N = M_N + 2 κ v_S . As already explained, one can take the matrix M_N to be diagonal without loss of generality. We will further assume that κ is diagonal too. Then, the singlet fermion masses are simply given by m_N_n = (ℳ_N)_nn = (M_N)_nn + 2 κ_nn v_S. The simultaneous presence of the y and λ_5 couplings and the M_N Majorana mass term (or the κ coupling) leads to explicit lepton number violation. Neutrino masses vanish at tree-level due to the 2 symmetry of the model, that forbids a neutrino Yukawa interaction with the SM Higgs doublet. However, neutrinos acquire non-zero Majorana masses at the 1-loop, as shown in Fig. <ref>. This mechanism is exactly the same as in the original Scotogenic model <cit.>, although our scenario includes a variable number of N and η fields. The general expression for the light neutrinos Majorana mass matrix can be found in <cit.> where it is particularized for specific (n_N,n_η) cases. § INTERPRETATION OF THE 95 GEV EXCESSES In the following we will assume that the lightest 2-even scalar in our model, h_1, has a mass of 95 GeV and is thus identified with the scalar resonance hinted by CMS, ATLAS and LEP precisely this energy scale. Therefore, h_2 is identified with the 125 GeV Higgs discovered at the LHC. In summary, h_1 ≡ h_95 , h_2 ≡ h_125 . We should then study whether our model can accommodate the experimental hints at 95 GeV. In other words, we must determine the regions in the parameter space leading to a h_1 diphoton signal strength in agreement with Eq. (<ref>) that also comply with the existing experimental constraints. We will also explore the possibility to simultaneously explain the other anomalies at 95 GeV, in the b b̅ and ditau channels. 
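For orientation, the short sketch below evaluates these two couplings as a function of sinα for illustrative inputs (λ_3 = 0.6, λ_η S v_S = 500 GeV, μ_η = 1500 GeV; the combination of numbers is our choice). It makes explicit that g_h_2ηη flips sign at tanα = λ_3 v/(λ_η S v_S + μ_η), which is the type of cancellation invoked later when discussing the constraints from 125 GeV Higgs data:

import numpy as np

# Quick evaluation of the h_i-eta-eta couplings given above, for illustrative inputs.
v = 246.0
lam3_v = 0.6 * v                 # doublet-portal piece, in GeV
singlet_piece = 500.0 + 1500.0   # lam_etaS*vS + mu_eta, in GeV

def couplings(sin_alpha):
    cos_alpha = np.sqrt(1.0 - sin_alpha**2)
    g_h1 = cos_alpha * singlet_piece + sin_alpha * lam3_v
    g_h2 = -sin_alpha * singlet_piece + cos_alpha * lam3_v
    return g_h1, g_h2

for s in (0.02, 0.05, 0.0736, 0.10):
    print(s, couplings(s))
# g_h2-eta-eta changes sign near tan(alpha) = lam3*v / (lam_etaS*vS + mu_eta) ~ 0.074
# for these inputs, illustrating how the 125 GeV Higgs constraints can be relaxed.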
The signal strength for the diphoton channel is given in our model by μ_γγ = σ(gg→ h_1)/σ_ SM(gg→ H)× BR(h_1→γγ)/ BR_ SM(H→γγ) = sin^2α BR(h_1→γγ)/ BR_ SM(H→γγ) , where we normalize again to the SM values, the usual suppression by the α mixing angle has been taken into account and BR(h_1→γγ) is the h_1→γγ branching ratio in our model. This is modified with respect to the predicted value for a Higgs-like state with a mass of 95 GeV due to the presence of the η doublets. The decay width of a CP-even scalar to two photons has been studied in great detail <cit.>. With n_η generations of η doublets and assuming diagonal g_h_iηη couplings, it is given by Γ(h_i→γγ) = G_Fα^2 m_h_i^3/128√(2)π^3| ∑_f N_c Q_f^2 g_h_iff A_1/2(τ_f) + g_h_iWW A_1(τ_W) +∑_av/2m^2_η_ag_h_iηη^aa A_0(τ_η)|^2 , where τ_k=m^2_h_i/4m^2_k and the A_i(τ) functions are defined in Appendix <ref>. As we can see, the presence of the η doublets not only modifies the diphoton decay of the mostly-singlet state, but it also affects to the diphoton decay of the SM-like Higgs. As discussed below in Sec. <ref>, this feature constrains the parameter space from the existing measurements of the 125 GeV Higgs. For the b b̅ excess in LEP we must consider the signal strength μ_bb=σ(e^+e^-→ Z h_1)/σ_ SM(e^+e^-→ Z H)× BR(h_1→ bb̅)/ BR_ SM(H→ bb̅)=sin^2α Γ(h_1→ bb̅)/Γ^ tot/Γ_ SM(H→ bb̅)/Γ^ tot_ SM = sin^4α Γ^ tot_ SM/Γ^ tot . In this case, the mixing angle not only suppresses the production cross section, but also the decay width of h_1 → bb̅, which can only take place via singlet-doublet mixing. As we can see from both signal strengths, the main features of this model to explain the signals are, on the one hand, the reduced couplings to fermions and vector bosons given by the mixing of the singlet state with the doublet, that introduce powers of sinα in the observables of interest. This allows (i) to evade the existing limits from LEP and the LHC on light scalars with masses below 125 GeV decaying into SM states and, (ii) to easily accomodate the correct range of values to explain the bb̅ excess at LEP. This occurs easily because a singlet of a mass of 95 GeV with a small admixture with the doublet will predominantly decay into a bb̅ pair. On the other hand, the singlet couples directly to the η doublets. This induces, via loops, the decay into a pair of photons. This feature allows to explain in a natural way the diphoton rate at CMS. We can see that the features for both excesses have different origins, the bb̅ signal comes through the mixing with the doublet state while the diphoton signal is mainly driven by the singlet couplings. In order to accomodate both signals, the interplay between the singlet and doublet components of h_1 must be looked for. Finally, the signal strength for τ^+τ^- is exactly the same as that for bb̅ in Eq. (<ref>). In fact, our model predicts μ_bb=μ_ττ. This obviously precludes our model from explaining the CMS ditau excess in the same region of parameter space that explains the LEP bb̅ excess. In fact, as we will see below, the value of the mixing angle α required to explain the ditau excess is too large, already excluded by Higgs data. Thefore, the CMS ditau excess will be interpreted as an upper limit instead. § CONSTRAINTS The presence of a 95 GeV scalar that could lead to an explanation to the anomalies in the data is subject to different constraints. First of all there are some theoretical constraints that affect the parameters of our model. Such constraints involve mainly the parameters of the scalar potential. 
First of all, we demand all the quartic couplings in the potential to be below √(4π) in order to ensure perturbativity. Furthermore, we demand the potential to be bounded from below to ensure that we have a stable global minimum. This requirement is rather complicated in the presence of many scalar fields so we apply a copositivity requirement as described in the Appendix of Ref. <cit.>. Although being overconstraining, once the potential passes this requirement, it is guaranteed to be bounded from below. Furthermore, several experimental searches are sensitive to the spectrum of our singlet extension of the Scotogenic model. In that sense, the most important searches are the ones provided by colliders. Since the Higgs sector gets modified with respect to the one of the SM, one must ensure that all our predictions are in agreement with the existing collider measurements. We remind the reader that we have adopted a setup characterized by a light scalar around 95 GeV and a SM-like Higgs boson at 125 GeV. In order to take into account the bounds on the 95 GeV scalar we make use of the public code  <cit.>, integrated now in the public code  <cit.>. This code compares the potential signatures of such a scalar against BSM scalar searches performed at the LHC. A point in the parameter space of our model would be excluded if its signal rate for the most sensitive channel excedes the observed experimental limit at the 95% confidence level. Moreover, as a 125 GeV SM-like Higgs boson has been observed at the LHC, we ask the second scalar state to be in agreement with the experimental measurements of its signal rates using the public code  <cit.> that is also part of  <cit.>. This code constructs a χ^2 function using the different data from the measured cross sections at the LHC involving the measured 125 GeV Higgs boson. For that purpose we provide the code with the rescaled effective couplings for the different sensitive channels in our model. Once the χ^2 function is built for a point of the parameter space we compare it with the result of the fit for a 125.09 GeV SM-like Higgs boson with , χ^2_SM,125=152.49, and impose that the difference between the calculated χ^2_125 and χ^2_SM,125 is less than 2σ away from the LHC measurements in order to consider a point as experimentally allowed. Since we perform a two-dimensional analysis of the parameter space, our consideration for a point to be allowed becomes Δχ^2_125 = χ^2_125 - χ^2_SM,125≤ 6.18. Another point to have into consideration is the fact that the presence of an SU(2)_L scalar doublet can induce sizeable contributions to the electroweak precision observables. In particular, the oblique parameters S, T and U are generally affected by the presence of these particles, but these strongly depend on the scalar masses <cit.>. When the CP-even and CP-odd 2-odd neutral states are mostly degenerate, or equivalently when the entries of the λ_5 matrix are small, the T parameter imposes a restrictive bound over the difference in masses between the charged and neutral states, Δ m (η^+,η^0)=|m_η^+-m_η^0|≲ 140 GeV <cit.>. Charged particles are also heavily constrained by different searches at colliders. However, these searches assume specific decay modes. In our case the decay of the charged η^± scalar takes place via electroweak couplings as η^±→η^0 W^±. Since all decay chains must include the DM state, this eventually leads to missing transverse energy and leptons or jets in the final state. 
For that purpose we impose a conservative limit on the charged particles given by the LEP experiment of about m_η^±≳ 100 GeV. We impose this bound even if a detailed analysis could show that the limits might be weaker in some specific configurations, due to the decay modes and mass differences. Such a detailed analysis is out of the scope of this paper, that just aims at showing that our model can accommodate the 95 GeV excesses. The LHC has also performed searches looking for charged particles that decay into a neutral one and different objects <cit.>. Although the current limits on charged particles can reach high values of the mass, they are again strongly dependent on the mass splitting between the charged and neutral states, making the searches almost not sensitive for differences lower than Δ m (η^+,η^0)≲ 60  GeV. Furthermore, there are searches that look for charged particles that decay into neutral states that are close in mass, producing soft objects as final state <cit.>. These searches aim to cover the gap in mass values of the previous analyses for charged particles. Their sensitivity is maximized for mass differences of order Δ m (η^+,η^0)∼ 10 GeV, decreasing for increased values of Δ m (η^+,η^0), until it reaches Δ m (η^+,η^0)∼ 60 GeV, where the searches from Refs. <cit.> are sensitive. For that reason, we take the masses of the η doublet in such a way that fulfil both S, T and U parameters and the collider constraints. This requirement can be achieved naturally in this model according to Eq (<ref>) since Δ m (η^+,η^0) is driven by the couplings λ^ab_4 and λ^ab_5. The first of these matrices has entries tipically smaller than 1, while the second one is usually very small due to its link with neutrino masses. § NUMERICAL RESULTS We now show our numerical results. For the sake of simplicity, we will assume in the following that λ_3, μ_η and λ_η S are proportional to the n_η× n_η identity matrix ℐ_n_η, that is, X = X ℐ_n_η, with X = λ_3, μ_η, λ_η S. Fig. <ref> shows our results for the h_1 decay width and branching ratios into several final states. This figure has been made with the specific choice n_η = 2, fixing also λ_3 = 0.55, λ_η S v_S = 500 GeV and μ_η = 1500 GeV. The left-hand side of this figure shows contours of BR(h_1 →γγ) in the m_η - sinα plane. One can see that BR(h_1 →γγ) decreases with m_η, as expected, and gets enhanced for low values of sinα. In fact, the branching ratio into the diphoton final state can be of order 1 for very low values of sinα. This behavior is also illustrated on the right-hand panel of the figure, which shows the dependence of the different BR(h_1 → XX) on sinα for a fixed m_η = 200 GeV. The enhancement is caused by the strong suppression of all the other channels, which have negligible branching ratios for low values of sinα. This is simply due to the fact that the h_1 ≈ S decay into SM states can only take place via singlet-doublet mixing. It is important to notice that for values of sinα≳ 0.05 the branching ratio to a bb̅ pair becomes predominant favouring the LEP signal as was explained in Sec. <ref>. Fig. <ref> displays different examples that prove that our model can easily fit the CMS diphoton excess. In order to obtain these solutions we have assumed the coupling of the η scalars to the h_1 and h_2 scalars to be λ_η S v_S = 500 GeV and λ_3 = 0.6, respectively, and we also fixed the value of μ_η in the three figures to 500 GeV (purple regions), 1 TeV (yellow regions), and 2 TeV (red regions). 
Then, we found the (m_η,sinα) pairs that can reproduce μ_γγ^ CMS. Furthermore, we also vary the number of η generations in the model and assume degenerate η doublets. The upper left plot of Fig. <ref> represents the solution for only one generation of η doublets. We can see that the m_η range in which our model explains the CMS diphoton excess depends on the values of μ_η. For example, for the lowest value of the coupling, μ_η=500 GeV, the mass of the charged η scalars must be around m_η∼ 100 GeV to compensate the low value of the parameter, whereas for larger values of the coupling the η masses can reach ∼ 180 GeV. In the upper right plot of Fig. <ref> we can see the case of 2 generations of η that have the same mass m_η_1=m_η_2=m_η. With two generations, the diphoton rate increases and, for this reason, the CMS diphoton excess is explained for greater values of the η mass. Something similar happens in the case of 3 generations, shown in the lower pannel of Fig. <ref>. In this case, the m_η value required to accommodate the CMS excess can be as high as 300 GeV when the μ_η couplings are of the order of 2 TeV. It is important to note that an explanation for the CMS diphoton excess can be found for small values of sinα, in the ∼ 0.02 ballpark. This may be surprising at first, since such low values of the singlet-doublet mixing angle strongly suppress the production of h_1 at the LHC. However, the existence of the low α region is due to the abovementioned increase in BR(h_1 →γγ), see Fig. <ref>, which compensates for the reduction in the production cross section. In fact, one can estimate a lower limit on sinα, below which h_1 cannot fit the CMS diphoton signal strength because the required branching ratio into γγ would be larger than 1. The production cross section for a SM Higgs that decays into a pair of photons at √(s)= 13 TeV is approximately σ_H× (H→γγ)∼ 0.125 pb <cit.>. Then, assuming the hypothetical scenario with (h_1 →γγ)→ 1[It is important to note that this limit is just hypothetical. Once it is reached then the singlet cannot be produced in the LHC.] one finds the limit sinα≳ 0.0215 for the central value of μ^ CMS_γγ and the range sinα≳ [0.027-0.017] taking the 1 σ region. This is precisely what determines the low α region observed in Fig. <ref>. Given that some regions of the m_η-sinα plane considered in Fig. <ref> have small sinα values, one may wonder about the decay width of h_1. We note that h_1 must decay promptly for our explanation of the CMS diphoton excess to work. We explore this in Fig. <ref>, which shows contours of cτ_h_1 in the m_η - sinα plane for a variant of our model featuring 2 generations of η doublets. We see that h_1 has a short decay length, well below that regarded as prompt, even for small α angles. Again, the reason is the enhancement of the diphoton decay channel. Let us now consider the LEP bb̅ excess in combination with the previously discussed CMS diphoton excess. One can see in Fig. <ref> the region of the (m_η,sinα) plane where both excesses can be explained. As discussed in Sec. <ref>, the charged η scalars not only affect the h_1 diphoton rate, but also modify the one for h_2, alreadymeasured at the LHC. This implies limits from Higgs data, displayed in this figure by the dark gray area, which is excluded at 95% C.L.. One should notice that the LEP excess can be explained in a region of parameter space that lies on a high value of sinα and is mostly excluded by Higgs data. 
However, there is still a portion of the allowed parameter space where both excesses are explained. As expected, this portion involves lighter charged η scalars when fewer generations are considered. With the specific values chosen in this figure for the λ_Sη v_S, λ_3 and μ_η parameters, the required η masses range from ∼ 130 GeV to ∼ 220 GeV. Finally, we note that the dark gray area in Fig. <ref> depends very strongly on the λ_3 value. This can be easily understood by inspecting Eq. (<ref>). For some values of λ_3, the g_h_2 ηη coupling becomes 𝒪(1) and excludes most of the parameter space due to Higgs data. However, one can choose specific values of λ_3 that induce a cancellation in the g_h_2 ηη coupling and make the constraints from Higgs data less stringent. Our choice λ_3=0.6 is an example of this. Alternatively, since the LEP bb̅ excess is not very significant from a statistical point of view, one can interpret it as an upper limit on the e^+e^-→ Z h_95→ Zbb̅ cross section. In this case, we conclude that the restrictions imposed by the LEP search in this channel are compatible with the areas where our model can explain the diphoton excess in CMS. The case of the CMS ditau excess is similar. The value of sinα that would be required to explain this excess is quite large, above the current limit. The minimum value of the mixing angle in order to explain this signal would be sinα_min∼ 0.7. However, as there is only one search by CMS at still low luminosity and there are no more searches, we consider this excess as still not significant. We can again interpret it as an upper bound on the pp→ h_95→τ^+τ^- cross section. In this case, we conclude again that the region of parameter space where the CMS diphoton excess is explained respects this bound. § DISCUSSION Once shown that our scenario can accommodate the 95 GeV anomalies, let us comment on some other aspects of the model that have been ignored in our previous discussion. This is the case of neutrino oscillation data. As already explained in Sec. <ref>, neutrinos acquire non-zero masses via loops involving the 2-odd states N_n and η_a. Therefore, the resulting neutrino mass matrix depends on their masses, as well as on the λ_5 quartics and the y Yukawa couplings <cit.>. For specific values of n_N and n_η, the y Yukawa couplings can be readily written in terms of the parameters measured in oscillation experiments using a Casas-Ibarra parametrization <cit.> adapted to the Scotogenic model <cit.>. However, we note that the y Yukawa couplings do not play any role in the 95 GeV collider phenomenology. The Scotogenic variant discussed here also contains a DM candidate. In this family of models one usually has two options: fermion (N_1) or scalar (η_1^0) DM. However, in order to accommodate the 95 GeV anomalies we require relatively light η doublets, with masses m_η≲ 300 GeV for n_η≤ 3. [The explanation of the diphoton excess involves the charged η states, not the neutral ones considered here. However, the mass splitting between charged and neutral components is small, since it is controlled by electroweak symmetry breaking effects.] This is too light to accommodate the DM relic density determined by the Planck collaboration <cit.>. In fact, the scalar DM scenario resembles the Inert Doublet Model <cit.>, which is known to fully account for the observed relic density for DM masses in the 500-700 GeV range <cit.>. In contrast, m_η≲ 300 GeV leads to underabundant DM, hence requiring an additional DM component. 
We should also note that we consider more than one generation of η doublets. This variation of the Scotogenic model deserves further investigation, since it may lead to novel possibilities in scenarios with scalar DM. Alternatively, we may consider scenarios with fermion DM. This candidate is known to be potentially problematic due to existing tension between the DM relic density (which requires large y Yukawas) and contraints from lepton flavor violating observables (which require small y Yukawas), see for instance <cit.>. Two interesting scenarios emerge: * m_N_1≪ m_η_1^0. If the DM particle N_1 is much lighter than the η states (for instance, m_N_1∼ 100 GeV and m_η_1∼ 300 GeV), the y Yukawa parameters must be fine-tuned to suppress the contributions to μ-e flavor violating processes, such as μ→ e γ, while being compatible with neutrino oscillation data. Although tuned, this scenario is possible. It would be characterized at the LHC by the pair production (due to the 2 symmetry) of η states which subsequently decay into the invisible N_1 and leptons: η_1^0 → N_1 ν and η_1^±→ N_1 ℓ^±. This scenario is constrained by existing searches for sleptons, which would have a very similar phenomenology in both R-parity conserving and violating supersymmetry (see for instance <cit.>). Furthermore, a light N_1 may also contribute to the invisible decay of h_1 (if m_N_1≤ m_h_1/2) and/or h_2 (if m_N_1≤ m_h_2/2). In fact, the h_1 → N_1 N_1 invisible channel may easily dominate the h_1 decay width and preclude an explanation of the 95 GeV excess. * m_N_1≲ m_η_1^0. If the N_1 singlet is almost degenerate with the lightest η states, coannihilations become efficient and the DM relic density is more easily obtained. This enlarges the viable parameter space of the model and leads to novel signatures at the LHC. If the mass splitting Δ m = m_η_1^± - m_N_1 is small enough, the decay η_1^±→ N_1 ℓ^± may involve a long decay length, hence producing charged tracks at the detector. Finally, in parameter points with m_N_1≈ m_h_i/2 the annihilation cross section in the early universe gets enhanced due to resonant effects. In such cases one can achieve the correct DM relic density without invoking large couplings. This is a generic feature that does not affect the previous discussion. § SUMMARY AND CONCLUSION We have shown that a theoretically well-motivated and economical model can accommodate the diphoton excess hinted by CMS and ATLAS at 95 GeV as well as the hint for a b b̅ excess at similar energies by LEP. Our model is a minimal extension of the Scotogenic model and, besides addressing these collider anomalies, also provides a mechanism for the generation of neutrino masses and a testable dark matter candidate. We have allowed for variable numbers of generations of the Scotogenic states N and η and discussed our results for several choices of interest. Two CP-even (and 2-even) scalars: h_1 and h_2. The lightest of these states, h_1, is identified with h_95, the hypothetical scalar that is responsible for the γγ and b b̅ excesses at 95 GeV, while h_2 is the Higgs-like state discovered by the CMS and ATLAS collaborations in 2012. Our numerical analysis shows that the excesses can be accommodated in our model in a large fraction of the parameter space. The viable region is characterized by sizable μ_η trilinear couplings and leads to η scalars with masses below ∼ 300 GeV (∼ 180 GeV) for n_η = 3 (for n_η = 1). 
As expected, a larger number of η generations implies larger contributions to the h_1 diphoton coupling and enlarges the viable parameter space. Our scenario has a rich phenomenology, both at colliders and at low-energy experiments. The nature of the dark matter candidate and the particle spectrum determines the phenomenology at colliders. Depending on the mass differences between the lightest 2-odd fermion and 2-odd scalar, one expects monolepton events including missing energy or charged tracks at the LHC detectors. In addition, the usual lepton flavor violating signatures, common to most low-energy neutrino mass models, are expected too. Therefore, our setup is not only well motivated from a theoretical point of view, but also has interesting phenomenological implications. We conclude with a note of caution. Although the coincidence of several excesses, hinted by independent experiments at the same invariant mass, is highly intriguing, their relatively low statistical significance implies that more data is required to fully assess their relevance. We eagerly look forward to future updates on the 95 GeV excesses. § ACKNOWLEDGMENTS The authors are thankful to Thomas Biekötter and Sven Heinemeyer for useful discussions about the 95 GeV excesses and HiggsTools. This work has been supported by the Spanish grants PID2020-113775GB-I00 (AEI/10.13039/501100011033) and CIPROM/2021/054 (Generalitat Valenciana). VML is funded by grant María Zambrano UP2021-044 (ZA2021-081) funded by Ministerio de Universidades and “European Union - Next Generation EU/PRTR”. AV acknowledges financial support from MINECO through the Ramón y Cajal contract RYC2018-025795-I. § LOOP FUNCTIONS The loop functions involved in the calculation of Γ(h_i→γγ) are given by <cit.> A_0 = -[τ - f(τ)]/τ^2 , A_1/2 = 2[τ + (τ-1) f(τ)]/τ^2 , A_1 = -[2τ^2 + 3τ + 3(2τ-1) f(τ)]/τ^2 , where the function f(τ) is defined piecewise as f(τ) = arcsin^2√(τ) for τ ≤ 1, and f(τ) = -1/4 [ log( (1+√(1-τ^-1))/(1-√(1-τ^-1)) ) - iπ ]^2 for τ > 1.
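These loop functions translate directly into a few lines of code. The sketch below evaluates Γ(h_1→γγ) keeping only the dominant top and W loops among the SM contributions, together with a single charged η, and also reproduces the back-of-the-envelope lower bound on sinα quoted in the numerical analysis. The inputs sinα = 0.05, m_η = 200 GeV, g_h_1ηη = 2000 GeV and BR_SM(H→γγ) ≈ 1.4×10^-3 at 95 GeV are our illustrative assumptions, not values extracted from the paper's scans:

import cmath

# Loop functions of the appendix and Gamma(h1 -> gamma gamma) with top, W and one eta.
GF, ALPHA_EM, V = 1.1663787e-5, 1.0 / 137.036, 246.0   # GeV^-2, -, GeV

def f(tau):
    if tau <= 1.0:
        return cmath.asin(cmath.sqrt(tau)) ** 2
    root = cmath.sqrt(1.0 - 1.0 / tau)
    return -0.25 * (cmath.log((1.0 + root) / (1.0 - root)) - 1j * cmath.pi) ** 2

def A0(tau):  return -(tau - f(tau)) / tau**2
def A12(tau): return 2.0 * (tau + (tau - 1.0) * f(tau)) / tau**2
def A1(tau):  return -(2.0 * tau**2 + 3.0 * tau + 3.0 * (2.0 * tau - 1.0) * f(tau)) / tau**2

def gamma_h1_aa(mh, sin_a, m_eta, g_h1etaeta, n_eta=1, mt=172.5, mw=80.4):
    tau = lambda m: mh**2 / (4.0 * m**2)
    amp = (sin_a * 3.0 * (2.0 / 3.0) ** 2 * A12(tau(mt))   # top loop, N_c Q_t^2
           + sin_a * A1(tau(mw))                           # W loop
           + n_eta * V / (2.0 * m_eta**2) * g_h1etaeta * A0(tau(m_eta)))
    return GF * ALPHA_EM**2 * mh**3 / (128.0 * cmath.sqrt(2) * cmath.pi**3) * abs(amp) ** 2

print("Gamma(h1 -> gamma gamma) [GeV]:", gamma_h1_aa(95.4, 0.05, 200.0, 2000.0).real)

# Rough check of the sin(alpha) lower bound quoted in the text: pushing BR(h1 -> aa)
# to 1 gives mu_gammagamma = sin^2(alpha)/BR_SM(H -> aa). With BR_SM ~ 1.4e-3 at 95 GeV
# (our assumption) and mu_gammagamma = 0.33, one finds sin(alpha) ~ 0.021.
print("sin(alpha)_min ~", (0.33 * 1.4e-3) ** 0.5)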
http://arxiv.org/abs/2306.04440v2
20230607135845
Dual policy as self-model for planning
[ "Jaesung Yoo", "Fernanda de la Torre", "Guangyu Robert Yang" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Planning is a data-efficient decision-making strategy where an agent selects candidate actions by exploring possible future states. To simulate future states when there is a high-dimensional action space, the knowledge of one's decision-making strategy must be used to limit the number of actions to be explored. We refer to the model used to simulate one's decisions as the agent's self-model. While self-models are widely, if implicitly, used in conjunction with world models to plan actions, it remains unclear how self-models should be designed. Inspired by current reinforcement learning approaches and neuroscience, we explore the benefits and limitations of using a distilled policy network as the self-model. In such dual-policy agents, a model-free policy and a distilled policy are used for model-free actions and planned actions, respectively. Our results on an ecologically relevant, parametric environment indicate that a distilled policy network used as the self-model stabilizes training, has faster inference than using the model-free policy, promotes better exploration, and can learn a comprehensive understanding of its own behaviors, at the cost of distilling a new network apart from the model-free policy. § INTRODUCTION Planning is an important decision-making strategy that allows an agent to search for better actions before interacting with the real environment <cit.>. During planning, it is essential to know which potential actions to evaluate when the action space is large, which is often the case in complex environments <cit.>. Even when the space of atomic actions may be small, the space of high-level actions, required for long-horizon planning, is typically much larger. The actions to explore in planning can be derived from an approximation of the agent's own decision-making strategy: which actions the agent might take in a potential future state. We refer to the model that produces candidate actions during planning as a self-model because it approximates the actions that the agent (the self) may take in real states. If the action space is small, planning can get by with randomly sampling actions from the action space, without a self-model. If the action space is large, self-models are necessary to limit the search space of actions <cit.>. While self-models are widely used, mostly implicitly, together with world models <cit.> to plan actions, it is unclear how self-models should be designed. The model-free RL policy, when available, is often used as the self-model, as it offers reasonable candidate actions to explore in possible future states <cit.>. While using the model-free policy as the self-model is one of the simplest methods, we turn to neuroscience for insights on different self-model designs. Neuroscience studies on reinforcement learning and goal-directed behaviors suggest that the dorsolateral striatum (a part of the basal ganglia) is involved in making model-free, habitual actions <cit.>, whereas planned actions for goal-directed behaviors activate the prefrontal cortex <cit.>. This raises the question of what the implications are of having a separate network, other than the model-free policy, that is specialized for higher-level cognitive functions such as planning <cit.>. One previous study designed two separate networks for planned and model-free decision making, but the general implications of having dual policies for model-free actions and planning remain to be explored <cit.>.
In this work, we investigate the possibility of having a self-model policy network that is only used for planning, apart from the model-free policy that learns to select actions that maximize future rewards. We explore the benefits and limitations of two distinct self model designs: Having the self model be (1) the model-free policy and (2) a separate distilled policy network. We refer to the agent that uses the model-free policy for planning as the shared policy agent, and the agent that uses the distilled policy for planning as the dual policy agent. Through an ecologically relevant, predator/prey environment that could elucidate the benefit of planning, we show that dual policy agent stabilizes training, has faster inference, explores the environment more, and learns a more comprehensive understanding of its own behaviors. § HYPOTHESIZED PROS AND CONS OF DIFFERENT SELF-MODEL IMPLEMENTATIONS The concept of self-modeling is not entirely new, with previous studies having explored related, yet distinct ideas. In robotics, an agent's observation encompasses both its internal body states and environmental conditions. To effectively plan interactions with its environment, the agent must predict potential changes to these internal states. Consequently, the agent's predictive ability regarding its body states forms a specialized component of the world model: a model dedicated to predicting observations. This model, which predicts the body states of the agent, has been referred to as the "self model" in recent studies <cit.>. In our study, we explore a distinct concept of the 'self' apart from prediction of one's body states. We define the self-model as a model that generates potential actions for exploration during planning (Figure <ref>). We term it the 'self-model' because it produces actions given an observation, essentially modeling the understanding of one's behavior when simulating future states. Since the self-model receives observations and provides actions, it has been common practice to use the model-free policy as the self-model to sample actions during planning <cit.>. Building upon the neurobiological insight that separate brain regions are responsible for habitual and goal-directed actions <cit.>, we explore the implications of having two distinct policy networks for model-free action and planning. In this dual-network setup, one network learns to predict the agent's actions by observing its past actions, similar to how a world model learns to forecast the future. We refer to this network as the distilled policy network. In contrast, the other network, the model-free policy, aims to predict actions that would most likely lead to rewarding trajectories. This dual policy agent uses its model-free policy for model-free actions, and its distilled policy as the self model for planning (Figure <ref>). This is in contrast to the shared policy agent which contains only the model-free policy and uses it for both action types (Figure <ref>). We hypothesize advantages and limitations for both the shared policy and dual policy agents. For the shared policy agent, the key benefit is that model-free policy could identify rewarding trajectories that lead to more rewarding planned actions, which could enhance overall performance. Additionally, there's no necessity for training an extra network, preventing the introduction of additional computational burden. On the other hand, the dual policy agent potentially offers a set of unique advantages. 
Firstly, having multiple networks may stabilize training for planning, as previous works with multiple networks such as ensemble models <cit.> and double Q networks <cit.> showed improved stability for model-free learning. Secondly, the inference time for planning can be faster if distilled policy is smaller than the model-free policy, as the dual policy allows flexible network design. Another advantage is the potential for better exploration suggested by a distilled policy with higher entropy <cit.>. Lastly, the agent's decision-making system might consist of circuits other than the model-free policy, which override in different conditions such as reflexive behaviors <cit.>. To plan accurately, a holistic understanding of one's decision-making strategy is required, which cannot be achieved solely by the model-free policy. This necessitates a separate model to oversee past behaviors across different states, further supporting the use of a dual policy agent. § EXPERIMENT SETUP §.§ Environment We evaluate our hypotheses in a survival setting where the agent must evade the predator while navigating toward the goal in a bounded box environment, similar to the environment proposed in <cit.>. This survival environment serves as a testbed to assess the advantages of planning over trial-and-error approaches, as the consequences of negative rewards can be catastrophic, such as death <cit.>. The action space of the environment is continuous which necessitates the self model as the search space for actions during planning is large. The environment is parameterized and we test our agents in varying environment settings. The details of the environment design is described in Supplementary Materials. For planning in shared policy agent and dual policy agent, Monte Carlo Tree Search (MCTS) <cit.> is performed. Future states are simulated using the world model o_t+1, r_t+1 = f(o_t, a_t) and the self model a_t, v_t = g(o_t) starting from the current observation, where o_t, r_t, a_t, and v_t refers to observation, reward, action, and value of the observation, respectively. At the initial observation for planning, four action candidates are randomly sampled with one action being the mean action from the action distribution generated from the self model. After the initial action candidates, mean action from the action distribution are always taken to explore possible future states, which results in a sparse tree search. This was to simulate most rewarding trajectories after randomly selecting initial actions for exploration. The MCTS was performed until maximum depth of four. To evaluate different trajectories, advantage of each trajectory was measured using generalized advantage estimation (GAE) <cit.> from the reward of future trajectories created by the world model. For details on planning and its hyperparameters, see Supplementary Materials. §.§ Networks In our setup, we utilize three neural networks: a model-free policy network a_t=π_θ(o_t) and v_t=V^π(o_t), a world model network o_t+1, r_t+1 = f_θ(o_t, a_t), and a distilled policy network a_t=π̃_θ(o_t) and v_t=V^π̃(o_t). The model-free policy network π_θ takes the current environmental observation o_t as input, which includes the spatial coordinates of the agent, the goal, and the predator. From this, it predicts the future direction dx, dy the agent should move towards. It also contains a separate value prediction network V^π that predicts the value v_t of the given observation. 
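Before turning to the remaining networks, the planning procedure described in the Environment subsection (four root candidates from the self-model, mean-action rollouts to depth four, GAE scoring of imagined trajectories) can be made concrete with a short sketch. This is a minimal illustration, not the authors' implementation: the names plan, world_model, and self_model, the interfaces self_model(o) -> (action distribution, value) and world_model(o, a) -> (next observation, reward), and the discount and GAE parameters are assumptions.

import numpy as np

def plan(obs, world_model, self_model, n_candidates=4, depth=4,
         gamma=0.99, lam=0.95):
    # Assumed interfaces (illustrative only):
    #   self_model(o)     -> (action_distribution, value_estimate)
    #   world_model(o, a) -> (next_observation, reward)
    dist, v0 = self_model(obs)
    # One candidate is the mean action; the others are sampled from the
    # self-model's distribution at the root only (sparse tree search).
    candidates = [dist.mean] + [dist.sample() for _ in range(n_candidates - 1)]

    best_action, best_score = None, -np.inf
    for a0 in candidates:
        o, a = obs, a0
        rewards, values = [], [v0]
        for _ in range(depth):               # imagined rollout of depth 4
            o, r = world_model(o, a)         # simulate next state and reward
            dist, v = self_model(o)
            rewards.append(r)
            values.append(v)
            a = dist.mean                    # follow the mean action after the root
        # Generalized advantage estimation over the imagined trajectory.
        gae = 0.0
        for t in reversed(range(depth)):
            delta = rewards[t] + gamma * values[t + 1] - values[t]
            gae = delta + gamma * lam * gae
        if gae > best_score:
            best_score, best_action = gae, a0
    return best_action

In the shared policy agent, self_model would simply be the model-free policy itself, whereas the dual policy agent would pass in the distilled network described below.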
The world model network f_θ receives a history of past observations and the agent's actions, then predicts the next state of the environment and the associated reward. The input to this network includes the coordinates of the agent, goal, and predator from the current and previous timestep, as well as the distances between these entities. We have incorporated observations of two timesteps and distance values for the world model to facilitate learning, as our primary focus lies in the design of self models. The distilled policy network π̃_θ receives the current environmental observation o_t and predicts the action a_t that the agent previously took. Additionally, it predicts the value v_t of the current observation. In contrast to the model-free policy network, which uses separate networks for action and value prediction, the distilled policy network combines these predictions into a single network. This design was selected to examine performance of the distilled policy under harder conditions, as having separate specialized networks for action and value prediction would perform better. This design also makes the distilled policy smaller than the model-free policy network for the same hidden layer configuration, which may benefit inference time for planning. The details of network architecture is described in the Supplementary Materials. The model-free policy is trained using proximal policy optimization (PPO) algorithm <cit.> using experience rollouts every predefined interval (Minimize ℒ_PPO, Eq. <ref>). The advantage Â_t was computed using the GAE. The world model is trained to predict next step observation and reward by receiving current observation and action. MSE Loss ℒ_World model between the o_t+1, r_t+1 and õ_t+1, r̃_t+1 is used to optimize the network, where o_t+1, r_t+1 refers to real experience rollouts, and õ_t+1, r̃_t+1 refers to world model predictions (Eq. <ref>). The distilled policy is trained to predict past history of actions given an observation. To maximize the probability of past actions given the action probability distribution predicted from the distilled policy, the distilled policy minimizes the negative log-likelihood ℒ_Action (Eq. <ref>). The distilled policy also predicts the value v_t of the given state, and is optimized to minimize the knowledge distillation loss ℒ_KD <cit.> given past observation and value (Eq. <ref>). The T refers to the temperature parameter of the soft cross entropy loss. The hyperparameters for training are described in Supplementary Materials. ℒ_PPO = 𝔼[ min( p(a_t|π_θ(o_t))/p(a_t|π_θ_old(o_t))Â_t, clip( p(a_t|π_θ(o_t))/p(a_t|π_θ_old(o_t)) , 1 - ϵ, 1 + ϵ)Â_t) ] ℒ_World model = 𝔼[ ( õ_t+1 - o_t+1)^2 + ( r̃_t+1 - r_t+1)^2 ] ℒ_Action = 𝔼[ - log p(a_t| π̃_̃θ̃(o_t)) ] ℒ_KD = -𝔼[ T^2 ∑_i=1^K V^π(o_t) log V^π̃(o_t) ] § RESULTS §.§ Dual policy agent achieves similar performance to shared policy agent with better stability Three different agents are compared to asses the characteristics of different self model designs (Table <ref>). The simple agent only contains a model-free policy and only employs model-free actions. The simple agent acts as a lower bound to demonstrate the performance benefit of planning. The shared policy agent contains a model-free policy and the world model. The agent and employs both model-free action and planned actions where it uses its model-free policy as the self-model to sample candidate actions during planning. The switch between the two decision making strategies is determined randomly. 
The dual policy agent contains the world model, the model-free policy, and the distilled policy. The agent uses its model-free policy for model-free actions, and its distilled policy as the self-model for sampling actions during planning. We measured the proportion of each outcome type to evaluate the performance of our agents. Figure <ref> reveals that the dual policy agent exhibits similar performance to the shared policy agent, suggesting that the distilled policy sufficiently learns actions that lead to rewarding trajectories. While the mean performance of the dual policy agent and the shared policy agent is similar for all three reward types, the dual policy agent shows a smaller standard deviation, which indicates more stable training than the shared policy agent. The same pattern appears in the evaluation results, where the standard deviation of the dual policy agent was the smallest for all three reward types (Table <ref>). While the dual policy agent outperforms the shared policy agent, the difference was not statistically significant in our experiments (p=0.403, 0.778, and 0.039 for success, death, and timeout, respectively; two-sample t-test). The simple agent showed the lowest performance among the agents, which highlights the benefit of planning. We further report experiment results for our dual policy agent with different self-model designs in the Supplementary Materials. §.§ Dual policy agent has faster planning inference with smaller distilled networks In the process of planning, actions are sampled from the self-model for each future state that is being explored. Therefore, a self-model with faster inference can accelerate the planning process. Given this, we explored the relationship between various architectures of the model-free policy and the distilled policy within the dual policy agent framework, since the number of parameters determines the amount of computation required for inference. We compared three different architecture settings for the model-free policy and the distilled policy (Table <ref>). Our results showed a consistent performance enhancement with smaller distilled policies, irrespective of the size of the model-free policy (Figure <ref>). This suggests that smaller networks are more efficient at distilling the knowledge of the model-free policy. This finding is advantageous, as it implies that the distilled policy can be designed to be smaller than the model-free policy, thereby offering not only faster inference but also superior performance. In this environment, we did not observe any discernible pattern correlating the size of the model-free policy with performance. §.§ Dual policy enhances exploration The model-free policy often finds itself confined to a single rewarding trajectory. Previous research employing a dual policy as the self-model for planning <cit.> suggested that one of the distilled policy's advantages is its facilitation of exploration in real environments, thereby accruing a diverse range of experience rollouts. To evaluate the influence of exploration on performance, we subjected our simple, shared policy, and dual policy agents to environments of varying map sizes. Larger maps inherently demand more exploration, providing a more challenging test of the agents' exploration capabilities. Our results are consistent with the previous study: the dual policy agent outperforms the shared policy agent by an increasingly large margin as the map size increases.
The success proportion of the dual policy agent exceeds that of the shared policy agent during training and evaluation episodes (Figure <ref>, Table <ref>). The timeout proportion of the dual policy agent is also significantly smaller than that of the shared policy agent, which indicates that the agent navigates to the goal more precisely. The simple agent again showed the lowest performance among the agents, which highlights the benefit of planning. §.§ Dual policy learns a comprehensive understanding of the agent's behaviors Agents might possess parts of their action policies that they cannot simulate for planning purposes, such as reflexive and other innate responses to emergency scenarios <cit.>. In biological organisms, these reflexive responses are often genetically hard-wired and rely on circuitry separate from that used for the habitual, learned policy <cit.>. Actions are generated from either the model-free policy or the reflexive responses, depending on whether the observation meets certain conditions such as deadly situations. For such observations, the actions proposed by the model-free policy may not align with the reflexive response, as the former generates rewarding actions while the latter is hard-wired. Therefore, future trajectories simulated using the model-free policy may be inaccurate, as they would not account for the agent's reflexive responses to critical observations. This planning discrepancy could potentially be rectified by employing a distilled policy that learns from past actions, thus gaining a comprehensive understanding of the agent's action policies, including the reflexive behavior. To assess this hypothesis, we investigated the performance of three types of agents: simple, shared policy, and dual policy agents, all of which exhibit reflexive behaviors. We introduced two forms of these reflexive responses: 'flight', where the agent retreats in the opposite direction, and 'freeze', where the agent remains still when the predator comes too close. While the 'flight' response is advantageous, as it aids the agent in avoiding the predator, the 'freeze' response is detrimental in our environment, as it prevents the agent from escaping the predator. This can be seen from the evaluation performance of the simple agent with 'flight' behavior (Table <ref>), with 'freeze' behavior (Table <ref>), and without reflexive behavior from the previous section <ref> (Table <ref>). We incorporated both an advantageous and a detrimental behavior because we expected the dual policy agent's performance to improve only with the detrimental reflexive behavior, given that this behavior deviates more from the model-free policy than the advantageous one does. Our dual policy agent successfully reconciled the action discrepancies between the model-free policy and the reflexive behavior, particularly with the 'freeze' reflexive behavior, where it significantly outperformed the shared policy agent (Figure <ref>, Table <ref>). However, it did not offer a similar performance advantage with the 'flight' reflexive behavior (Figure <ref>, Table <ref>). These results suggest that the dual policy agent enhances performance by learning a more comprehensive action space, which includes both model-free policy actions and non-simulatable reflexive behaviors. § DISCUSSION We explored a range of benefits of employing a distilled policy as a self-model for planning.
First, we found that the dual policy approach with distillation stabilizes training, a finding analogous to the stability often seen when employing ensemble approaches such as double Q-learning <cit.>. Second, we found that the flexible architecture of the dual policy can accelerate planning, a process that inherently requires numerous policy inferences. Moreover, our dual policy framework enhances exploration and effectively handles sub-optimal actions that are not feasible to simulate for planning, thereby increasing robustness. These empirical findings suggest a potential parallel with the neurobiological structures of the brain, which may also harbor separate networks for similar reasons: stabilization of training, speed of planning, enhancement of exploration, and planning with a holistic understanding of one's behavior. Our work thus provides insights not only for reinforcement learning, but also for understanding the principles of cognitive systems. While our work offers promising results, it was validated within a single environment, which limits the generalizability of our findings. Future work could extend this investigation to a variety of environments to solidify the observed benefits and uncover any environment-specific characteristics. Moreover, there are other potential benefits of having a distilled policy as the self-model that need to be explored. For instance, aspects such as temporal abstraction, a key concept in hierarchical reinforcement learning, could be investigated in the context of our dual policy agent. Additionally, the idea of using the distilled policy to retrain the model-free policy when it is impaired could be examined as a means of ensuring stability under lesions. This could offer a robust mechanism for recovery from unexpected disruptions. Beyond the context of single-agent behavior, the dual policy paradigm could also be explored in multi-agent settings in terms of 'theory of mind', that is, understanding how an agent's self-model could help it predict and adapt to the behaviors of other agents. In summary, our findings represent an early step in the study of self-models and their design, highlighting the potential this direction holds. This research was supported by the MOTIE (Ministry of Trade, Industry, and Energy) in Korea, under the Human Resource Development Program for Industrial Innovation (Global) (P0017311) supervised by the Korea Institute for Advancement of Technology (KIAT). The manuscript was revised with the help of chatGPT. None of the experiment code was generated by chatGPT.
http://arxiv.org/abs/2306.03267v1
20230605213555
Mathematical Foundations for Joining Only Knowing and Common Knowledge (Extended Version)
[ "Marcos Cramer", "Samuele Pollaci", "Bart Bogaerts" ]
cs.LO
[ "cs.LO" ]
Common knowledge and only knowing capture two intuitive and natural notions that have proven to be useful in a variety of settings, for example to reason about coordination or agreement between agents, or to analyse the knowledge of knowledge-based agents. While these two epistemic operators have been extensively studied in isolation, the approaches proposed to encode their complex interplay have failed to capture some essential properties of only knowing. We propose a novel solution by defining a notion of μ-biworld for countable ordinals μ, which approximates not only the worlds that an agent deems possible, but also those deemed impossible. This approach allows us to define a multi-agent epistemic logic with common knowledge and only knowing operators, and a three-valued model semantics for it. Moreover, we show that we only really need biworlds of depth at most ω^2+1. Based on this observation, we define a Kripke semantics on a canonical Kripke structure and show that this semantics coincides with the model semantics. Finally, we discuss issues arising when combining negative introspection or truthfulness with only knowing, and show how positive introspection can be integrated into our logic. §.§ Note about Extended Version This is the extended version of the paper “Mathematical Foundations for Joining Only Knowing and Common Knowledge” accepted for KR 2023. Each of the additional lemmas and proofs is included at the relevant point in the text. § INTRODUCTION When developing intelligent agents, it is important that they can reason not just about the world they are placed in, but also about the knowledge or beliefs of other agents in their environment. Consider for instance a traffic situation where two cars meet at a crossing. When the driver of the first car observes that their light is green, the typical action to take would be to keep on driving. The reason for this is two-fold: on the one hand, the driver knows the traffic regulations and the fact that the other car should stop. But on the other hand, the driver also knows (or believes) that the other driver also knows the regulations (and will likely respect them). It is this knowledge about the other driver's knowledge that allows the first driver to conclude that it is safe to continue. The formal study of knowledge and how to reason correctly about it has a long history in knowledge representation, dating back at least to the 1960s <cit.>. This study becomes particularly interesting when, as in our example, multiple agents are involved. Next to the standard knowledge operator K, we are concerned with two epistemic operators, namely common knowledge and only knowing, and their intricate interplay. We say that a statement ϕ is common knowledge among a group G of agents (and denote this C_Gϕ) if each agent in G knows ϕ, and also knows that every agent in G knows ϕ, and knows that everyone in G knows that everyone in G knows ϕ, and so on. This operator is useful, for instance, for reasoning about coordination or agreement between agents. We say that an agent A only knows a statement ϕ (and denote this O_Aϕ) if the agent knows ϕ (denoted K_Aϕ) and moreover everything they know follows from ϕ (so whenever K_Aψ holds, it must be the case that ϕ entails ψ).
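As a small added illustration of this definition (not taken from the paper; we write ⊨ for entailment in the logic): an agent that only knows the atomic proposition p knows every consequence of p and nothing more, for instance

\[
  O_A\,p \models K_A(p \lor q)\ \text{since } p \models p \lor q,
  \qquad
  O_A\,p \models \lnot K_A\,q\ \text{since } p \not\models q .
\]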
This operator is useful for instance when we consider that knowledge-based agents do not know anything except for what follows from their knowledge base and we might want to reason about their knowledge as well. There have been many papers studying these operators in isolation <cit.> and some authors have even studied the combination of the two <cit.>. However, we will argue that there are some essential properties of only knowing that none of these approaches captures. We will start with the good news: it is easy to develop a Kripke semantics for a logic with these two operators: given a Kripke structure K with set of worlds W and an accessibility relations R_A for every agent A, the semantics for C_G and A is given by * K, w C_G ϕ if K,w'ϕ for all w' reachable from w with edges in ⋃_A∈ G R_A, and * K, wAϕ if for every world w'∈ W, K,w'ϕ if and only if (w,w')∈ R_A. Intuitively, the if-part in the definition of the semantics of A states that A knows ϕ (in all worlds A deems possible, ϕ holds), and the only-if-part ensures that the agent doesn't know anything else (all worlds in which ϕ holds are indeed deemed possible by the agent in question). If this is so easy, then where's the catch, one might wonder. Well, the problem lies in the choice of the set W of worlds. The question we tackle is: can we construct a set of worlds W that is rich enough such that, for instance, A⊤ really means that that A only knows tautologies in the language (that agent A “knows nothing”)? In other words, can we construct a canonical Kripke structure for this logic? A naive first attempt at doing so would be as follows. Given a propositional vocabulary , a world w consists of * a (classical propositional) interpretation over (the objective interpretation of w), and * for each agent A∈, a set of worlds A^w. However, the attentive reader might have noticed that this circular definition breaks the basic rules of set theory (a world is defined to consist of, among others, a set of worlds). We are not the first to observe this issue. The most common approach to alleviate it is to approximate the knowledge of agents up to a certain level and defining k+1-worlds as consisting, among others, of a set A^w of k-worlds for each agent A. There are two challenges with this approach. The first is that as soon as we add common knowledge to our language, there is a strong need for these worlds to be infinitely deep. One way to achieve this is to consider an infinite precision-increasing sequence of k-worlds, as is done for instance by <cit.>, or by <cit.>. However, that in itself does not suffice: we show that to evaluate certain formulas there is a need to have worlds that are even deeper than this. The second is that in such approximations, no matter how deep one goes, there is never enough information to conclude that this is “all we know”: it might always be that by making the approximation more precise, more knowledge at some later level comes in. This brings us to the main contribution of this paper: a solution to the above problem. First we will define a notion of μ, where μ can be any countable ordinal (thereby resolving the first challenge), and where the “bi” in biworld stands for the fact that we do not just approximate the set of worlds an agent deems possible, but also the set of worlds an agent deems impossible. 
Intuitively, with each μ+1 w, we will associate a set A^w of μ representing the set of μ of which the agent deems some extension possible, and a set A̅^w of μ of which the agent deems some extension impossible. This immediately allows us to see when all of agent A's knowledge is captured by such a biworld: this is precisely when A^w∩A̅^w is empty. For limit ordinals, the situation is more complex, and this makes the construction of biworlds highly technical and mathematical. Due to space limitations, proofs are not included in this paper, but all propositions and lemmas are carefully proven in a technical report accompanying this paper <cit.>. This transfinite construction of our biworlds is given in Section <ref>; Section <ref> then shows several properties of them, essentially showing that they are well-behaved, in a precise sense. In Section <ref> we define our logic of common knowledge and only knowing as a simple multi-agent epistemic logic extended with the operators A and C_G as described above. We show that the biworlds possess enough information so that the formulas in our logic can be evaluated in them. More specifically, we define a three-valued model semantics for our logic: given a formula ϕ and a biworld w of any depth, we define what ϕ^w is. This can be true, false, or unknown, where the last case represents the fact that the biworld is not sufficiently “deep” to evaluate the formula. Moreover, we show that we do not need arbitrarily deep biworlds: if a biworld has depth at least ω^2+1, then every formula will evaluate in it either to true or to false. Inspired by this observation, we are able to define our canonical Kripke structure: the worlds, are precisely the ω^2+1-biworlds that are completed, which is a technical term to denote the fact that it identifies for every other biworld of any other level whether or not the agent believes it is possible, that it characterizes complete knowledge. The accessibility relations can directly be obtained from the definition of the worlds. We then proceed to show that the semantics obtained from this canonical Kripke-structure actually coincides with the valuation we started from. Finally, this allows us to prove several desirable properties the resulting logic satisfies (see Theorem <ref> for an extensive list), including the following two: * For any ϕ and ψ, if ϕψ, then AϕAψ. * For any ϕ, Aϕ. The first property states precisely that whenever Aϕ holds, all formulas not entailed by ϕ are not known (and in fact we also have that all properties entailed by ϕ are known by A). The second property states that for any formula ϕ, there is a world in which Aϕ holds. In fact, our results are even stronger than this: we show that there is a unique state-of-mind of agent A in which they know precisely ϕ. These two properties of the A operator, while quite simple, are — to the best of our knowledge — not satisfied by any other paper combining common knowledge and only knowing. While developing our worlds, and our logic, we do not enforce any properties that are often associated to knowledge. For instance, our worlds do not guarantee that our agents are truthful or introspective. The main focus of the paper is on how to create a semantic structure that allows defining a rich enough set of worlds. However, once this semantic structure is in place, it is possible to use it to define a logic that satisfies such properties as well. 
To illustrate this, we show in Section <ref> how we can hard-code into the logic the fact that agents are positively introspective, which in fact means that it is common knowledge that all agents satisfy this property. We also discuss the possibility of adding truthfulness and negative introspection, highlighting a problem that arises from combining them with only knowing. We use this problem to uncover a major mistake in previous work on combining only knowing and common knowledge. Finally, in <ref> we discuss related work and explain why previous approaches could not achieve the two properties set out above. We conclude in <ref>. § CONSTRUCTION OF BIWORLDS In this section, we define the concept of biworlds. Intuitively, a biworld consists of (1) an objective interpretation, and (2) for each agent two sets of biworlds and , where the former set contains all the biworlds the agent deems possible, and the latter the biworlds the agent deems impossible. As explained before, using this definition brings us into set-theoretic problems. To resolve this issue, we define the notion of s, for all countable ordinals . Intuitively, a does not describe the complete epistemic state of the different agents, but only their belief about the world up to a certain depth. For some formulas, having a certain depth will suffice to evaluate the formula. For instance, a 1-biworld will suffice to determine whether or not an agent knows p, but might not suffice to determine whether an agent knows that some other agent knows p. Moreover, we will later show that for our logic, the ordinal ω^2+1 suffices, in the sense that every set of formulas is either true or false in every ω^2+1-biworld. While we only need the theory of biworlds up to ω^2+1, we will still develop the theory in more generality for all countable ordinals. From now on, when we say ordinal, we mean countable ordinal. We repeat this assumption only in the proofs in which we require it. If is a successor ordinal +1, then a associates with each agent two sets of -biworlds, where represents the set of s that can be extended (in depth; for some sufficiently large ordinal) to a biworld the agent deems possible, and represents the set of s that can be extended to a biworld the agent deems impossible. Clearly, a natural condition will then be that the union of these two sets is the set of all s. If there is any biworld in the intersection of and , this biworld must be extendible both to a biworld the agent deems possible, and to a biworld the agent deems impossible. If for all agents A, the intersection of and is empty, this means that the information in fully specifies which biworlds the agents deem possible and which ones impossible. Once this is fully specified, there is only one way to extend to a biworld of a greater depth. When there are multiple ways in which a -biworld can be extended to a +1-biworld, then we call an incompleted biworld. Since a biworld in the intersection of and must be extendible (for some sufficiently large depth) both to a biworld the agent deems possible and to a biworld the agent deems impossible, it must be incompleted. We start with the auxiliary notion of a , which approximates the more complex notion of a while leaving out some of the structural conditions that we impose on s. We define s, restrictions of prebiworlds and a precision order on the prebiworlds through simultaneous recursion, but for readability, we split it into Definitions <ref>, <ref> and <ref>. Let be an ordinal. 
We define the set of -prebiworlds over a propositional vocabulary Σ, and a set of agents by transfinite induction: * A 0 is an interpretation over Σ. We define = and ()=0. * where is an interpretation over Σ, and for each ∈, and are sets of s. We define ()= +1 and =. * A w for a limit ordinal is an increasing transfinite sequence (_)_< of prebiworlds, i.e. for each <, _ is an and for each ≤ it must be that __. We define ()= and =_0. For each ordinal , we denote by _p^ the set of s, and we call the integer d(w) the depth of w. If ∈_p^ with a limit ordinal, then for each α<, we denote by ()_α the which is the element in position in the transfinite sequence represented by . The following definition captures what it means to restrict a prebiworld of depth to a prebiworld of smaller depth . Assume is a and ≤. The restriction of to is the defined as follows: * If =0, then =. * If is a limit ordinal, then = ()_< * If ='+1, we distinguish two cases: * If is a limit ordinal, then = ()_. * If is a successor ordinal ='+1, then =, = {''|'∈}, and = {''|'∈}. The precision order on the prebiworlds is based on the notion of restriction: If is a and an with ≤, we say that is less precise than (and denote this ) if =. If for some prebiworlds and , we call a restriction of and an extention of . Now we define the set of incompleted prebiworlds and the set of biworlds by simultaneous transfinite recursion. For better readability, we separate this simultaneous definition into Definitions <ref> and <ref> and explain afterwards why it is a successful definition. A is incompleted if there exists two distinct +1s _1 and _2 such that _1,_2 . A prebiworld is called completed if it is not incompleted. A -biworld is a such that one of the following conditions holds: [(a)] * =0; * ='+1 is a successor ordinal, and for each agent ∈ the following hold: [(1)] * the union ∪ is the set of all '-biworlds; * for each ∈∩, is incompleted; * is a limit ordinal, and _ is an for each <. For each ordinal , the set of s is denoted as ^. We often call a 0 an objective world. Since the definition of incompleted prebiworlds of depth depends on the existence of biworlds of depth +1, an attentive reader may be worried whether the simultaneous definition of incompleted prebiworlds and biworlds is really a successful definition. We therefore now explain how this definition is to be understood. For this Figure <ref> depicts the dependencies between the notions of incompleted prebiworlds and biworlds at levels and +1. Here, a green arrow from one notion to another indicates that the second notion positively dependends on the first, will include more objects when the first notion includes more objects. A red arrow, on the other hand, indicates a negative dependency, so that the second notion will include fewer objects when the first notion includes more objects. For example, the green arrow from biworld at level μ+1 to incompleted at level μ indicates that determining additional biworlds at level μ+1 can lead to determining a prebiworld w at level μ to be incompleted, as one of the newly determined biworlds at level μ+1 may imply there is more than one way of extending w to a +1-biworld. The arrow from biworld at level μ to biworld at level μ+1 is red, because given a μ+1-prebiworld w, determining more biworlds at level μ can show that the union ∪ is not the set of all -biworlds, that w is not a biworld. 
Now the simultaneous definition of incompleted prebiworlds and biworlds is a transfinite recursion over μ, where at each level μ of the induction, the notions of an incompleted μ-prebiworld and a μ+1-biworld are defined simultaneously by choosing the minimal set of incompleted μ-prebiworlds and μ+1-biworlds that satisfy Definitions <ref> and <ref>(b). Such a minimal set exists, because the definitions of incompleted μ-prebiworld and μ+1-biworld only depend positively on each other. Here the negative dependency of μ+1-biworld on μ-biworld is not a problem, because at this stage in the transfinite recursion over μ, the set of μ-biworlds has already been determined: If μ is a successor ordinal μ'+1, it has been determined in the previous step μ' of the transfinite recursion. If μ is a limit ordinal, it has been determined by Definition <ref>(c) and the fact that for every α < μ, the set of α-biworlds has already been determined. If μ=0, the set of μ-biworlds just coincides with the set of μ-prebiworlds by Definition <ref>(a). We consider one of the simplest settings, namely we suppose to have just one agent , and one propositional variable p in the vocabulary Σ. The set ^0 of 0s coincides with the set of all 0s by <ref>, and it is equal to ^0={{p},{∅}}. The set ^1 already counts considerably more elements, eighteen to be precise: nine with {p} as objective world, and other nine with {∅}. This follows from the limitations imposed by Item <ref> in Definition <ref> of biworlds[In the single-agent setting, |^1|=n3^n, where n=|^0|.]. For the sake of conciseness, we omit the explicit description of all the 1s but one: v_1:=({p}, {{p}}, {{p}, {∅}}). Since {p} is the objective world of v_1, p is true in v_1. Moreover, the sets v_1 and v_1 provide some information on the beliefs of in v_1. Recall that v_1 (resp. v_1) is the set of biworlds that have an extension deems possible (resp. impossible). Since {p} is the only biworld in v_1, p is true in any biworld deems possible. As we will see later, this means that knows p. On the contrary, since v_1 contains both {p} and {∅}, p is true in some of the biworlds deems impossible, and false in others. Starting with the objective world v_0:={p} and continuing with the biworld v_1, we can inductively build an ω v:=(v_α)_α<ω as follows v_α:={p} if α=0 (v_0, {v_α'}, ^α') if α=α'+1. Since for all successor ordinals α=α'+1<ω we have v_α∩v_α={v_α'}, in order to prove that v is an ω, it suffices to show that for all α<ω, v_α is incompleted, which can be done by an inductive proof.. We achieve this with a proof by induction: * α=0. The 1s u^1_0:= ({p}, {{p}}, {{∅}}) u^2_0:= ({p}, {{∅}}, {{p}}) are clearly distinct 1s extending v_0. Hence v_0 is incompleted. * α=α'+1. By induction hypothesis, v_α' is incompleted, it has two distinct α extensions u^1_α' and u^2_α'. It is easy to see that u^1_α:= ({p}, {u^1_α'}, ^α∖{u^1_α'}) u^2_α:= ({p}, {u^2_α'}, ^α∖{u^2_α'}) are two distinct α+1s extending v_α. Hence, v_a is incompleted as desired. We conclude that v is a ω. Moreover, v can be shown to beis completed. Let us explain why this holds. Taking into account the second item of Definition <ref> and the incompletedness of each v_α, one can easily see that v has exactly two ω+1s extending it, namely: u^1:=({p}, {v}, ^ω∖{v}), u^2:=({p}, {v}, ^ω). Since u^1∪u^1=^ω and u^1∩u^1=∅, u^1 is an ω+1. However, u^2 is a biworld only if v is incompleted, by <ref> of <ref>. Hence, u^2 is a biworld if and only if v is incompleted. 
Since we construct the concepts of incompleted ω and ω+1 by simultaneous induction by choosing the minimal sets, u^2 is not a biworld and v is not incompleted. We now dive deeper in certain features of (pre)biworlds with limit ordinal depth. Note that for successor ordinals , and for any , each agent is equipped with two sets of prebiworlds. For limit ordinals , however, this is not the case. The following definition aims at retrieving a similar concept for a prebiworld with limit ordinal depth. Given a limit ordinal , a and an agent ∈, we define the following sets := {∈_p^|∀<: ()_∈()_+1} := {∈_p^|∀<: (v)_∈A̅^(w)_+1} It is clear that if in the definition above is a biworld, then and are sets of biworlds. It is important to notice that the sets defined in <ref> do not carry the exact same meaning as the successor-ordinal counterpart: if v∈ for some λ-(pre)biworld , then in , for each approximation v_α (α<λ) of v, the agent deems some extension of v_α as possible. Analogously, if v∈, then in , for each approximation v_α (α<λ) of v, the agent believes some extension of v_α is impossible. § PROPERTIES OF BIWORLDS In this section, we show that the formal definitions stated in Section <ref> behave well and that they indeed correspond to the intuitive ideas introduced above. First, we present two propositions regarding the notion of restriction, which show that the induced relation is a non-strict partial order. Second, we provide some additional insight into the sets defined in <ref>. Finally, we focus on certain properties concerning biworlds. In particular, we will show the following fundamental facts in <ref>: I changed compactenum to compactdesc. Saves us a line. But... I'm not super happy with it so feel free to revert if we have a spare lien Restrictability: the restriction of a biworld is a biworld. Monotonicity of completedness: an extension of a completed biworld is a completed biworld. Completability: all biworlds have a a completed extension. Completedness at successor ordinals: a +1 w is completed if and only if ∩=∅ for all agents . Given Definitions <ref> and <ref>, the first two properties are sensible to have. Completability will be fundamental in Section <ref> and it is clearly desirable: a complete biworld characterizes complete knowledge, providing a full description of the epistemic state of the agents. The last property provides a simple characterization of what it means to be completed for biworlds of successor ordinal depth. While the four properties listed above may seem natural and straightforward, several intermediate results are required to prove them to hold. As anticipated, we start by showing that Definition <ref> defines a non-strict partial order on the set of prebiworlds. Let be a and let ≤≤. Then = and =. We first prove = by induction on . * =0. Then == by Definitions <ref> and <ref>. * ='+1 is a successor ordinal. By induction hypothesis, we have that ''=' for each '-prebiworld '. By Definition <ref>, we conclude this case. * is a limit ordinal. Then =()_<, and we need to show =_. We proceed by induction on . * If =0 or is a successor ordinal, then we conclude by Definition <ref>. * If is a limit ordinal, then =()_<=(_)_<=_, where the equalities hold by Definition <ref>, induction hypothesis on , and Definition <ref>, respectively. Now we show that = by induction on . * =. This is a consequence of the already proven first claim of this proposition. * ='+1 is a successor ordinal and <. We proceed by induction on . * =0. Then =, =, and it is easy to see that =. 
* ='+1 is a successor ordinal. Then ===. Moreover, if ='+1 is a successor ordinal, then for each agent ∈ ={''|'∈} ={'''|'∈} ={''|'∈}=, where the third equality holds by induction hypothesis in . Analogously, we have =. If is a limit ordinal, then =(γ)_γ<=. * is a limit ordinal. Then =(γ)_γ<=(γ)_γ<=, where the second equality holds by the induction hypothesis on . * is a limit ordinal. We can proceed by nested inductions on and , as we did above. The proof is exactly the same as for being a successor ordinal, except for the case where both and are successor ordinals. In this case we have =_=_=, where the second equality holds because __. The relation is a non-strict partial order (a reflexive, antisymmetric and transitive relation). The induced strict order <_p is a well-founded order. Before proceeding to the properties of biworlds, we focus on the sets of <ref>. Analogously to what happens in <ref> of <ref> for biworlds of successor ordinal depth, the union of and is the whole set of biworlds of that depth. Let be a limit ordinal, and be a . For all ∈, we have ∪=^. Let ∈. Clearly, ∪⊆^. To show the other inclusion, let '∈^∖↑. Then there exists '< such that (')_'∉()_'+1. Since and ' are biworlds, we must have (')_'∈()_'+1. By the precision order of limit biworlds, for all <', we get (')_∈()_+1, and for all >' we get (')_∉()_+1. In particular, this implies (')_∈()_+1 for all >'. Hence, '∈ as desired. It is interesting to notice that the second part of <ref> of <ref> does not hold in general for the sets and , their intersection might contain completed biworlds (see <ref>). Nevertheless, this intersection tells us something about the completedness of , as stated in the following proposition. Let be a limit ordinal and a . If ∩=∅ for all ∈, then is completed. Suppose w' w is a λ+1-biworld. Then w = (w)_0. Furthermore,for each < and for each ∈, we have _+1='+1 and _+1='+1. Hence, for each ∈, Aw'⊆ and Aw'⊆. But since ∩=∅ and '∪'=^, the only way this can hold is when Aw' = and Aw' =. This fully determines w', so there cannot be two distinct λ+1-biworlds w',w” w. Consider the completed ω v defined in <ref>. It is easy to see that v={v} and v=^ω. Hence, we get v∪v=^ω and v∩v={v}, which agrees with <ref> and shows that the converse of <ref> does not hold. Moreover, notice that v∩v contains a completed biworld. We present one last result regarding the sets defined in Definition <ref>, that turns out to be fundamental to prove the lemmas leading to <ref>. Let be a limit ordinal, w be a , ∈ be an agent, and < be an ordinal. If ∈(w)_+1, then there exists '∈ such that '. If ∈(w)_+1, then there exists '∈ such that '. Recall that we only consider countable ordinals. We prove the part of the statement with ∈(w)_+1, and the second part for ∈(w)_+1 is analogous. By the precision order of limit biworlds, for each ordered pair (β, γ) of ordinals with <γ< and for each u∈_+1, there exists u'∈_γ+1 such that u' u. So by the axiom of choice, there exists a family of functions f_A^γ W_A^γ→_γ+1 for γ<, where W_A^γ:=⋃_β<γ_β+1, such that for each u∈ W_A^γ, we have f_A^γ(u) u. Since is a countable limit ordinal, we can choose a strictly increasing sequence (ρ_n)_n<ω of ordinals which has limit . W.l.o.g. we assume ρ_0>. Then, we define ṽ∈^ as follows: ṽ_:= v if ≤ f_A^ρ_0(v) if <≤ρ_0 f_A^ρ_n+1(ṽ_ρ_n) if ρ_n<≤ρ_n+1. By the definition of the functions f_A^ρ_i and of restriction of biworlds, it is clear that ṽ∈ and ṽ v. 
The remaining definitions capture the four properties listed at the beginning of this section, but only at limited depth. They are used to inductively prove <ref>. The first property is , whenever we take the restriction of a biworld, the result is also a biworld. A priori, it is only known that this is a prebiworld. Let μ be an ordinal. We say that [μ] (R_) if for every , and every α≤μ, α is again a biworld. The second property is , which states that any extension of a completed biworld is itself completed. Intuitively, we expect this to hold: if one biworld has complete knowledge, refining it can only result in structures that still have complete knowledge. Let μ be an ordinal. We say that [μ] (MoC_) if for each and each ordinal α≤μ, whenever |_α is completed, so is . The third useful property is . It states that any biworld can always be extended to a completed one. This is in line with or view of incompleted biworlds as approixmations of some completed biworlds. Let μ be an ordinal. We say that [μ] (C_) if for every α<μ, and every , there exists a completed ' with '. The fourth and last property is a characterization of completedness for a biworld whose depth is a successor ordinal. Let μ be a successor ordinal. We say that [μ] (CC_) if for each , is completed if and only if for each agent A, A^w∩A̅^w = ∅. We say that biworlds are well-behaved until μ if restrictability, completability and monotonicity of completedness hold at all ordinals smaller than μ and the completedness condition holds at all successor ordinals smaller than μ, where μ denotes the largest limit ordinal smaller or equal to μ. In order to prove that every biworld enjoys the four properties listed at the beginning of this section, we need to prove <ref> below. The following theorem formally states the fundamental properties of biworlds presented at the beginning of this section. The proof of it requires several intermediate technical lemmas, which we omit due to space limitations, whose interplay is sketched in Figure <ref>. Nevertheless, we provide a sketch of the proof. Let be a +1. If ∩=∅ for all ∈, then is completed. By contradiction, suppose there exist +2s _1 and _2, such that _1≠_2 and _1,_2. Then, there exists ∈ such that either _1≠_2 or _1≠_2. W.l.o.g., we can assume that _1≠_2 and that there exists u∈_1 such that u∉_2. Since _2 and u are biworlds, u must be in _2. Then, by definition of restriction, we have that u∈_1+1∩_2+1=∩=∅, a contradiction. Suppose holds at all ordinals up to and including μ. Suppose α≤μ and w is a completed α. Then there is at most one w' w. Moreover, if there is a w' w, w' is completed. Fix α. We prove this lemma by induction over μ. * μ = α. Trivial. * μ = μ'+1 is a successor ordinal greater than α. The induction hypothesis is that there is at most one ' v w, and any such v is completed. Suppose for a contradiction that there are two distinct s w',w” w. Then w'|_μ' and w”|_μ' are s, so by the induction hypothesis w'|_μ' = w”|_μ' and w'|_μ' is complete, which contradicts Definition <ref>. So there is at most one μ w' w. Now assume that there is some μ w' w. Since w is completed and w'|_α = w, the monotonicity of completedness for μ implies that w is completed. * μ is a limit ordinal. By induction hypothesis, for every β<μ with β≥α, there is at most one β w_β w, and any such w_β is completed. Suppose for a contradiction that there are two distinct μ sw', w” w. Then for some ordinal β with α < β < μ, w”|_β≠ w'|_β, contradicting the induction hypothesis (since w”|_β, w'|_β w). 
So there is at most one μ-biworld w. Assume that w' w is such a μ-biworld. Since w'|_α = w and w is completed, the monotonicity of completedness for μ implies that w is completed. If monotonicity of completedness and completability hold at all ordinals strictly smaller than μ, then restrictability holds at μ. We prove this by induction over μ. The induction hypothesis _0 is that restrictability holds at every ordinal β < μ. * μ = 0. Trivial. * μ=μ'+1 is a successor ordinal. Let w be a μ-biworld. We need to show that for every α≤μ, w|_α is a biworld. We prove this by induction over α. The induction hypothesis _1 is that for any α' < α, w|_α' is a biworld. * α = 0. w|_0 is a biworld by Definition <ref>. * α=α'+1 is a successor ordinal. First we show that for each agent ∈, α∩α does not contain completed biworlds. For this, suppose v ∈w|_α∩w|_α. By Definition <ref>, there are worlds w' ∈w and w”∈w such that w'|_α = w”|_α = v. Suppose for a contradiction that v is completed. Since monotonicity of completedness holds for all ordinals ≤μ', Lemma <ref> implies that w' = w” and w' is completed. But w'=”∈w∩w=∅ and ' being completed contradict the fact that is a biworld. What remains to be shown is that w|_α∪w|_α = ^α'. ⊆: Assume v ∈w|_α∪w|_α. Without loss of generality, assume v ∈w|_α. Since μ is a successor ordinal, w|_α = {w'|_α'| w' ∈w}. So there is some w' ∈w such that v = w'|_α'. By Definition <ref>, the fact that w' ∈w implies that w' is a biworld. By induction hypothesis _0, restrictability holds at μ'. Since w' is a μ'-biworld and v = w'|_α', it follows that v is a biworld, as required. ⊇: Assume v ∈^α', v is an α'-biworld. Since completability holds at μ', there is a μ'-biworld v' ≥ v. Since w is a μ-biworld, v' ∈w∪w. Without loss of generality, assume v' ∈w. Then v = V'|_α∈w|_α, as required. * α is a limit ordinal. By induction hypothesis _1, we have that for any α' < α, w|_α' is a biworld. By Definition <ref>, w|_α = (w|_α')_α'<α. So by Definition <ref>, w|_α is a biworld, as required. * μ is a limit ordinal. This case directly follows from Definition <ref>. If completability holds at μ and the completedness condition holds at all successor ordinals smaller than μ, then monotonicity of completedness holds at μ. Fix α. We need to prove that for each μ≥α and each μ-biworld such that |_α is completed, is also completed. We prove this lemma by induction over μ. * μ = α. Trivial. * μ = μ'+1 is a successor ordinal. Let w be a μ such that w|_α is completed. By induction hypothesis, w|_μ' is completed. Since completability holds at μ, there is a completed μ w' w|_μ'. Since w,w' w|_μ' and w|_μ' is completed, w=w'. So w is completed, as required. * μ is a limit ordinal. Let w be a μ-biworld such that w|_α is completed. By induction hypothesis, w|_α+1 is completed. Since the completedness condition holds at α+1, w|_α+1∩w|_α+1 = ∅. Now we can show by a proof by contradiction that A ↑ w ∩A̅↑ w = ∅: If v ∈ A ↑ w ∩A̅↑ w, then v|_α∈w|_α+1∩w|_α+1, a contradiction. Thus by <ref>, w is completed, as required. Let be an ordinal. If restrictability holds at each ≤, then . First, we show that for any < and for each -biworld there exists a completed +1 '. We proceed by induction on . * =0. We take ':=(, (^0)_∈, (∅)_∈). Clearly, ' is a 1 extending . By Lemma <ref>, ' is a completed 1. * ='+1 is a successor ordinal. Fix ∈. Since is a biworld, every biworld in ∩ is incompleted, there exist distinct _1,_2. By the axiom of choice, we can define a function f_A∩→^ by sending each in ∩ to an extension f_A()∈^. 
Let S_A:={f_A()|∈∩}. Notice that, since biworlds in ∩ are incompleted, each ' ∈∩ has an extension in S_A and an extension not in S_A. Then we take ':=(, (()∖ S_A)_∈, ((^∖())∪ S_A)_∈), where () := { u ∈^α|uα'∈}. It is easy to see that ' is an +1, and by Lemma <ref>, ' is completed. It remains to show that '. For each agent ∈, we need to prove two equalities: * '=. Let ∈'. Then, by Defintion <ref>, there exists '∈'⊆() such that '. Hence, =''∈, as desired. Finally, for ⊆', let ∈. By induction hypothesis, there exists an -biworld ' such that '. If ∉∩, then by uniqueness of restriction '∈()∖ S_A. Hence ='∈', as desired. If ∈∩, then there exists ”∈(), such that ” and ”≠ f_A(). Hence, =”'∈', as desired. * '=. Let ∈'. Then, by Defintion <ref>, there exists '∈' such that '. If '∈ S_A, then =''∈, as desired. Otherwise, '∈^∖() and =''∉. Since restrictability holds at level , ='' is a biworld. This together with the facts that is a biworld and that ∉ implies that ∈, as desired. Hence, '⊆. For the other inclusion, let ∈. If ∈, then =f_A()'∈', as desired. If ∉, then by the induction hypothesis there exists '∈^∖() such that '. Hence, =''∈'. * is a limit ordinal. Fix A ∈. By induction hypothesis, for each ∈∩ and for each <, there exists a completed +1 _+1^∗()_. Using this notation, we define some sets of biworlds to construct the desired completed biworld ': S_0^A:= ∖ S^A_1:= {∈∩| [ ∃δ< s.t. for every; s.t. δ < <,; _+1^∗∈_+2 ]}. Now define w' := (,(S^A_0 ∪ S^A_1)_A ∈,(^∖ (S^A_0 ∪ S^A_1))_A ∈). Clearly w' is a completed +1-biworld. In order to show that w w', one needs to prove that for all δ <, (w)_δ w'. For this it is enough to show that for all successor ordinals δ+1 <, (w)_δ+1 w'. In more detail, we need to show that for every δ < and every ∈, ^()_δ+1 ={()_δ|∈ S^_0 ∪ S^_1} and ()_δ+1 ={()_δ|∈^∖ (S^_0 ∪ S^_1)}. So fix δ< and fix ∈. We prove the two required statements one after the other: * ^()_δ+1 ={()_δ|∈ S^_0 ∪ S^_1}. If v ∈ S^A_0∪ S^A_1, then v ∈, so (v)_δ∈ A^(w)_δ+1. Additionally, we need to show that for any u ∈ A^(w)_δ+1, there is an -biworld v u such that v ∈ S^A_0 ∪ S^A_1. So fix u ∈ A^(w)_δ+1. By Proposition <ref>, there exists ũ∈ such that ũ u. If ũ∈ S_0^A or ũ∈ S_1^A, then ũ is the desired extension of u. If ũ∉ S_0^A∪ S_1^A, then ũ∈∩ and for all δ there exists such that δ<< and ũ^∗_+1∉_+2. Then we can apply Proposition <ref> to ũ^∗_+1 to obtain an ũ'ũ^∗_+1 u. Since ũ^∗_+1∉_+2, ũ' is in S_0^A, as desired. * ()_δ+1 ={()_δ|∈^∖ (S^_0 ∪ S^_1)}. By definition of S^_0, for any v ∈^∖ (S^A_0 ∪ S^A_1), we have v ∈, (v)_δ∈A̅^(w)_δ+1. Additionally, we need to show that for any u ∈A̅^(w)_δ+1, there is an v u such that v ∈^∖ (S^A_0 ∪ S^A_1), v ∉ S^A_0 and v ∉ S^A_1. So fix u ∈A̅^(w)_δ+1. We follow a technique similar to the one adopted for case <ref> above. By the precision order of limit biworlds, for each <γ< and for each ∈_+1 there exists '∈_γ+1 such that '. Hence, for each η<, we can define a function f_A̅^η W_A̅^η→_η+1, where W_A̅^η:=⋃_γ<η_γ+1, such that for each ∈ W_A̅^η, f_A̅^η()∈_η+1 and f_A̅^η(). Since is a countable limit ordinal, we can choose a strictly increasing sequence (ρ_n)_n<ω of ordinals which has limit . W.l.o.g. we assume ρ_0>δ. Then, we define ũ∈^ as follows: ũ_:= u if ≤δ f_A̅^ρ_0(u) if δ<≤ρ_0 f_A̅^ρ_n+1(ũ_ρ_n) if ρ_n<≤ρ_n+1. Clearly, ũ∈ and ũ u. Hence, ũ∉ S_0^A. If ũ∉ S_1^A, then ũ is the desired extension of u. If ũ∈ S_1^A, then there exists < such that ũ^∗_+1∈_+2. Notice that, since ũ^∗_+1 is completed, we have ũ^∗_+1∈_+2∖_+2. 
Then we can apply to ũ^∗_+1 a construction analogous to (<ref>), to obtain an ũ'ũ^∗_+1 u. Clearly, ũ' is in ∖, hence not in S_0^A∪ S_1^A, as desired. So far, we have shown that for any < and for each there exists a completed +1 '. We want to show that for any < and any , there exists a completed -biworld '. We prove this by induction on : * ='+1 is a successor ordinal. In this case we include the initial step of the induction, namely '=. Let be an -biworld. By induction hypothesis, there exists a '-biworld ” (for the initial step we have ”:=). By the first part of the proof, there exists a completed -biworld '”. * is a limit ordinal. Let be an . By induction hypothesis, for each ordered pair of ordinals (γ, δ) such that ≤γ<δ< there exists a function f_γ^δ^γ→^δ such that, for each ∈^γ, f_γ^δ() and f_γ^δ() is completed. By the construction of the completed biworld ' in the first part of the proof of this Lemma, not only f_γ^δ() is completed, but also f_γ^δ()∩f_γ^δ()=∅ for any agent ∈. Since is a countable limit ordinal, we can choose a strictly increasing sequence of ordinals (ρ_n)_n<ω which has limit . W.l.o.g. we assume < ρ_0. Then, we define '∈^ as follows: '_:= if ≤ f_^ρ_0() if <≤ρ_0 f_ρ_n^ρ_n+1('_ρ_n) if ρ_n<≤ρ_n+1. Clearly, '. Since f_^ρ_0()∩f_^ρ_0()=∅ for any agent ∈, we have that '∩'=∅ for all ∈. By Proposition <ref>, ' is completed, as desired. If completability holds at +2, then [μ+1]. Let be a +1. By Lemma <ref>, if for each ∈, ∩=∅, then is completed. To prove the other direction needed for the completedness condition, assume there exist _∗∈ and ∈_∗∩_∗. By definition of biworld, must be incompleted, there exist _1, _2 ∈^+1 such that _1, _2. We will show that we have two distinct +2s _1≠_2 such that _1, _2, is not completed. By completability at +2, there exists a completed +2 _1. Note that for i ∈{1,2}, v_i is in precisely one of the two sets _1_∗ and _1_∗. Now we define _2 to be the +2 with _2:=_1, for each ∈∖{_∗}, _2:=_1 and _2:=_1, and the sets _2_∗ and _2_∗ defined as follows: _2_∗:= _1_∗∖{_2} if _1, _2∈_1_∗ _1_∗∪{_2} if _1, _2∈_1_∗ _1_∗∪{_2}∖{_1} if _1 ∈_1_∗ , _2 ∈_1_∗ _1_∗∪{_1}∖{_2} if _1 ∈_1_∗, _2 ∈_1_∗ _2_∗:= _1_∗∪{_2} if _1, _2∈_1_∗ _1_∗∖{_2}) if _1, _2∈_1_∗ _1_∗∪{_1}∖{_2} if _1 ∈_1_∗, _2 ∈_1_∗ _1_∗∪{_2}∖{_1}) if _1 ∈_1_∗, _2 ∈_1_∗ It is easy to see that _2, as _1 and _2 are extensions of ∈_∗∩_∗. Since _1≠_2, is incompleted, as required. For every ordinal μ, biworlds are well-behaved until μ. We prove this by induction over μ: * μ = 0: Trivial. * μ = μ'+1 is a successor ordinal. The induction hypothesis is that biworlds are well-behaved until μ', that restrictability, completability and monotonicity of completedness hold at all ordinals smaller than μ' and the completedness condition holds at all successor ordinals smaller than μ'. Now Lemma <ref> implies that restrictability holds at μ'. This together with Lemma <ref> implies that completability holds at μ'. It now follows from Lemma <ref> that monotonicity of completedness holds at μ'. Note that since μ = μ', the induction hypothesis implies that the completedness condition holds at all successor ordinals smaller than μ. Thus biworlds are well-behaved until μ, as required. * μ is a limit ordinal. The induction hypothesis is that for every α<μ, biworlds are well-behaved until α, that restrictability, completability and monotonicity of completedness hold at all ordinals smaller than α and the completedness condition holds at all successor ordinals smaller than α. 
Since this holds for all α<μ, it directly follows that restrictability, completability and monotonicity of completedness hold at all ordinals smaller than μ. Note that μ=μ. So in order to conclude that biworlds are well-behaved until μ, we still need to show that the completedness condition holds at all successor ordinals smaller than μ. Let β +1 < μ. Now the induction hypothesis with α = β+3 implies that completability holds at β+2. So by Lemma <ref>, the completedness condition holds at β + 1, as required. Now the following theorem directly follows from Lemma <ref>. Let α≤'≤<β be ordinals, let be a , and ' be a '. The following hold: * The restriction is an . * If ' is completed and ', then is completed. * There exists a completed β such that . * If is a successor ordinal, then is completed if and only if for each ∈, ∩=∅. The proof can be done for all the four statements simultaneously by transfinite induction over . We denote by μ the largest limit ordinal smaller or equal to an ordinal μ. We need some intermediate concepts to express the statements up to a certain depth: * (R_) For every , and every α≤μ, α is a biworld. * (MoC_) For each and each α≤μ, if |_α is completed, then is completed. * (C_) For every α<μ, and every u, there exists a completed u' with u' u. * (CC_) For each with a successor, is completed if and only if for each agent A, A^w∩A̅^w = ∅. <ref> provides a visual representation of some of the relations contained in the omitted lemmas. The successor ordinal case can be easily shown using the relations in <ref>, and observing that pl(μ)=pl(μ+1). For the limit ordinal case, we need an additional implication not represented in <ref> for the sake of simplicity, namely C_μ+2CC_μ+1. <ref> directly implies an important corollary: A is completed if and only if there exists exactly one +1 such that . In this case, also is completed. If a is completed, then it has at most one extension at +1, by definition of completedness. By <ref> of <ref> with α=+1, has at least one extension, proving one direction. Conversely, if there is exactly one +1 such that , then cannot be incompleted, by definition. Notice that has to be completed by <ref> of <ref>. § THE LOGIC In this section we define the syntax and semantics of the logic that allows to speak about knowledge, common knowledge and only knowing. More specifically, we first define a three-valued model semantics, where the value of a formula is either true (), false (), or unknown (). We consider two orders on these truth values: the precision order given by and , and the truth order given by . We will write v^-1 for the inverse of the truth value v, defined by ^-1=, ^-1=, and ^-1=. We will show that our three-valued semantics is precision-monotonic, in the sense that more precise biworlds give more precise results. Moreover, we will show that in a biworld of depth at least ω^2, every formula evaluates to either true or false. This fact prompts us to define an alternative two-valued semantics. In more detail, since dealing with biworlds with a limit ordinal depth may be complicated and counter-intuitive, we will move our focus to the smallest (for the sake of simplicity) successor ordinal at which biworlds evaluate all formulas (and all sets of formulas) as true or false, namely ω^2+1. 
In addition, we restrict to completed ω^2+1s: by Item <ref> in Theorem <ref>, they have the intuitive property that A^w and A̅^w are disjoint, which corresponds to the intuition that these two sets represent the biworlds deemed possible and the biworlds deemed impossible by A. This motivates an alternative semantic characterization of our logic through a canonical Kripke structure consisting of completed ω^2+1-biworlds[For any μ≥ω^2, all the results would hold if we considered a canonical Kripke structure consisting of completed . Taking all completed μ for all ordinals μ would also work, but some uniqueness results would be lost, as we would have many worlds representing the same object.], which we show to coincide with the three-valued semantics on the relevant biworlds. Furthermore, we show that is semantically well-behaved. In Section <ref>, we will show that no previously proposed semantic approach leads to a logic that is semantically well-behaved in the way specified in this section. To say that an agent A only knows ϕ can be viewed as a conjunction of the statement that A knows ϕ, denoted Aϕ, and the statement that A knows at most ϕ, denoted M_A ϕ. Given a set G of agents, we write E_G ϕ for the statement that every agent in G knows ϕ, and we write C_G ϕ for the statement that ϕ is common knowledge within the set G of agents. The language extends propositional logic with these modal operators as follows: We define the language by structural induction with the standard recursive rules of propositional logic, augmented with: A(ψ) ∈ if ψ∈ and A∈ M_A(ψ) ∈ if ψ∈ and A∈ E_G(ψ) ∈ if ψ∈ and G⊆ C_G(ψ) ∈ if ψ∈ and G⊆ We use Aϕ as syntactic sugar for Aϕ M_A ϕ. In the introduction, we have already discussed what an intuitive Kripke semantics for Aϕ would be. Adapting these ideas to the representation of Aϕ as Aϕ M_A ϕ, we can easily see that the correct way to define the Kripke semantics for M_A ϕ is as follows: * K, w M_A ϕ if for every world w'∈ W such that K,w'ϕ, we have (w,w')∈ R_A. In order to explain why this is a good semantic characterization of “knowing at most ϕ”, we will sketch a proof that the only way in which M_A ϕ and Aψ can both be true is when ϕ entails ψ: Assume M_Aϕ and Aψ are true in a world w. We want to show that ϕ entails ψ, that ψ is true in every world w' in which ϕ is true. Assume w' is a world in which ϕ is true. But the assumption that M_Aϕ is true in w together with the above definition of the Kripke semantics for M_Aϕ implies that (w,w')∈ R_A. This together with the assumption that Aψ is true in w implies that ψ is true in w', as required. In preparation for the upcoming discussion of a three-valued semantics for , note that if we write (M_A ϕ)^K,w = and (M_A ϕ)^K,w = for K, w M_A ϕ and K, w M_A ϕ respectively, the above characterization of the semantics of M_A ϕ is equivalent to the following: (M_A ϕ)^K,w = _≤_t{(ϕ^K,w')^-1| w'∉ R_A^w} Let us now turn to the three-valued valuation of formulas of . All parts of this definition are precisely what one would expect when applying a Kleene-style three-valued semantic approach to logics with a common knowledge operator, taking into account the above rewording of the Kripke semantics of M_A ϕ. 
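As a concrete reference point for the two-valued clauses just discussed, the following minimal sketch (in Python, over an arbitrary finite Kripke structure; the structure, the tuple encoding of formulas, and all names are our own illustrative choices rather than the canonical structure defined below) spells out the clauses for the knowledge modality and the knowing-at-most modality M_A, including the fact that only knowing abbreviates their conjunction. The formal three-valued valuation is defined next.

# A toy finite Kripke structure: a set of worlds, one accessibility relation
# per agent, and a valuation giving the atoms true at each world.
worlds = {"w0", "w1", "w2"}
R = {"A": {("w0", "w0"), ("w0", "w1")}}
val = {"w0": {"p"}, "w1": {"p"}, "w2": set()}

def sat(w, phi):
    """Two-valued satisfaction; formulas are tuples such as ("K", "A", ("atom", "p"))."""
    op = phi[0]
    if op == "atom":
        return phi[1] in val[w]
    if op == "not":
        return not sat(w, phi[1])
    if op == "and":
        return sat(w, phi[1]) and sat(w, phi[2])
    if op == "K":   # knowledge: phi holds at every world accessible for the agent
        return all(sat(v, phi[2]) for v in worlds if (w, v) in R[phi[1]])
    if op == "M":   # knowing at most: every world at which phi holds is accessible
        return all((w, v) in R[phi[1]] for v in worlds if sat(v, phi[2]))
    raise ValueError(op)

def only_knows(w, a, psi):
    # only knowing psi is syntactic sugar for (knowing psi) and (knowing at most psi)
    return sat(w, ("K", a, psi)) and sat(w, ("M", a, psi))

# At w0, agent A can access exactly the p-worlds w0 and w1, so A only knows p there.
assert only_knows("w0", "A", ("atom", "p"))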
Given a formula ϕ∈ and a w, we define the three-valued valuation function ϕ^w by induction on and the structure of ϕ: P^w = if P∈w and P^w = otherwise (ϕψ)^w =glb_≤_t (ϕ^w,ψ^w) (ϕ)^w =(ϕ^w)^-1 (Aϕ)^w = if =0 _≤_t{ϕ^w'| w'∈ A^w} if ='+1 _≤_p{(Aφ)^(w)_'|'<} if is a limit ordinal (M_Aϕ)^w = if =0 _≤_t{(ϕ^w')^-1| w'∈A̅^w} if ='+1 _≤_p{(M_Aφ)^(w)_'|'<} if is a limit ordinal (E_G ϕ)^w = _≤_t{(Aϕ)^w| A∈ G} (C_G ϕ)^w = _≤_t{(E_G^kϕ)^w| k≥ 1} where we define E_G^k inductively as E_G^0=, E_G^k+1=E_G(E_G^k). We say a w satisfies a formula ϕ (notation: wϕ) if ϕ^w=. A w satisfies, or is a model of, a theory if it satisfies all formulas in that theory. We say a w resolves a formula ϕ if ϕ^w≠. My problem is: to evaluate this formula, we only look at the objective interpretations that are nested at most two levels deep... Where: in order to obtain those levels we sometimes need to make a step to smaller succesor ordinals. But in essence, we never look deeper. Seems so strange that it would work... I'll try to work out a different example, a bit simpler. Namely, I'll try to show that the following formula is satisfiable: (M_A) AAp I only have trivial konwledge (first conjunct) and yet I know that I know p (non-trivial knowledge). Intuitively, this should be unsatisfiable. If I can construct a All the 0-biworlds are p:= {p} and p̅ := ∅. All 1-wolds are p_1 = (p, ∅, {p,p̅}), p_2 = (p, {p}, {p,p̅}), p_3 = (p, {p}, {p̅}), p_4 = (p, {p̅}, {p,p̅}), p_5 = (p, {p̅}, {p}), p_6 = (p, {p,p̅}, {p,p̅}), p_7 = (p, {p,p̅}, {p̅}), p_8 = (p, {p,p̅}, {p}), p_9 = (p, {p,p̅}, ∅), p̅_1 = (p̅, ∅, {p,p̅}), p̅_2 = (p̅, {p}, {p,p̅}), p̅_3 = (p̅, {p}, {p̅}), p̅_4 = (p̅, {p̅}, {p,p̅}), p̅_5 = (p̅, {p̅}, {p}), p̅_6 = (p̅, {p,p̅}, {p,p̅}), p̅_7 = (p̅, {p,p̅}, {p̅}), p̅_8 = (p̅, {p,p̅}, {p}), p̅_9 = (p̅, {p,p̅}, ∅), When is M_A true in a 2-biworld w? Observe that in any world w' ^w'= and hence (^w')^-1=. Hence for any world w, if A̅^w ∅, then M_A =. This is suspicious! The following proposition asserts that the three-valued valuation is -monotonic: For every pair w, w' of biworlds such that w≤_p w' and every formula ∈, we have ^w^w'. We prove this proposition by induction over ϕ. We distinguish the following cases: * ϕ=P. Since '='0=d()0=0=, we have ϕ^=ϕ^=ϕ^'=ϕ^'. * ϕ=ψ_1 ψ_2. Then ϕ^w = _≤_t(ψ_1^,ψ_2^) _≤_t(ψ_1^',ψ_2^') = ϕ^', where the central inequality comes from the induction hypothesis and _≤_t being -monotone. * ϕ=ψ. Then ϕ^w = (ψ^w)^-1 (ψ^')^-1 = ϕ^', where the central inequality comes from the induction hypothesis and (·)^-1 being -monotone. * ϕ=Aψ. We prove this case by an induction on d('): * d(')=d(). Then ='. * d(') is a successor ordinal. Here we distinguish three cases: * d() = 0. Then ϕ^ = ϕ^'. * d() is a successor ordinal. By the definition of restriction, every biworld in has an extension in ' and the restriction of every biworld in ' is in . Hence, by the induction hypothesis on ϕ, and _≤_t being -monotone, we have ϕ^=_≤_t{ψ^|∈}_≤_t{ψ^|∈'}=ϕ^'. * d() is a limit ordinal. Then by the limit ordinal case of the definition of (Aϕ)^w, there is a successor ordinal μ' < d() such that (Aψ)^w|_μ' = (Aψ)^w. Then by case <ref>, ϕ^ϕ^'. * d(') is a limit ordinal. Then by the limit ordinal case of the definition of (Aϕ)^w, there is an ordinal μ' < d(') such that (Aψ)^w'|_μ' = (Aψ)^w'. By the induction hypothesis, ϕ^ (Aψ)^w'|_μ' = ϕ^w'. * ϕ=M_A ψ. The proof is analogous to the one for the case ϕ=Aψ. * ϕ=E_G ψ. 
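For concreteness, the successor-ordinal clauses of this valuation can be sketched for finite depths as follows (a minimal Python sketch; the encoding of biworlds as nested tuples and frozensets and all names are our own illustrative choices, and the limit-ordinal clauses, E_G, and C_G are omitted).

# The three truth values with the truth order F <= U <= T; glb_t is the minimum,
# and inv is the inverse (.)^-1 that swaps T and F and fixes U.
F, U, T = 0, 1, 2
def inv(v):
    return {T: F, F: T, U: U}[v]
def glb_t(vs):
    vs = list(vs)
    return min(vs) if vs else T      # the empty glb is the top element T

# A depth-0 biworld is a frozenset of atoms; a depth-(n+1) biworld is a triple
# (objective world, {agent: depth-n biworlds deemed possible},
#                   {agent: depth-n biworlds deemed impossible}).
def ev(phi, w, depth):
    op = phi[0]
    obj = w if depth == 0 else w[0]
    if op == "atom":
        return T if phi[1] in obj else F
    if op == "not":
        return inv(ev(phi[1], w, depth))
    if op == "and":
        return glb_t([ev(phi[1], w, depth), ev(phi[2], w, depth)])
    if op in ("K", "M"):
        if depth == 0:
            return U                 # modal formulas are unknown at depth 0
        a, psi = phi[1], phi[2]
        if op == "K":                # glb of psi over the biworlds deemed possible
            return glb_t(ev(psi, v, depth - 1) for v in w[1][a])
        return glb_t(inv(ev(psi, v, depth - 1)) for v in w[2][a])   # clause for M_A
    raise ValueError(op)

# Over the single atom p there are two depth-0 biworlds; in the completed depth-1
# biworld below, agent A deems only the p-world possible, so both the knowledge of p
# and knowing at most p evaluate to true there.
p_true, p_false = frozenset({"p"}), frozenset()
w1 = (p_true, {"A": {p_true}}, {"A": {p_false}})
assert ev(("K", "A", ("atom", "p")), w1, 1) == T
assert ev(("M", "A", ("atom", "p")), w1, 1) == T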
Then ϕ^w = (E_G ψ)^w = _≤_t{(Aψ)^w| A∈ G} _≤_t{(Aψ)^'| A∈ G} = (E_G ψ)^' = ϕ^', where the central inequality comes from the induction hypothesis and _≤_t being -monotone. * ϕ=C_G ψ. Then ϕ^ = (C_G ψ)^w = _≤_t{(E_G^kψ)^w| k∈ℕ} _≤_t{(E_G^kψ)^'| k∈ℕ} = (C_G ψ)^' = ϕ^', where the central inequality comes from the induction hypothesis and _≤_t being -monotone. The notion of the modal depth of a formula allows us to specify conditions for a formula to be resolved by a biworld: The modal depth (ϕ) of a formula ϕ∈ is defined by inductively as follows: * (P)=0 for every propositional atom P * (ϕ) = (ϕ) * (ϕψ) = max((ϕ),(ψ)) * (Aϕ)= (ϕ)+1 * (M_Aϕ)=(ϕ)+1 * (E_Gϕ)=(ϕ)+1 * (C_Gϕ)=(ϕ)+ω, which is the smallest limit ordinal greater than (ϕ). Note that the modal depth of any formula in is less than ω^2. Every formula of a given modal depth is resolved at any biworld of at least this depth: If w is a and ϕ∈ is a formula such that (ϕ) ≤, then w resolves ϕ. We prove this proposition by a simultaneous induction over μ and the complexity of ϕ. The induction hypothesis is that if either α < μ and ψ is any formula or or α = μ and ψ is a subformula of ϕ, then any α-biworld resolves ψ. For the purpose of this induction (and some similar induction proofs later), we consider Aψ to be a subformula of E_G ψ whenever A ∈ G, and we consider E_G^k ψ to be a subformula of C_G ψ for any k ≥ 1. It is enough to show that this induction hypothesis together with the assumptions that w is a and (ϕ) ≤ implies that w resolves ϕ. For this, we distinguish the following cases: * ϕ=P. Clearly, ϕ^≠. * ϕ=ψ_1 ψ_2. By <ref>, we have (ψ_1),(ψ_2)≤. Hence, by induction hypothesis, (ψ_1 ψ_2)^= _≤_t(ψ_1^, ψ_2^)≠. * ϕ=ψ. By <ref>, we have (ψ)=(ϕ)≤. By induction hypothesis, ψ^≠, so (ψ)^=(ψ^)^-1≠. * ϕ=Aψ. Then (ϕ) ≠ 0, so μ≠ 0. We distinguish two cases: * μ is a successor ordinal. By the induction hypothesis, any world in A^w resolves ψ. This together with the successor ordinal case of the definition of (Aϕ)^w implies that (Aψ)^≠, as required. * μ is a limit ordinal. Since (ϕ) is a successor ordinal, (ϕ) < μ, so by induction hypothesis, w|_(ϕ) resolves ϕ. Then by Proposition <ref>, w resolves ϕ, as required. * ϕ=M_A ψ. The proof is analogous to the one for the case ϕ=Aψ. * ϕ=E_G ψ. By the induction hypothesis, (Aψ)^≠ for any A ∈ G. Hence, ϕ^≠, as desired. * ϕ=C_G ψ. By <ref>, we have (E_G^kψ)=(ψ)+k≤(ψ)+ω=(ϕ)≤. By induction hypothesis, (E_G^kψ)^≠ for all k∈. Hence, ϕ^≠, as desired. As explained at the beginning of this section, by <ref> of Theorem <ref>, a μ+1 precisely captures the knowledge of every agent A iff w is completed. Hence, Theorem <ref> combined with the fact that the modal depth of any formula in is less than ω^2 motivates focusing on completed ω^2+1, as ω^2+1 is the first successor ordinal greater than the modal depth of all formulas. A world is a completed ω^2+1. The ω^2+1-completed Kripke structure := (U,(R_A)_A∈) is the Kripke structure whose underlying world set U is the set of all worlds, and whose accessibility relations R_A are given by R_A = { (w,w') ∈ U^2 | w'|_ω^2∈ A^w }. Instead of (w,w') ∈ R_A, we sometimes write w R_A w'. Interpreting formulas in this canonical Kripke structure in the standard way (with the above specified semantics for M_A ϕ) amounts to a two-valued valuation of . Given a world w, we write R_A^w for the set of all worlds R_A-accessible from w. The following proposition is important for seeing that the above definition of the accessibility relation is indeed sensible: Let ϕ∈, w∈ U^ω^2, and ∈. 
The formula ϕ is true in every world in if and only if it is true in every biworld in . I found it important to put the proposition here, but for this it was necessary to separate the proposition from its proof in the technical report. I thought hard about this decision and think that it's the best we can do here. Before we can prove this proposition, we first need to introduce some notation and prove a lemma. Given a world w, we write R_A^w for the set of all worlds accessible from w. In other words, R_A^w = ∩ U^ω^2 Note that by post-completedness of it follows that R_A^w ={v | v∈ U^ω^2 and ∃ < ω^2 s.t. (v)_∉A̅^(w)_+1}. We will often make use of this alternative characterization of R_A^w. In order to establish the relationship beteween the two-valued and the three-valued valuation, we first need to prove the following lemma that for w' >_p w establishes links between R_A^w' on the one hand and A^w and A̅^w on the other hand: Suppose is a successor ordinal less than ω^2, w is a , and w' is a world such that w' ≥_p w. Then [(a)] * for every -1-biworld v ∈ A^w, there is a world v' ∈ R_A^w' such that v' ≥_p v; * for every world v' ∈ R_A^w', there is a -1-biworld v ∈ A^w such that v' ≥_p v; * for every -1-biworld v ∈A̅^w, there is a world v' ∉ R_A^w' such that v' ≥_p v; * for every world v' ∉ R_A^w', there is a -1-biworld v ∈A̅^w such that v' ≥_p v. [(a)] * Suppose v ∈ A^w. We distinguish two cases: [1.] * ∈A̅^w. In this case, v ∈ A^w ∩A̅^w, so by <ref> of <ref>, w is not completed. Since w' is post-completed, there is a ' < ω^2 such that (w')_' is completed. By <ref> of <ref>, we can choose ' to be a successor ordinal. Then A^(w')_'∩A̅^(w')_' = ∅. Since (w')_ = w is not completed, <ref> of <ref> implies that ' >. So (w')_'≥_p w, so by the definition of the precision relation, A^(w')_' = A^w. Since v ∈ A^w, there exists a v^* v such that v^* ∈ A^(w')_'. Since A^(w')_'∩A̅^(w')_' = ∅, we get that v^* ∉A̅^(w')_'. By Corollary <ref>, there is a world v' v^*. Then v' v and (v')_' = v^* ∉A̅^(w')_'. So by the alternative characterization of R_A^w', it follows that v' ∈ R_A^w', as required. * v ∉A̅^w. By Corollary <ref>, there is a world v' such that v' ≥_p v. Since v = (v')_-1, it follows by the alternative characterization of R_A^w' that v' ∈ R_A^w'. * Let v' ∈ R_A^w'. Then, there is some ' < ω^2 such that (v')_'∉A̅^(w')_'+1. Define v := (v')_-1. Clearly v' v. So it is now enough to prove that v ∈ A^w. For this we distinguish two cases: [1.] * -1 ≤'. By the definition of biworlds, (v')_'∉A̅^(w')_'+1 implies that (v')_'∈ A^(w')_'+1. By definition of restriction, this implies that v = (v')_-1∈ A^(w')_ = A^w, as required. * -1 > '. By definition of restriction, (v')_'∉A̅^(w')_'+1 implies that v = (v')_-1∉A̅^(w')_ = A̅^w. By the definition of biworlds, this implies that v ∈ A^w, as required. * Suppose v ∈A̅^w. We distinguish two cases: [1.] * v ∈ A^w. In this case, v ∈ A^w ∩A̅^w, so by <ref> of <ref>, v is not completed and w is not completed. Since w' is post-completed, there is a ' < ω^2 such that (w')_' is completed. By <ref> of <ref>, we can choose ' to be a successor ordinal. Then A^(w')_'∩A̅^(w')_' = ∅. Since (w')_ = w is not completed, <ref> of <ref> implies that ' >. So (w')_'≥_p w, so by the definition of the precision relation, A̅^(w')_' = A̅^w. Since v ∈A̅^w, there exists a v^* v such that v^* ∈A̅^(w')_'. Since A^(w')_'∩A̅^(w')_' = ∅, we get that v^* ∉ A^(w')_'. By Corollary <ref>, there is a world v' v^*. Then v' v and (v')_' = v^* ∉ A^(w')_'. 
So by the definition of R_A^w', it follows that v' ∉ R_A^w', as required. * v ∉ A^w. By Corollary <ref>, there is a world v' such that v' ≥_p v. Since v = (v')_-1, it follows by the definition of R_A^w' that v' ∉ R_A^w'. * Let v' ∉ R_A^w' be post-completed. Then, there is some ' < ω^2 such that (v')_'∉ A^(w')_'+1. Define v := (v')_-1. Clearly v' v. So it is now enough to prove that v ∈A̅^w. For this we distinguish two cases: [1.] * -1 ≤'. By the definition of biworlds, (v')_'∉ A^(w')_'+1 implies that (v')_'∈A̅^(w')_'+1. By definition of restriction, this implies that v = (v')_-1∈A̅^(w')_ = A̅^w, as required. * -1 >. By definition of restriction, (v')_'∉ A^(w')_'+1 implies that v = (v')_-1∉ A^(w')_ = A̅^w. By the definition of biworlds, this implies that v ∈A̅^w, as required. Now we are ready to prove Proposition <ref>: Suppose ϕ^'= for all worlds '∈∩ U^ω^2, and let ∈. By Theorem <ref>, (ϕ)+1 resolves ϕ. Moreover, by <ref>, (ϕ)+1∈_(ϕ)+2. By <ref> of <ref>, there exists a world '∈∩ U^ω^2 such that '(ϕ)+1. By Proposition <ref> and our assumption on worlds in ∩ U^ω^2, we have ϕ^(ϕ)+1ϕ^'=. Since (ϕ)+1 resolves ϕ, we have ϕ^(ϕ)+1=. By Proposition <ref>, ϕ^(ϕ)+1ϕ^. Hence, we have ϕ^= as desired. The other direction is trivial. The following lemma is of central importance, as it establishes a relationship between the two-valued and the three-valued valuation and can be used to prove multiple important theorems. If ϕ∈, w is an (ϕ)-biworld and w' ≥_p w is a world, then ϕ^w = ϕ^,w'. By Proposition <ref>, ϕ^w = ϕ^w'. So all that is left to be shown is that ϕ^w' = ϕ^,w'. We prove this proposition by induction over the complexity of ϕ. So suppose as induction hypothesis that the lemma holds for all subformulas ψ of ϕ. In order to prove that the lemma holds for ϕ, we distinguish the following cases: * ϕ is P. Then ϕ^w' = ϕ^w' = ϕ^,w'. * ϕ is ψ_1 ψ_2. Then ϕ^w' = (ψ_1 ψ_2)^w' = _≤_t(ψ_1^w',ψ_2^w') = _≤_t(ψ_1^,w',ψ_2^,w') by induction hypothesis = (ψ_1 ψ_2)^,w' = ϕ^,w'. * ϕ is ψ. Then ϕ^w' = (ψ)^w' = (ψ^w')^-1 = (ψ^,w')^-1 = (ψ)^,w' = ϕ^,w'. * ϕ is Aψ. Then (ϕ) is a successor ordinal, so ϕ^w' = (Aψ)^w' = _≤_t{ψ^w^*| w^* ∈ A^w'}. By <ref> of Theorem <ref>, every ω^2-biworld w^* can be extended to a completed ω^2+1-biworld w^+, and by Proposition <ref>, ψ^w^* = ψ^w^+. Thus ϕ^w' = _≤_t{ψ^w^+| w^+ ∈ R_A^w'}, which by the induction hypothesis equals (Aψ)^,w', as required. * The proof is analogous to the one for the case ϕ = Aψ. * ϕ is E_G ψ. Then ϕ^w = (E_G ψ)^w = _≤_t{(Aψ)^w| A∈ G} = _≤_t{(Aψ)^,w'| A∈ G} by induction hpothesis = (E_G ψ)^,w' = ϕ^,w'. * ϕ is C_G ψ. Then ϕ^w = (C_G ψ)^w = _≤_t{(E_G^kψ)^w| k∈ℕ} = _≤_t{(E_G^kψ)^,w'| k∈ℕ} by induction hpothesis = (C_G ψ)^,w' = ϕ^,w'. Combining Lemma <ref> with Proposition <ref>, we directly get the following theorem, which The following theorem tells us that the two-valued and three-valued valuations fully coincide on worlds, and thus expresses the relationship between the two-valued and the three-valued valuation in a simpler way than Lemma <ref> does: If ϕ∈ and w∈ U, then ϕ^w = ϕ^,w. Finally, Lemma <ref> can also be used to prove the following theorem, which The next theorem is of central importance to show that our semantics generally avoids a problem that some previous accounts of only knowing and common knowledge had, namely the problem that A C_G p is not satisfiable in those accounts, even though it should be (see Section <ref> for a discussion of this problem in other accounts). 
The following theorem shows that no such problems can arise in the ω^2+1-completed Kripke structure : Let be an agent. For every formula ϕ∈, there is a world w such that (Aϕ)^w = (Aϕ)^,w =. Moreover, if _1 and _2 are two such worlds, then ^_1=^_2 and ^_1=^_2. By Theorem <ref>, (Aϕ)^w = (Aϕ)^,w for any world w. Hence it is enough to show that there is a world w such that (Aϕ)^,w =. Let v be an arbitrary 0-biworld. Define W := { u|_ω^2| u ∈ U and ϕ^,u = }, W := ^ω^2∖ W, v' := (v,(W)_A ∈,(W)_A ∈). Clearly v' is a world. It follows directly from the definition of v' that (Aϕ)^,v' = and that (M_A ϕ)^,v' =, that (Aϕ)^,v' =. For the second statement, let _1,_2 be worlds such that (Aϕ)^_1=(Aϕ)^2=. By the definition of the only knowing operator, we have (Aϕ)^_i=(M_Aϕ)^_i= for i∈{1,2}. This means that ϕ^= for all ∈^_1∪^_2, and ϕ^= for all ∈^_1∪^_2. Since ^u∪^u=^ω^2 for any world u, we must have ^_1 =^_2={v∈^ω^2|ϕ^v=} ^_1 =^_2={v∈^ω^2|ϕ^v=}. We will now show that the two-valued valuation of (and thus by Theorem <ref> also the three-valued valuation, when restricted to suitable biworlds) gives rise to a sensible entailment relation between formulas of . We define this relation as follows: Let ϕ∈ be a formula, and Γ⊆ be a set of formulas. We write Γϕ if ϕ^, = for every world such that ψ^,= for all ψ∈Γ. Note that this is definition of the entailment relation does not coincide with the standard way of defining the entailment with respect to a Kripke semantics, as in our case we fix a canonical Kripke structure rather than quantifying over all Kripke structures. The fact that this entailment relation behaves in a sensible way is captured by the properties listed in the following theorem: Let ϕ, ψ∈ two formulas, Γ, Γ'⊂ two sets of formulas, ∈ an agent, and G⊆ a non-empty set of agents. Then, the following properties hold: * (Prop) For each propositional tautology ϕ, we have ϕ. * (MP) ϕ, ϕ⇒ψψ. * (Mono) If Γϕ, then Γ, ψϕ. * (Cut) If Γϕ and Γ', ϕψ, then Γ, Γ' ψ. * (K) (A(ϕ⇒ψ)Aϕ)⇒Aψ. * (Nec) If ϕ, then Aϕ. * (M) If ϕψ, then M_AϕAψ. * (O) Aϕ. * (Fixed point axiom) C_G ϕ E_G(ϕ∧ C_Gϕ). * (Induction rule) If ϕ E_G(ϕ∧ψ), then ϕ C_Gψ. * Let ϕ be any propositional tautology with atoms P_1, …, P_k, and ξ_1, …, ξ_k be formulas in . For every i∈[1,k], we can substitute P_i with ξ_i in ϕ to obtain ϕ_ξ. Let w be a world. Since for every formula ψ∈ the evaluation ψ^, w cannot be undefined, for every i∈[1,k], we can assign to P_i the truth value given by ξ_i^,w. Since ϕ is a propositional tautology, it must be true with the above truth assignments. Hence, ϕ_ξ^,w= by the definition of the evaluation in the Kripke structure. * Let ∈ U^ω^2 such that ϕ^,= and (ϕ⇒ψ)^,=. Notice that (ϕ⇒ψ)^,=((ϕψ)^,)^-1. Hence, _≤_t(ϕ^,, (ψ)^,)=. Since ϕ^,=, we have (ψ^,)^-1=(ψ)^,=, ψ^, =, as desired. * Clear by the definition of logical consequence relation. * Let ∈ U such that ξ^,= for all ξ∈Γ∪Γ'. Since Γϕ, we have ϕ^,=. Then, ψ^,= because Γ', ϕψ. * Let ∈ U. We want to show that ((A(ϕ⇒ψ)Aϕ)⇒Aψ)^,=. Notice that V: =((A(ϕ⇒ψ)Aϕ)⇒Aψ)^, = (((A((ϕψ))Aϕ)Aψ))^, =(_≤_t( S, ((Aψ)^,)^-1))^-1, where S:=_≤_t((A((ϕψ)))^,, (Aϕ)^,). If (Aψ)^,=, then V=, as desired. Now, we assume (Aψ)^,=. Observe that V reduces to just V=(_≤_t((A((ϕψ)))^,, (Aϕ)^,))^-1. If (Aϕ)^,=, then again V=, as desired. So, we suppose (Aϕ)^,=, ϕ^,= for all ∈ R_A^. Hence, we have that (A((ϕ ψ)))^,= {((ϕψ))^,|∈ R_A^} = {(_≤_t(ϕ^,, (ψ^,)^-1))^-1|∈ R_A^} = {((ψ^,)^-1)^-1|∈ R_A^} ={ψ^,|∈ R_A^}=(Aψ)^,=, which implies V=, as desired. * Let ∈ U. By definition, (Aϕ)^,=glb_≤_t{ϕ^,w'|w'∈ R_A^}. 
Since ϕ^,'= for any '∈ U by hypothesis, we have (Aϕ)^,=, as desired. * Since ϕψ, there exists ^∗∈ U such that ϕ^,^∗= and ψ^, ^∗=. Let ∈ U be such that (M_Aϕ)^,=. Then, R_A^w ⊇{w ∈ U |ϕ^, = }, and in particular ^∗∈ R_A^. Hence, (Aψ)^,=_≤_t{ψ^,'|'∈ R_A^}=. * By Theorem <ref>, there exists a world such that (Aϕ)^,=. Since ^,=, we have Aϕ. * Let ∈ U. By Definition <ref>, we have the following double implications: (E_G( ϕ∧ C_Gϕ))^,= ∀∈ G, ( A(ϕ∧ C_Gϕ ))^, = ∀∈ G, ∀'∈, (ϕ∧ C_Gϕ)^,'= ∀ A∈ G, ∀'∈, ϕ^,'= and (C_Gϕ)^,'= ∀ A∈ G, ∀'∈, ∀ k≥ 1, (Aϕ)^,'= and (E_G^kϕ)^,'= ∀ A∈ G, ∀ k≥ 2, (E_Gϕ)^,= and (A(E_G^k-1ϕ))^,= ∀ k≥ 2, (E_Gϕ)^,=, and (E_G^k-1ϕ)^,= (C_Gϕ)^,=. * Let ∈ U such that ϕ^,=. We say that a world ∈ U^ω is 1-reachable from if ∈ R^_A for some A∈ G. For k>1, we say that a world ∈ U^ω is k+1-reachable from if is 1-reachable from a world ' that is k-reachable from . We show by induction on k that for all k∈ℕ_≥ 1, for all k-reachable worlds from , (ψφ)^,=. In particular, this proves that E^k_G(ψ)^,= for all k>0, (C_Gψ)^,=. * k=1. Let be a world 1-reachable from . By hypothesis, ϕ^,= and φ E_G(φ∧ψ), thus (E_G(φ∧ψ))^,=. By definition of E_G and of 1-reachability, we must have (φ∧ψ)^,=, as desired. * k=k'+1. Let be a world k-reachable from . By definition of k'-reachability, there exists a world ' k'-reachable from such that is 1-reachable from '. By induction hypothesis, (φ∧ψ)^,'=, and φ^,'= in particular. Thus, by hypothesis, we get (E_G(φ∧ψ))^,'=. By the definition of the valuation of the operator E_G, it follows that (φ∧ψ)^,=, as desired. Properties <ref>, <ref>, <ref>, <ref>, and <ref> of <ref> ensure that the semantics properly captures the intended meaning of the only-knowing operator. In more detail, properties <ref>, <ref>, <ref>, and <ref> imply that for every ϕ and ψ, either AϕAψ or Aϕ¬Aψ. More specifically, the former holds when ψ is entailbed by ϕ, the latter otherwise. This means that Aφ completely determines the agent's knowledge. Property <ref> states that for any formula ϕ∈, there is a world in which Aϕ holds, i.e. it is possible that an agent knows ϕ and knows nothing beyond ϕ. The following example discusses the construction of a world w that shows that A C_G p is satisfiable (in line with item 8 of Theorem <ref>), something that previous attempts at combining only knowing and common knowledge failed at, even though it should intuitively be the case. Consider the setting of <ref>. We want to construct a world that satisfies A C_G p, where G={A}. By the definition of the only knowing operator, we must have ^= {'∈^ω^2| (C_Gp)^'=} ^= {'∈^ω^2| (C_Gp)^'=}. We first have to find the above sets. Notice that, by <ref>, we have ∪=^ω^2, as supposed. By <ref>, <ref>, and the fact that any ω^2 is the extension of some ω, we can reduce to finding the set of ω satisfying C_Gp. It is not hard to see that v from <ref> satisfies C_Gp. Moreover, since v is completed, by Corollary <ref>, v has a unique extension v' to depth ω^2, and v'. In an analogous fashion, we can build another ω^2 u' satisfying C_Gp by considering the unique extension of a completed ω u:=(u_α)_α<ω defined as u_α:={∅} if α=0 (u_0, {v_α'}, ^α') if α=α'+1 Intuitively, both v' and u' are worlds in which p is common knowledge (for the only agent ), but p is true in the objective world of v' and false in the one of u'. In particular, in u' the agent is not truthful. 
We claim[The claim can be proven by induction, and it can already be seen to hold true by writing down all the eighteen 1, and reasoning on the conditions that make C_Gp satisfied. We omit this reflection for the sake of conciseness.] v and u are the only ω satisfying C_Gp. Hence, A^w must be ^ω^2∖{v',u'} and A̅^w must be {v', u'}, and we define w:=({p},^ω^2∖{v',u'},{v', u'}). Notice that by <ref>, w is unique up to change of objective world. In other words, the world defined as w but with {∅} as objective world w_0 satisfies A C_Gp too. § TRUTHFULNESS AND INTROSPECTION In this section we present a modification of the semantics of that is based on the presupposition that agents are truthful and (both positively and negatively) introspective. Treating this as a presupposition of the logic amounts to treating it as common knowledge common knowledge that all agents satisfy these properties. In a similar way one could also incorporate only some of these properties into the logic, e.g. only positive and negative introspection for a formalization of belief rather than knowledge. In this section we focus on the two-valued valuation and therefore work only with the elements of the Kripke structure , the worlds. We start with a definition of truthfulness as well as positive and negative introspection: Let w be a world. * w is called truthful if for all ∈, ∈; * w is called positively introspective if for each ∈, for each '∈, '⊆; * w is called negatively introspective if for each ∈, for each '∈, ⊆'. The following theorem establishes that the definitions of these three properties imply the expected characterization of these properties in terms of formulas: Let be a world, ∈ an agent, and ϕ∈ a formula. Then the following statements hold: * If is truthful, then satisfies Aϕ⇒ϕ. * If is positively introspective, then satisfies Aϕ⇒AAϕ. * If is negatively introspective, then satisfies Aϕ⇒AAϕ. * Suppose is truthful. If (Aϕ)^,w=, then ϕ^,'= for all '∈. Since w is truthful, ∈. Hence, ϕ^,=. * Suppose is positively introspective. If (Aϕ)^,=, then for each '∈, ϕ^,'. Let ' ∈ and ”∈'. By positively introspection of , ”∈. Hence, ϕ^,”. Since ' and ” are chosen arbitrarily, we get (AAϕ)^, =. * Suppose is negatively introspective. If (Aϕ)^,=, then there exists '∈, such that ϕ^,'=. By negative introspection of , for each ”∈, '∈⊆”. Hence, (AAϕ)^,, as desired. The properties listed in <ref> hold true in the Kripke structure , whereas the truthfulness, positive introspection, and negative introspection of every agent are not always true for a world of . We would like to restrict the Kripke structure to a substructure _𝑇𝐼 which contains only worlds in which all agents are truthfull, positively introspective, and negatively introspective. A world is called truthful-introspective if it is truthful, positively introspective, and negatively introspective. A world w is called recursively truthful-introspective if w is truthful-introspective and all the worlds that are reachable from w through the union ⋃_A∈ R_A of all accessibility relations are truthful-introspective. We now define :=(U^𝑇𝐼, (R^𝑇𝐼_A)_A∈) to be the Kripke substructure of , where the underlying world set U^𝑇𝐼 is the set of all recursively truthful-introspective worlds, and the accessibility relations are just the ones coming from _𝑇𝐼 being a substructure, for all ∈, R^𝑇𝐼_A:=R_A∩ (U^𝑇𝐼× U^𝑇𝐼). 
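On a finite Kripke frame these three conditions boil down to familiar frame properties, as the later proposition on equivalence relations also reflects: truthfulness corresponds to reflexivity of the accessibility relation, positive introspection to transitivity, and negative introspection to the euclidean condition (which, together with truthfulness, yields symmetry). The following minimal sketch (illustrative Python over a toy frame, not the structure _𝑇𝐼 itself) checks these properties directly.

# Frame-level checks on a finite accessibility relation R (a set of pairs).
def truthful(R, worlds):                      # reflexivity: every world sees itself
    return all((w, w) in R for w in worlds)

def positively_introspective(R):              # transitivity
    return all((w, v) in R
               for (w, u) in R for (u2, v) in R if u2 == u)

def negatively_introspective(R):              # euclidean condition
    return all((u, v) in R
               for (w, u) in R for (w2, v) in R if w2 == w)

def is_equivalence(R, worlds):
    symmetric = all((v, w) in R for (w, v) in R)
    return truthful(R, worlds) and positively_introspective(R) and symmetric

# A two-world frame in which agent A cannot distinguish w0 from w1:
ws = {"w0", "w1"}
R_A = {(w, v) for w in ws for v in ws}
assert truthful(R_A, ws) and positively_introspective(R_A) and negatively_introspective(R_A)
assert is_equivalence(R_A, ws)   # truthfulness plus introspection give an equivalence relation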
Now we can define a modified entailment relation that takes into account truthfulness and introspection: For ϕ∈ and Γ⊆, we write Γϕ if ϕ^, = for every world such that ψ^,= for all ψ∈Γ. Note that incorporating truthfulness and introspection into the logic in this way modifies the meaning of the modalities M_A and A, because knowing at most ϕ means that any ψ that one knows must be entailed by ϕ and the entailment relation is different now that truthfulness and introspection are hard-coded into the logic: ψ may be entailed by ϕ together with truthfulness and introspection even though it was not entailed by ϕ itself in the original semantics of without truthfulness and introspection. In other words, when ϕψ but ϕψ, then M_A ϕAϕ even though M_A ϕAϕ. If space is an issue, we could remove the following two propositions and the paragraphs preceding them from the paper. The following proposition ensures that for a world w in , the worlds that are R^𝑇𝐼_A-accessible from w are precisely the same as the worlds that are R_A-accessible from w: Let ∈ U, ∈, and '∈∩ U. If is recursively truthful-introspective, then so is '. Since ' is reachable by in , every world reachable by ' is also reachable by . Hence, every world reachable by ' in is truthful-introspective, as desired. As in standard Kripke semantics for epistemic logic, truthfulness together with positive and negative introspection means that the accessibility relations are equivalence relations: The accessibility relations of _𝑇𝐼 are equivalence relations. Let , u, v∈ U^𝑇𝐼, and ∈. * Reflexivity: by truthfulness of , w∈, (, )∈ R^𝑇𝐼_. * Simmetry: suppose (,)∈ R^𝑇𝐼_. By negative introspection and truthfulness of , ∈⊆v. Hence, (v,)∈ R^𝑇𝐼_. * Transitivity: suppose (u, v), (v,w)∈ R^𝑇𝐼_. This means v∈u and w∈v. By positive introspection of u, v⊆u. Hence, w∈u, (u,w)∈ R^𝑇𝐼_, as desired. The following theorem establishes that the properties that we established for in Theorem <ref> also hold for : I'm pretty sure that (O) also holds for , so it should be included for completeness. However, as far as I can see, proving (O) for requires defining TI-biworlds (in which truthfulness and introspection are built in from the start) and reproving a lot of the results about biworlds, the three-valued valuation and its relation to the two-valued valuation for this semantics with truthfulness and introspection built in from the start. Do you see any possibilities for shortcuts? If not, how should we proceed about this issue? Let ϕ, ψ∈ be two formulas, Γ, Γ'⊂ two sets of formulas, ∈ an agent, and G⊆ a non-empty set of agents. Then, the following properties hold: * (Prop) For each propositional tautology ϕ, we have ϕ. * (MP) ϕ, ϕ⇒ψψ. * (Mono) If Γϕ, then Γ, ψϕ. * (Cut) If Γϕ and Γ', ϕψ, then Γ, Γ' ψ. * (K) (A(ϕ⇒ψ)Aϕ)⇒Aψ. * (Nec) If ϕ, then Aϕ. * (M) If ϕψ, then M_AϕAψ. * (Fixed point axiom) C_G ϕ E_G(ϕ∧ C_Gϕ). * (Induction rule) If ϕ E_G(ϕ∧ψ), then ϕ C_Gψ. * Clear by <ref> of <ref> and the fact that is a substructure of . * Analogous to the proof of <ref> of <ref>. * Clear by the definition of logical consequence relation. * Analogous to the proof of <ref> of <ref>. * Clear by <ref> of <ref> and the fact that _𝑇𝐼 is a substructure of . * Let ∈ U^𝑇𝐼. By definition of , (Aϕ)^_𝑇𝐼,=(Aϕ)^,=glb_≤_t{ϕ^,w'|w'∈ R_A^}. Since ϕ^_𝑇𝐼,'= for any '∈ U^𝑇𝐼 by hypothesis, and R_A^⊆ U^𝑇𝐼, we have (Aϕ)^_𝑇𝐼,=, as desired. * Analogous to the proof of <ref> of <ref>. * Clear by <ref> of <ref> and the fact that _𝑇𝐼 is a substructure of . 
* By <ref>, the proof is analogous to the one of <ref> in <ref>. § TRUTHFULNESS AND INTROSPECTION In the previous three sections we have defined and described the construction of a structure of worlds that is rich enough to allow to formally define the semantics of only knowing and common knowledge in a way that matches basic intuitions about these logical modalities in a precisely specified way. A major challenge in designing this construction was to ensure that we have enough worlds to describe all logically possible epistemic states. For this reason, we decided to keep the construction as general as possible, not to unnecessarily limit the set of worlds. However, there are certain properties of the knowledge modality that are often taken for granted in epistemic logic and that require limiting the set of worlds. In particular, the following properties are often assumed to hold: * Truthfulness: Aϕ⇒ϕ is satisfied in every world for every agent A. * Positive introspection: Aϕ⇒AAϕ is satisfied in every world for every agent A. * Negative introspection: Aϕ⇒AAϕ is satisfied in every world for every agent A. While all three of these properties are commonly assumed in epistemic logic, there are specific issues about ensuring the first or the third in a logic with a modality Aϕ for only knowing or a modality M_A ϕ for knowing at most. In the case of truthfulness, there is an issue concerning the formula A (A p A q). Given the definition of the modality A, this entails A (A p A q), which by truthfulness entails A p A q, so either A p or A q has to be true. But since A only knows A p A q and since A p A q entails neither p nor q, neither p nor q can be known, a contradiction. Thus A (A p A q) cannot be satisfiable in a logic with truthfulness, which means that principle (O) from Theorem <ref> cannot hold in such a logic (assuming principles (Prop), (Cut) and (M) do hold). In the case of negative introspection, a more severe problem arises. Suppose M_A q is true in some world w. Since q does not entail p, by (M) this should entail that A p is true in w, so by negative introspection, AA p is true in w. But since q does not entail A p, principle (M) also allows us to conclude that AA p is true in w, a contradiction. Thus M_A q is not satisfiable, and similarly no formula of the form M_A ϕ or Aϕ is satisfiable for any satisfiable ϕ. It should be stressed that this problem is not caused by our semantic approach to only knowing, but is a direct consequence of basic properties that only knowing has been assumed to satisfy also in other papers. A reader familiar with the literature on only knowing, of which we give an overview in Section <ref>, may wonder why this problem was not identified in previous papers, e.g. <cit.>, who define a semantics for only knowing in a logic with negative introspection. What the authors did not realize is that by enforcing negative introspection in their logic, they actually made all statements of the form Aϕ unsatisfiable (for any satisfiable formula ϕ). They wrongly claim on page 5 that O_i (p C p) is satisfiable, but the alleged proof is wrong. If one defines V^1 ={w | p,q ∈ w, w ∈𝒲}, V^k+1 = {(w,V^k,…,V^k) | w ∈ V^1}, f'(i,k) = V^k and w' = {p}, then one can easily see that f' ∉ f_i^w' and f',w' p C p, contradicting their claim that f,w O_i (p Cp). 
In order to avoid this problem, one would need to make use of autoepistemic logic AEL, or some multi-agent version thereof <cit.> in the definition of the semantics of M_A ϕ: Intuitively, M_A ϕ should be true in a world in which all worlds that satisfy all formulas entailed by the autoepistemic theory {ϕ} are accessible. This would give rise to a logic in which a variant of principle (M) with a negated autoepistemic entailment in the place of the negated entailment is satisfied. We leave it to future work to develop the details of such a theory and investigate whether it behaves as intended. Given that it is somewhat problematic to incorporate truthfulness and negative introspection in a logic with only knowing, whereas no similar problems arise for positive introspection, we describe how the semantic framework from the previous sections can be used to define a logic of only knowing and common knowledge in which positive introspection is ensured. A world w is called positively introspective (PI) if for each ∈ and for any worlds w', w”, w” R_A w' R_A w, implies w” R_A w. A world w is called recursively PI if w is positively introspective and all the worlds that are reachable from w through the union ⋃_A∈ R_A of all accessibility relations are positively introspective. We now define _𝑃𝐼:=(U^𝑃𝐼, (R^𝑃𝐼_A)_A∈) to be the Kripke substructure of , where the underlying world set U^𝑃𝐼 is the set of all recursively PI worlds, and the accessibility relations are just the ones coming from _𝑃𝐼 being a substructure, for all ∈, R^𝑃𝐼_A:=R_A∩ (U^𝑃𝐼× U^𝑃𝐼). Now, we can define a modified entailment relation that takes into account positive introspection: For ϕ∈ and Γ⊆, we write Γϕ if ϕ^_𝑃𝐼, = for every world such that ψ^_𝑃𝐼,= for all ψ∈Γ. Note that incorporating positive introspection into the logic in this way modifies the meaning of the modalities M_A and A, because knowing at most ϕ means that any ψ that one knows must be entailed by ϕ and the entailment relation is different now that truthfulness and introspection are hard-coded into the logic: ψ may be entailed by ϕ together with truthfulness and introspection even though it was not entailed by ϕ itself in the original semantics of without truthfulness and introspection. In other words, when ϕψ but ϕψ, then M_A ϕAϕ even though M_A ϕAϕ. The following theorem establishes that positive introspection does indeed hold in this logic and that the properties that we established for in Theorem <ref> also hold for . Let ϕ, ψ∈ be two formulas, Γ, Γ'⊂ two sets of formulas, ∈ an agent, and G⊆ a non-empty set of agents. Then, the following properties hold: 1-10. All properties mentioned in Theorem <ref> with replaced by . 11. (PI) Aϕ⇒AAϕ THIS SHOULD BE LAST. To make numbering the same All properties apart from (PI) and (O) can be established in a way analogous to the proof of Theorem <ref>. * (PI) Suppose is positively introspective and suppose (Aϕ)^_𝑃𝐼,=. Suppose w” R_A w' R_A w. Since is positively introspective, it follows that w” R_A w, that ϕ^_𝑃𝐼,”=. Since w” was arbitrary, we can conclude that (Aϕ)^_𝑃𝐼,'=. Since w' was arbitrary, this in turn implies that (AAϕ)^_𝑃𝐼,=, as required. * (O') For establishing this result, we need a result similar to the first statement of Theorem <ref>: For every agent and every formula ϕ∈, there is a world w ∈ U^𝑃𝐼 such that (Aϕ)^_𝑃𝐼,w =. Let v be an arbitrary 0-biworld. W := { u|_ω^2| u ∈ U^𝑃𝐼 and (ϕAϕ)^,u = }, W := ^ω^2∖ W, w := (v,(W)_A ∈,(W)_A ∈). Clearly w is a world. 
In order to show that w ∈ U^𝑃𝐼, we need to show that w is recursively PI. From the definition of w it is easy to see that for this, it is enough to show that w is positively introspective. So suppose A ∈ and w” R_A w' R_A w. We need to show that w” R_A w. The fact that w' R_A w means that w'|_ω^2∈ W, (Aϕ)^_𝑃𝐼,w' =. Since w” R_A w', it follows that ϕ^_𝑃𝐼,w” =. Since w' is positively introspective, the fact that (Aϕ)^_𝑃𝐼,w' = implies that (AAϕ)^_𝑃𝐼,w' =, that (Aϕ)^_𝑃𝐼,w” =. So (ϕAϕ)^_𝑃𝐼,w” =, w”|_ω^2∈ W = A^w, w” R_A w, as required. It follows directly from the definition of w that (Aϕ)^_𝑃𝐼,v' = and that (M_A ϕ)^_𝑃𝐼,v' =, that (Aϕ)^_𝑃𝐼,v' =. Given that we could develop a theory with positive introspection, one may wonder what happens if one tries to similarly add negative introspection and/or truthfulness. Due to the problems with negative introspection described above, naively adding negative introspection in this way will yield a logic in which M_A ϕ is not satisfiable for any ϕ. Adding truthfulness, on the other hand, does not cause such problems. Indeed, truthfulness and positive introspection can meaningfully be added together in a way similar to how positive introspection was added in the above definitions, yielding an entailment relation _𝑇𝑃𝐼. Apart from (O), the properties of Theorem <ref> will still hold. We conjecture that the following weakening of (O') holds in this context: (O') If ϕ is a formula not involving a modality with subscript A, then Aϕ_𝑇𝑃𝐼. The proof of this conjecture is left to future work. § RELATED WORK In this paper, we have studied the interplay of common knowledge and only knowing. The former concept is quite well-known and has been extensively studied <cit.> since its first mentions in the philosophical <cit.>, and the mathematical literature <cit.>. The latter concept is younger and has been studied intensively more recently. <cit.> was among the first to introduce the notion of only knowing[There are several closely-related notions, like ignorance <cit.>, minimal knowledge <cit.>, and total knowledge <cit.>.] by presenting a single-agent logic of belief extended with a novel operator O expressing that the agent's beliefs are exactly the ones implied by the knowledge base and nothing more. He intended his logic of only knowing to capture certain types of non-monotonic reasoning patterns, like autoepistemic logic (AEL) <cit.>. In the 1990s and early 2000s, single-agent only knowing was successfully studied and implemented <cit.>, and <cit.> further revealed its potential by showing that it is also possible to capture default logic (DL) <cit.> and a variant of AEL proposed by <cit.>. For journal version, we need to reconsider the following paragraph, which seems unplausible after the recent discovery of a big problem in combining negative introspection with only knowing: Only knowing does not present particular problems in its formalization if the agent is assumed to be positive and negative introspective. Analogously, assuming mutual introspection for the multi-agent case would allow to avoid any major issue. However, while this assumption about introspection is sensible for single-agent only knowing, it is rather unintuitive for most applications of multi-agent only knowing. Unfortunately, removing the assumption of mutual introspection brings to the table a new set of issues. <cit.> and <cit.> were among the first to extend Levesque's only knowing to a multi-agent setting. 
In a joint publication, they () improved upon their independent works with an axiom system satisfying all the desired properties for only knowing. Nevertheless, the proposed axiomatization forces to include directly in the language a validity operator and the resulting semantics is not “as natural as we might like”, according to the authors. In the early 2000s, first <cit.> alone and then together with Solhaug <cit.> tried another route to generalize Levesque's axioms, without encoding the notion of validity into the language itself. Yet, once again these models and the logic itself feel complex and unnatural. An in-depth analysis of the issues of all the previously mentioned works on multi-agent only knowing is provided by Belle and Lakemeyer  (), together with a natural way to avoid such problems. Belle and Lakemeyer proposed to use models limited to a finite depth k, models at which beliefs can be nested at most k times, by introducing the concept of k-structure to represent an agent's epistemic state. In this way, they successfully extend Levesque's logic, by keeping the original idea of Levesque's worlds and generalizing its features at the same time. However, they do not take into account common knowledge. <cit.> proposed a novel formulation comprising both common knowledge and only knowing at level k. To achieve this result, they make use of the concept of so-called k+1-canonical formulas <cit.>, written as conjunctions of a 0-canonical formula and k-canonical formulas nested in knowledge and common knowledge operators. Even though the proposed pointed epistemic models can be fully characterized up to modal depth k by a k-canonical formula, such representation at depth k might feel slightly unnatural, as it characterizes the knowledge of an agent only up to level k, but it also determines the common knowledge of a group of agents, which is infinitary in nature. When disregarding the conjunction of the common knowledge operators in such formulas, one can see a correspondence between our ks and the proposed k-canonical formulas. However, there is an important limitation to this work. Namely, while our operator fully captures the knowledge and ignorance of an agent at any depth, the ^n operator proposed by <cit.> expresses only knowing for an agent just up to level n in the sense that an expression of the form ^1 ϕ should be read as “If I disregard all my knowledge deeper than level one, I only know ϕ”. Formally, as Aucher and Belle pointed out, for any two modal depths n>m, if agent only knows ϕ at depth n, then only knows ϕ at depth m, ^n →^m, but the reverse implication might not hold. The limitation of the only knowing operator to a finite depth is overcome in the same year by <cit.>. They devised an alternative approach to bring common knowledge and only knowing together, the semantic structures of which are close to our ω-biworlds (except that there is no representation of the set Aω). However, as explained in Remark <ref>, due to hard-coding negative introspection in their logic, all non-trivial formulas of the form O_Aϕ are unsatisifiable, which goes against what we expressed as Property <ref> in <ref>. Hence, such a semantics does not properly capture the intended meaning of the only-knowing operator. <cit.> observed that in <cit.>, the formula A Cp cannot be satisfied, and this holds even if negative introspection is dropped (to avoid the problem mentioned in Remark <ref>). 
<cit.> attempted to overcome this problem by proposing four different semantics, but none of them entirely fulfils this purpose. In more detail, two of the proposed semantics (Section 6.3 of <cit.>) do not allow for arbitrary nesting of the only knowing operator and thus do not cover the whole language. A third semantics (Section 6.2.2 of <cit.>) seems to solve the problem regarding the satisfiability of formulas like A Cp, but it is not precision-monotonic. This is rather problematic: in a given μ-world in such a semantics, O_A ϕ might be true, but if we provide more precise information (extending the world to depth μ+1), the formula might become false. Hence, one could only define what an agent only knows “up to a certain depth”, as in <cit.>. Finally, the semantics presented in Section 6.2.1 makes use of λ-canonical Kripke structures, for any fixed limit ordinal λ, to define a two-valued valuation for . In particular, if λ=ω, then the proposed semantics corresponds to the semantics of <cit.> with some minor adjustments. The hope of the author was that by choosing a large enough limit ordinal λ, the satisfiability issue would be solved. Unfortunately, there exist formulas, such as AA formalizing the statement “all the agent knows is that their knowledge is consistent”, that are not satisfiable in any λ-canonical Kripke structure. Even though the formula just mentioned may seem like a corner case, its satisfiability guarantees something quite desirable, namely that our semantic structures (the biworlds) are rich enough to allow the agents to identify precisely those worlds in which they have consistent knowledge. As shown in Theorem <ref>, such satisfiability problems do not occur with our definition of worlds. § CONCLUSION We defined a multi-agent epistemic logic with common knowledge and only knowing operators, which successfully encodes both notions within the same framework. First, we introduced the novel concept of μ-biworlds for countable ordinals μ, which approximates not only the worlds that an agent deems possible, but also those deemed impossible. This duality proved to be fundamental for successfully dealing with the only knowing operator in a multi-agent setting. Moreover, we have shown that the proposed definitions are indeed sensible, as they satisfy the properties one would expect (<ref>). Second, we defined the language , extending propositional logic with the modal operators A, M_A, E_G, and C_G, and a three-valued model semantics for it. Furthermore, we defined a canonical Kripke structure over completed ω^2+1-biworlds, and the two-valued semantics obtained from it was shown to coincide with the model semantics. This allowed us to prove several desirable properties (<ref>) that the resulting logic satisfies. In particular, we showed that for any formula φ, there is a unique state of mind of a given agent in which they only know φ. Finally, we have considered how our framework can be extended to satisfy properties like truthfulness, positive introspection and negative introspection. For positive introspection we have shown some positive results, whereas for truthfulness and negative introspection we have identified certain problems that arise when combining them with only knowing, and we have motivated the need for further research on these problems.
Another line of future work that we envisage is to generalize the construction of biworlds so that it becomes applicable to other areas of research. More concretely, this amounts to the construction of a set-theoretic universe in which there exists a universal set, similarly to what happens in topological set theory. If one takes such an alternative set theory as one's metatheory, the incorrect definition from the introduction of this paper could very easily be turned into a correct definition. § ACKNOWLEDGEMENTS We are grateful to Marc Denecker and Pieter Van Hertum for the fruitful discussions and feedback on earlier versions of this work. This work was partially supported by Fonds Wetenschappelijk Onderzoek – Vlaanderen (project G0B2221N) and by the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen”.
http://arxiv.org/abs/2306.01931v1
20230602221205
Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization
[ "Wenqian Cui", "Shaohui Liu", "Xiangling Fu", "Xien Liu", "Ji Wu" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization ========================================================================================================================= The disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms the performance when general text data augmentation techniques are used. To address this problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both the clinical disease corpus and the standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can yield up to a 3% performance gain compared to non-augmented counterparts, and that they work even better on smaller datasets. § INTRODUCTION The disease is a central concept in medical text processing. One of the most important tasks, disease normalization, takes diseases as both input and output and matches the diagnosis terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, different writing styles. The writing styles of disease names are diverse, since different doctors have different writing habits, so a single disease may appear under thousands of name variants. Second, data scarcity, where some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but only 3505 diseases (i.e. less than 10% of all diseases) are present in the training set. Figure <ref> illustrates the data scarcity problem in the CHIP-CDN dataset. Third, semantics density. Disease names are usually short, which makes every character carry dense semantic information. The meanings of two diseases can be very different even if they share many characters, and a single changed character can result in a dramatic change in meaning. For instance, "髂总动脉夹层(Common iliac artery dissection)" and "颈总动脉夹层(Common carotid artery dissection)" differ in only one character, but the anatomical locations of the two diseases are very different, one in the lower half of the body and the other in the upper half. Among the challenges discussed above, data scarcity is the biggest one, since the other problems can usually be mitigated by providing larger datasets for models to learn from. A common way to address the data scarcity problem is data augmentation. There are numerous data augmentation methods for general corpora, such as synonym replacement or back translation. <cit.> has shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e. semantics density), general text data augmentation methods do not work well on them, and sometimes even hurt the overall performance.
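To make this hazard concrete, the following toy sketch (ours, not part of any existing pipeline) implements character-level random deletion in the style of the simple augmentation operations cited above; the next paragraph discusses its effect on an actual disease name.

import random

def random_deletion(text, p=0.2):
    """EDA-style random deletion, applied per character for Chinese text:
    each character is dropped independently with probability p."""
    kept = [ch for ch in text if random.random() > p]
    return "".join(kept) if kept else text

name = "阻塞性睡眠呼吸暂停"        # Obstructive Sleep Apnoea
print(random_deletion(name))        # depending on the draw, may yield e.g. "阻塞性睡眠",
                                    # which reads as a different clinical concept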
For example, if random deletion <cit.> is performed on the disease "阻塞性睡眠呼吸暂停(Obstructive Sleep Apnoea)" and results in "阻塞性睡眠(Obstructive Sleep)", the meaning of the disease name changes dramatically and it becomes a different disease. Admittedly, general data augmentation methods may be able to address the challenge of different writing styles, as performing random operations on texts can be seen as a way to emulate different writing behaviors. However, for the reasons above, general data augmentation methods tend to hurt performance, as demonstrated in our experiments. Therefore, designing data augmentation methods specific to disease corpora is necessary. To bridge this gap, we propose a set of disease-oriented data augmentation methods. As with other disease-related tasks, disease normalization can be thought of as a text-matching process from clinical names to standard names in ICD coding. Therefore, the key to this task is for the model to learn encodings that capture enough of the information shared between similar diseases. For instance, the model needs to tell that "左肾发育不全(Left renal agenesis)" and "先天性肾发育不全(Congenital renal agenesis)" refer to the same disease while "髂总动脉夹层(Common iliac artery dissection)" and "颈总动脉夹层(Common carotid artery dissection)" do not, even though both pairs share many common characters. Our methods are based on the following two assumptions. First, disease names have the property of structural invariance. A disease name consists of several types of key elements, such as location, clinical manifestations, etiology, pathology, etc. In a pair of clinical disease and standard ICD disease, the corresponding elements match in most cases. Therefore, we can replace a specific element in the clinical disease and in the standard ICD disease at the same time to generate new pairs, and the matching relationship of the newly generated clinical-standard pair is still maintained. We screen the generated standard ICD diseases to ensure that they belong to the correct label and that the pairs are valid. It should be noted that replacing components may produce a clinical disease name that turns out to be fake (i.e. a disease that does not actually exist), but the key point is to make the model learn the necessary semantic associations within disease names. Second, labels in the disease normalization task have a transitivity property. Specifically, a more specific description of an object can be subsumed into a larger group with a coarser description, e.g. a yellow chair is also a chair. In the ICD coding system, there are likewise clear granularities of diseases. Therefore, we can treat fine-grained diseases as their coarse-grained upper diseases by assigning them the parent labels. Normally, a data augmentation method generates new data and trains on them along with the existing data, without altering the training paradigm. However, the disease normalization task assigns each disease a unique label, while our methods augment the labels. Therefore, if the traditional training paradigm were still applied to our augmentation methods, the same input disease could receive different labels in the dataset, which would make the model difficult to train due to label confusion.
To overcome this problem, we treat the data augmentation operation as a pre-training task (which we call augmented training) prior to the original task, so that the model first learns the necessary semantic information within diseases and then leverages that information when fine-tuning on the actual normalization dataset. Additionally, both the unnormalized disease names from the task and the standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get the whole picture of the disease-related information in ICD coding, which covers all classes of diseases, even before the actual training of the downstream task. Therefore, with all that information injected, the model performs much better on smaller datasets where many class labels are never seen in the training set. To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe that research on disease name enhancement has high value and can benefit various downstream tasks. To summarize our contributions: * We propose a set of data augmentation methods for Chinese disease normalization tasks. * Experiments validate that general data augmentation methods can even impair performance on the disease normalization task, whereas our method yields clear performance gains across various baseline models. * We also analyze the reasons why the proposed method is effective. § BACKGROUND ICD coding. ICD, the acronym of the International Classification of Diseases, is an internationally unified classification of diseases developed by the World Health Organization, and ICD-10 is the 10th version of ICD coding, which is used in our work. Each code is a combination of letters and numbers, and the coding classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that the codes form a hierarchical structure. ICD also adopts a multi-grain fashion where coarse-grained diseases are followed by fine-grained diseases. Disease normalization task. In clinical practice, doctors fill in the name of a disease according to clinical diagnosis standards along with their own writing habits, which can turn a single disease into hundreds of name variants. The disease normalization task is to match disease names written in different styles to a single standard name provided by ICD coding. After the normalization process, researchers can perform further operations on the normalized names to realize all kinds of functions used in smart medical applications. The task can be formalized as the mapping X -> Y, where X represents the clinical disease names and Y represents the standard ICD names. NER. NER stands for Named Entity Recognition, a common task in Natural Language Processing. It aims to identify entities of practical value, and their locations, in unstructured texts. The entity classes may include persons, organizations, locations, etc. In this work, we use an NER tool trained by ourselves to identify the elements in disease names in order to perform data augmentation. We note that any NER tool that can identify the elements in disease names would work, since our contribution mainly lies in the data augmentation methods.
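As a purely illustrative aid, the sketch below shows one possible shape of the interface such an NER step could expose, parsing a disease name into the axis elements used later in this paper. The keyword lists, class, and function names are hypothetical placeholders and do not correspond to the authors' released tool; a trained NER model would replace the toy dictionary lookup.

```python
# Minimal, illustrative sketch (assumptions, not the authors' NER tool): parsing a Chinese
# disease name into the three axis elements used in this paper.
from dataclasses import dataclass
from typing import Optional

# Toy vocabularies standing in for the output of a trained NER model.
LOCATIONS = ["髂总动脉", "颈总动脉", "腹股沟淋巴结", "左肾", "肾"]
QUALITIES = ["先天性", "药物性", "阻塞性"]

@dataclass
class DiseaseParse:
    raw: str                 # original disease name
    quality: Optional[str]   # disease quality axis, e.g. "先天性" (congenital)
    location: Optional[str]  # anatomical location axis, e.g. "髂总动脉"
    center: str              # disease center: what remains after removing the other axes

def parse_disease(name: str) -> DiseaseParse:
    """Rough dictionary-based stand-in for the NER step that locates axis-words."""
    quality = next((q for q in QUALITIES if name.startswith(q)), None)
    rest = name[len(quality):] if quality else name
    location = next((loc for loc in sorted(LOCATIONS, key=len, reverse=True)
                     if rest.startswith(loc)), None)
    center = rest[len(location):] if location else rest
    return DiseaseParse(raw=name, quality=quality, location=location, center=center)

if __name__ == "__main__":
    for name in ["先天性肾发育不全", "髂总动脉夹层", "颈总动脉夹层"]:
        print(parse_disease(name))
```

Any module producing an equivalent decomposition (quality, location, center) could be plugged into the augmentation methods described next.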
§ RELATED WORK In this section, we first introduce related work on data augmentation, and then introduce medical data-driven research that is similar to ours. §.§ Data Augmentation Data augmentation is a technique to synthesize new data based on existing data as a way to expand the size of a dataset. It is often used when the amount of data is not sufficient, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data while keeping the semantic information intact, data augmentation for text is more difficult due to its unstructured form <cit.>. Many works focus on augmentations applied directly to the input: <cit.> propose four simple augmentation methods based on character-level noise injection, namely replacement, insertion, swap, and deletion. Their methods are straightforward and effective, but the augmented results may introduce unwanted noise by not following grammar rules. Back translation augments data by translating the original text into a second language and then translating it back. This method preserves the semantic meaning of the original text well, but the augmented results lack diversity and are sometimes restricted by the translation tool. In order to make the augmented data more realistic, <cit.> leverages lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then performs word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. <cit.> use the MLM objective in BERT <cit.> to mask out some words and then regenerate them. <cit.> also uses the MLM task, additionally changing the segment ids to class labels. <cit.> compares three kinds of data augmentation methods using conditional pre-trained models, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also serve as a form of data augmentation by leveraging the vast amount of unlabeled data. <cit.> uses MixUp to guess the low-entropy labels of the augmented data and then mixes the labeled and unlabeled data to derive a loss term, and <cit.> performs data augmentation on unlabeled data for consistency training. However, in this work we only focus on augmenting the data itself rather than on semi-supervised learning objectives. §.§ Data approaches on medical data While most research focuses on the effect of data augmentation on general text data, there are also works that explore data augmentation on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research. Several works focus on synonym replacement for medical terms. <cit.> and <cit.> leverage the Unified Medical Language System (UMLS) to find medical synonyms and perform replacements after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, <cit.> also replaces both the medical terms in the raw texts and the classification label to obtain new training data.
While these works mainly focus on replacing a whole medical term, we investigate the possibility of replacing the components of medical terms by exploring the semantic structures within them. Additionally, <cit.> investigates the performance of EDA, conditional pre-trained language models, and back translation for data augmentation on social media texts for mental health classification. <cit.> proposes Segment Reordering as a data augmentation technique that keeps the medical semantic meaning intact. <cit.> use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training. § METHODS In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the purpose of data augmentation is to inject extra knowledge into the model, the key point is to explore the components of and relations between diseases, so that the model can develop a broad sense of their internal structure. Therefore, we leverage the multi-axis and multi-grain nature of diseases to design all of the data augmentation methods. First, disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge these elements into three main categories: disease center, anatomical location, and disease quality. This reflects the multi-axis nature of diseases. * Disease Center: The disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as "disorders" for "Other disorders of the eye with mcc". * Anatomical Location: An anatomical location is a part of the human body that has an actual meaning in anatomy. It indicates which part of the human body is ill. * Disease Quality: The quality of a disease indicates its subtype, such as "Drug-induced" for "Drug-induced peripheral neuropathy". With these three types of axis words, all kinds of disease names can be composed. Second, a disease can be described at multiple granularities. An upper disease is a coarsely defined disease and a lower disease is a fine-grained disease. The ICD coding contains many upper-lower disease pairs, distinguished by codes of different lengths. For example, in "ICD-10 Beijing Clinical Version 601", the disease name of code "A18.2" is "外周结核性淋巴结炎(Peripheral Tuberculous Lymphadenitis)" and that of "A18.201" is "腹股沟淋巴结结核(Inguinal lymph node tuberculosis)". "Peripheral Tuberculous Lymphadenitis" is a coarsely defined disease because it does not specify a single anatomical location. Moreover, a coarsely defined disease can contain multiple fine-grained diseases in ICD coding. Intuitively, although two diseases can only be called the same if all of their components are the same, it is still necessary for the model to learn which diseases are more similar than others. Therefore, we define the following data augmentation methods. §.§ Data Augmentation We perform data augmentation by assigning pseudo-labels to diseases that describe their relationships so that they can form new pairs of diseases, and we use those pairs to perform augmented training for the disease normalization task. We divide our methods into two main categories: Axis-word Replacement and Multi-grain Aggregation.
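Before detailing the two categories, the multi-grain code structure described above can be made concrete with a small sketch. The helper below is a hypothetical illustration, not the authors' implementation, of how a fine-grained code such as "A18.201" can be mapped to its coarser ancestors, which is the relation the aggregation methods introduced next rely on; it assumes the common "A18.201" -> "A18.2" -> "A18" layout.

```python
# Illustrative sketch (assumed code layout, not the authors' code): mapping a fine-grained
# ICD-10 style code to its coarser ancestors and grouping fine codes under a parent.
def parent_codes(code: str) -> list[str]:
    """Return the 4-digit and 3-digit ancestors of a 6-digit ICD-10 style code."""
    parents = []
    if len(code) > 5:          # e.g. "A18.201" -> 4-digit level "A18.2"
        parents.append(code[:5])
    if len(code) > 3:          # e.g. "A18.2"  -> 3-digit level "A18"
        parents.append(code[:3])
    return parents

# Example: group 6-digit diseases under their 4-digit parent.
icd = {
    "A18.2":   "外周结核性淋巴结炎",   # coarsely defined disease
    "A18.201": "腹股沟淋巴结结核",     # fine-grained disease
}
groups: dict[str, list[str]] = {}
for code, name in icd.items():
    if len(code) > 5:
        groups.setdefault(parent_codes(code)[0], []).append(name)
print(groups)   # {'A18.2': ['腹股沟淋巴结结核']}
```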
We call our proposed disease name data augmentation method DDA. Figure <ref> illustrates the overall pipeline of our methods. Axis-word Replacement (AR): We assume that disease names have the property of structural invariance, which means that a name derived by replacing an axis-word in a disease with another one of the same type still makes sense. Since the axis-words in an unnormalized-standard disease pair of the disease normalization task often correspond, replacing the corresponding axis-word in the clinical name and in the standard name at the same time ensures that the newly generated pair still matches. To locate all axis-words in a disease, we leverage a Named Entity Recognition (NER) tool trained by ourselves[We will open source the code of our experiments along with the NER tool for disease names on Github.]. The entity types include but are not limited to disease center, anatomical location, and disease quality. We note that the NER tool is only used for locating axis-words and can be replaced by any module that achieves the same function. We leverage both the ICD coding and the disease normalization training set to perform axis-word replacement. The detailed descriptions of each category of axis-word replacement are as follows: * AR1: AR1 is illustrated in the top left corner of Figure <ref>. First, select a pair of diseases (disease A and disease B) that share one or more axes (part 1 in the figure) but differ in another axis (part 2 in the figure). Then, replace part 2 in disease A with the part 2 in disease B. (Note: disease A can be chosen from any source, but disease B can only be chosen from the standard ICD-coding list, as it serves as the label of a disease normalization pair.) * AR1-position: Perform AR1 by fixing the disease center and replacing the anatomical location. * AR1-center: Perform AR1 by fixing the anatomical location and replacing the disease center. * AR1-quality: Perform AR1 by fixing both the disease center and the anatomical location and replacing the disease quality. * AR2: AR2 is illustrated in the top right corner of Figure <ref>. First, select a pair of unnormalized-standard diseases from the disease normalization training set. Let the unnormalized disease be disease A and the standard disease be disease B. Then, find a disease C from the ICD-coding list that shares one or more axes (part 1) but differs in another axis (part 2). Finally, replace part 2 in disease A with the part 2 in disease C, so that the replaced disease A and disease C form a new disease normalization pair. * AR2-position: Perform AR2 by fixing the disease center and replacing the anatomical location. * AR2-center: Perform AR2 by fixing the anatomical location and replacing the disease center. * AR2-quality: Perform AR2 by fixing both the disease center and the anatomical location and replacing the disease quality. Multi-Grain Aggregation (MGA): We assume that labels in the disease normalization task have a transitivity property. Specifically, a more specific description of an object can be subsumed into a larger group with a coarser description. In the ICD coding system, there are also clear granularities of diseases. The maximum length of code that can be shared between hospitals is 6, and the multi-grain structure contains 3-digit, 4-digit, and 6-digit codes.
We observe that the semantic difference between diseases that share the first 3-digit code but differ in the 4th digit can be quite large, whereas the meanings are much more similar if the diseases share the first 4-digit code. Therefore, we implement MGA augmentation in the following way. * MGA-code: We leverage the multi-grain nature of the ICD coding by assigning the label of a 6-digit disease to its corresponding 4-digit disease. We call this method "aggregation" because a 4-digit disease can normally be matched to several 6-digit diseases, so the model can learn which diseases are similar. MGA-code is illustrated in the bottom left of Figure <ref>. * MGA-code1: The 6-digit diseases are derived directly from the ICD-coding list. * MGA-code2: The 6-digit diseases are derived from the diseases in the CHIP-CDN training set whose labels are 6-digit ICD diseases. * MGA-position: Apart from the ICD coding, anatomical locations also follow a hierarchical structure, where several smaller positions can be grouped together to form a larger position. Thus, we search for diseases in the ICD coding that share the same center and whose positions are in an upper-lower relation, and we aggregate the classification labels of the lower-position diseases to their upper-position diseases. MGA-position is illustrated in the bottom right of Figure <ref>. (Note: the upper-position diseases must come from the standard ICD-coding list.) * MGA-position1: The lower-position diseases are derived directly from the ICD-coding list. * MGA-position2: The lower-position diseases are derived from the diseases in the CHIP-CDN training set. (Note: In the human body, we call one location the upper position of another if it covers a larger area. In order to find the upper or lower positions of a given position, we construct a position tree document in which the anatomical positions of the human body are organized into a tree data structure, and we use this position tree to recognize the upper-lower relations above. The same goal can be achieved with other knowledge bases of human anatomy.) §.§ Training Process * First, train on the augmented data with the disease normalization objective (augmented training). * Then, fine-tune on the original disease normalization dataset. § EXPERIMENTS §.§ Dataset We evaluate the effectiveness of our data augmentation methods on a Chinese disease normalization dataset called CHIP-CDN. CHIP-CDN originates from the CHIP-2019 competition and is included in the Chinese Biomedical Language Understanding Evaluation benchmark (CBLUE) <cit.>. The dataset contains 6000 unnormalized-standard disease pairs in the training set, 1000 pairs in the dev set, and 2000 pairs in the test set. §.§ Experimental Setup We evaluate our methods on three baselines: BILSTM <cit.>, BERT-base <cit.>, and CDN-Baseline (from CBLUE) <cit.>. For BILSTM, we use two BILSTM layers followed by an MLP layer to perform classification. For BERT-based models, we use the CLS vector to perform classification. For CDN-Baseline, we use the original model provided by its git repository[https://github.com/CBLUEbenchmark/CBLUE], which follows a "recall-match" two-step training approach based on pre-trained language models. The baseline models are chosen to demonstrate the effectiveness of our method under different types of models and training settings.
Specifically, we verify the effectiveness of DDA for a train-from-scratch model using the BILSTM model, for models with pre-trained knowledge using the BERT-base model, and for complex pipelines using the CDN-Baseline model. For the BILSTM and BERT-base models, we use accuracy to evaluate model performance. In our evaluation, we treat disease normalization as a multi-class classification task rather than a multi-label classification task, even though a few data samples match a single unnormalized disease to several standard diseases. Hence, if an unnormalized disease is matched to several standard diseases, the data sample is considered correctly predicted as long as one of the standard diseases is correctly predicted. We design the experiments in this way to keep the models as simple as possible and thus illustrate the effectiveness of DDA more clearly. For CDN-Baseline, we stick to the settings in CBLUE <cit.>, which use F1 as the evaluation metric, BERT-base as the baseline model, and the two-step training paradigm provided by CBLUE, for better comparison. To ensure fairness, we use exactly the same parameter settings for the same model. In particular, for CDN-Baseline, we use almost the same parameter settings as CBLUE's git repository, including the random seed numbers. Additionally, we use the dev set for performance comparison, since the labels of the test set of the CHIP-CDN dataset are not given. For all experiments, we keep the best-performing result as the final score. §.§ Results The results are shown in Table <ref>. The trainset in the table represents the CHIP-CDN training set. From top to bottom, the table reports the performance of different models using different data augmentation methods. Among them, BT is the back-translation data augmentation method[We use the Youdao translation tool, available at https://fanyi.youdao.com/.], and DDA is our proposed semantics-based disease name data augmentation method. The experimental results demonstrate that although EDA and back translation increase diversity, they both hurt performance in some settings (especially EDA). In contrast, DDA improves the performance in every setting. Clearly, DDA avoids the problem of EDA, and its effect is much better than that of BT. We observe that the performance improves for all models after applying the DDA methods, showing the effectiveness of our proposed methods. For the BILSTM model, the relative performance improvement reaches 6%. We further observe that the performance gain is larger for BILSTM than for the BERT-based models and CDN-Baseline, probably because the knowledge in pre-trained language models already covers some of the same information; nevertheless, our proposed method further improves their performance, showing the effectiveness of DDA. §.§ Ablation Study In this section, we evaluate the effectiveness of each data augmentation method on the BILSTM, BERT-base, and CDN-Baseline models. Since we propose two types of data augmentation methods, we evaluate them by removing each type in turn and observing the resulting performance. The results are shown in Table <ref>. We observe that removing the data generated by either type of method leads to performance degradation, thus proving the effectiveness of every method that we propose. §.§ Smaller datasets experiments We also evaluate the performance improvements on smaller datasets derived from CHIP-CDN, since the data scarcity problem is more severe in smaller datasets.
We evaluate training sets whose sizes range from 5% to 100% of the CHIP-CDN training set size. For convenience of training, for augmented training in this setting we only leverage the standard disease names in the ICD coding; no data from the disease normalization training set are used. We draw curves to compare the results with and without our proposed methods, as shown in Figure <ref>. When the size of the training set increases, both curves steadily improve. We also notice that the performance gain is larger when the training set is smaller. § CONCLUSION In this paper, we propose two main types of data augmentation methods for Chinese disease normalization tasks based on two hypotheses: that disease names have the property of structural invariance, and that the labels in the disease normalization task have a transitivity property. Our data augmentation methods explore the semantic and relational information in diseases, and are adopted in an augmented-training fashion to avoid introducing misinformation. Experimental results show that our DDA method better addresses the three main challenges of the disease normalization task, namely description diversity, data scarcity, and semantic density. Compared to the EDA and back-translation methods, our method has clear advantages on the disease normalization task. Furthermore, we show that our data augmentation methods work even better on smaller datasets. § APPENDIX §.§ Data augmentation result statistics Table <ref> reports the statistics of the data obtained using the MGA and AR data augmentation methods [We will open source the augmentation code and the augmented results on Github.]. §.§ Hyperparameter settings Table <ref> shows our hyperparameter settings. Different methods require different parameter choices. For models with word2vec or random parameter initialization, training on the augmented data can be regarded as a special pre-training task, and a large learning rate and a large number of iterations can be set to make the training sufficient. For models that use a pre-trained model (i.e. BERT) as the backbone, a small learning rate and a small number of training iterations should be used to avoid catastrophic forgetting of the valuable information in the pre-trained models. For each baseline model, we first train on the augmented dataset (augmented training) and then fine-tune on the CHIP-CDN dataset. For the CDN-Baseline model, we use Chinese-bert-wwm as the pre-trained model, with the training procedure provided by CBLUE. For the DDA method, we first train on the augmented dataset for 1 epoch with a learning rate of 5e-6 and then fine-tune on CHIP-CDN. The hyperparameter num_negative_sample is 3+3 and recall_k is 2 (the explanation of the hyperparameters num_negative_sample and recall_k can be found in the CBLUE github repository). §.§ Analysis In Table <ref>, the first row represents the distribution of the number of times each label appears in the training set. The other two rows represent the label distributions of the two types of augmented data. The statistics show that the DDA data effectively increase the coverage of labels that appear infrequently (fewer than 3 times) and labels that do not appear at all in the training set. This is beneficial for addressing the data scarcity problem of disease normalization tasks and the diversity of disease names.
This is the direct reason why DDA works. EDA and BT, in contrast, can only add samples for labels that already appear in the training set, which only addresses the problem of expression diversity; hence, their benefits are limited. §.§ Case Study We give a real example of the augmentation results generated by the different data augmentation methods. We observe that the semantic meaning of the EDA-generated result changes dramatically due to the semantic density of disease names: it alters the key information within the disease by losing the anatomical location. The results generated by BT are more realistic, but this method cannot generate samples beyond the original label scope, and it also suffers from the limitations of the translation tool. Our proposed method DDA (last two lines in the table), by contrast, not only increases the diversity of the inputs but also generates data whose labels never appear in the training set, so that sparse labels can be trained more thoroughly. §.§ Future work So far, we have only demonstrated the effectiveness of our DDA method; no experimental analysis has been done to explore the internal mechanisms that make it effective. Moreover, to further avoid the injection of misinformation, we believe that designing loss terms to select the more valuable data from the augmentation results is a promising direction. We aim to pursue research on these topics in the future.
http://arxiv.org/abs/2306.02784v1
20230605112314
Tight Big-Ms for Optimal Transmission Switching
[ "Salvador Pineda", "Juan Miguel Morales", "Álvaro Porras", "Concepción Domínguez" ]
math.OC
[ "math.OC" ]
Tight Big-Ms for Optimal Transmission Switching Salvador Pineda, Juan Miguel Morales, Álvaro Porras, Concepción Domínguez OASYS research group, University of Málaga, Spain {spineda, juan.morales, alvaroporras, concepcion.dominguez}@uma.es ====================================================================================================================================================================================================== This paper addresses the Optimal Transmission Switching (OTS) problem in electricity networks, which aims to find an optimal power grid topology that minimizes system operation costs while satisfying physical and operational constraints. Existing methods typically convert the OTS problem into a Mixed-Integer Linear Program (MILP) using big-M constants. However, the computational performance of these approaches relies significantly on the tightness of these big-Ms. In this paper, we propose an iterative tightening strategy to strengthen the big-Ms by efficiently solving a series of bounding problems that account for the economics of the OTS objective function through an upper-bound on the generating cost. We also discuss how the performance of the proposed tightening strategy is enhanced if reduced line capacities are considered. Using the 118-bus test system we demonstrate that the proposed methodology outperforms existing approaches, offering tighter bounds and significantly reducing the computational burden of the OTS problem. Big-M tightening, Bounding problem, Mixed-integer optimization, Optimal transmission switching, Topology control. This work was supported in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705), in part by the Spanish Ministry of Science and Innovation (AEI/10.13039/501100011033) through project PID2020-115460GB-I00. Á. Porras is also financially supported by the Spanish Ministry of Science, Innovation and Universities through the University Teacher Training Program with fellowship number FPU19/03053. Finally, the authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Málaga. § INTRODUCTION Traditionally, transmission lines in electricity networks have been regarded as infrastructure devices that cannot be controlled, except during instances of outages or maintenance. More recently, the possibility of flexibly exploiting the topological configuration of the grid was first suggested in <cit.> and later formalized in <cit.> into what we know today as the Optimal Transmission Switching (OTS) problem. The optimal transmission switching problem refers to the task of determining the most efficient configuration of transmission lines in a power system to achieve certain objectives. It involves deciding which transmission lines should be open or closed to optimize system performance in terms of factors such as minimizing transmission losses, voltage deviations, or congestion. Even if the power flow equations are simplified using the well-known direct current (DC) linear approximation of the power flow equations, the resulting formulation of the OTS problem, known as DC-OTS, takes the form of a mixed-integer program, which has been proven to be NP-hard for general network classes <cit.>. Up until now, the resolution of the DC-OTS has been addressed using two different methodological approaches. 
These approaches can be categorized as exact methods and heuristics. The exact methods utilize techniques derived from mixed-integer programming, such as bounding and generating valid cuts. These methods aim to solve the DC-OTS with (certified) global optimality, ensuring the best possible solution. On the other hand, heuristics aim to rapidly identify good solutions for the problem, potentially sacrificing optimality or even suggesting infeasible grid configurations. Several heuristic methods have been proposed in the technical literature to reduce the computational time in solving the OTS problem. Some of these methods focus on decreasing the number of lines that can be switched off <cit.>. Other approaches maintain the original set of switchable lines but determine their on/off status using greedy algorithms <cit.>. In contrast, the authors of <cit.> propose a parallel approach where heuristics generate promising candidate solutions to expedite traditional MIP algorithms in solving the OTS problem. Furthermore, certain data-based heuristic methods utilize information from past OTS problems to improve efficiency. For example, the authors of <cit.> employ a K-nearest neighbor strategy to significantly reduce the search space of the integer solution for the DC-OTS problem. Similarly, references <cit.> present more advanced methodologies involving neural networks to learn the optimal status of switchable lines. Within the exact methods, notable contributions can be found in references <cit.>, <cit.>, and <cit.>. In particular, the authors of <cit.> present a cycle-based formulation for the DC-OTS problem, which yields a mixed-integer linear program. They also introduce sets of strong valid inequalities for a relaxed version of their formulation that can be efficiently separated. In <cit.>, the authors focus on the mixed-integer linear formulation of the DC-OTS, utilizing a big-M approach to handle the disjunctive relationship between the power flow of switchable lines and the voltage angle differences. They prove the NP-hardness of determining valid big-Ms and propose a methodology for setting the appropriate values. Lastly, the authors of <cit.> develop a family of cutting planes specifically tailored for the DC-OTS problem. This paper introduces a new exact methodology to address the DC-OTS problem, making significant contributions to the existing state-of-the-art. Our approach determines suitable values for the big-M constants used in the mixed-integer reformulation of the DC-OTS by solving the so-called bounding problems. To obtain tighter big-M values, we impose an upper-bound on the generating cost in these bounding problems. We also investigate the synergistic effect between the big-M tightening and the reduction of the line capacities. The performance of our methodology is then compared to the approach proposed in <cit.> to determine big-M constants for the 118-bus test system. In summary, the key contributions of this work can be summarized as follows: - We propose a set of bounding problems to efficiently compute tight big-M values to be used in the optimal transmission switching problem. Besides, we enhance the performance of the bounding problems imposing a valid upper-bound on the total generating cost. - We extend the use of the bounding problem to also compute the maximum feasible power flow through the transmission lines. We demonstrate that using these reduced capacities decrease even further the big-M values. 
- We use a 118-bus test system to prove that the proposed bound tightening methodology clearly outperforms state-of-the-art approaches and general purpose methods of optimization solvers in terms of the computational burden required to solve the OTS problem. The remainder of this paper is structured as follows. Section <ref> introduces the original formulation of the DC-OTS problem, its reformulation as a mixed-integer linear program, and the existing methodologies to compute the required big-M constants. The proposed cost-driven bound tightening approach is presented in Section <ref>, which concludes with the comparison procedure used to assess its performance. Section <ref> discusses the computational results obtained for the 118-bus test system. Finally, conclusions are drawn in Section <ref>. § OPTIMAL TRANSMISSION SWITCHING In this section we introduce the standard and well-known formulation of the Direct Current Optimal Transmission Switching problem (DC-OTS). Consider a power network consisting of a set of nodes 𝒩 and transmission lines ℒ. For simplicity, we assume that there is one generator and one power load per node n ∈𝒩. Let p_n and d_n denote the power dispatch of the generator and the power consumed by the power load, respectively. Each generator is characterized by a minimum and maximum power output, p_n and p_n, and a marginal production cost c_n. We represent the power flow through the line (n,m) ∈ℒ connecting nodes n and m by f_nm. As customary, f_nm>0 represents a power flow from node n to node m, and f_nm<0 a power flow in the opposite direction. The maximum power flow from node n to node m is denoted by F_nm, and the power flow in the opposite direction is limited by F_mn. The maximum flow through a line is constrained by thermodynamics limitations and therefore, physical line capacities are always symmetric, i.e., F_nm=F_mn. However, depending on the location of generators and loads in a network, the maximum feasible flows through a line may be different in each direction, which is the reason why we consider the more general case of asymmetric line capacities. Besides, the set of transmission lines that can be switched on/off is denoted by ℒ_S ⊆ℒ. If the line (n,m) ∈ℒ_S, the binary variable x_nm determines its status, being equal to 1 if the line is fully operational, and 0 when disconnected. Using the DC approximation of the network equations, the flow f_nm through an operational line is given by the product of the susceptance of the line, b_nm, and the difference of the voltage angles at nodes n and m, i.e., θ_n-θ_m. We use bold symbols to define the vectors of variables 𝐩=[p_n, n ∈𝒩], θ=[θ_n, n ∈𝒩], 𝐟=[f_nm, (n,m) ∈ℒ], and 𝐱=[x_nm, (n,m) ∈ℒ_𝒮]. With this notation in place, the DC-OTS problem can be formulated as follows: min_p_n,f_nm,θ_n,x_nm ∑_n c_n p_n s.t. f_nm = x_nmb_nm(θ_n-θ_m), ∀ (n,m) ∈ℒ_S f_nm = b_nm(θ_n-θ_m), ∀ (n,m) ∈ℒ∖ℒ_S ∑_m:(n,m)∈ℒf_nm - ∑_m:(m,n)∈ℒf_mn = p_n - d_n, ∀ n ∈𝒩 p_n ≤ p_n ≤p_n, ∀ n ∈𝒩 -x_nmF_mn≤ f_nm≤ x_nmF_nm, ∀ (n,m) ∈ℒ_S -F_mn≤ f_nm≤ F_nm, ∀ (n,m) ∈ℒ∖ℒ_S θ_1 = 0 x_nm∈{0,1}, ∀ (n,m) ∈ℒ_S The objective function (<ref>) minimizes the total electricity generation cost. The power flow through transmission lines is defined in (<ref>) and (<ref>). In the case of a switchable line, constraint (<ref>) includes the binary variable x_nm to enforce this relationship only when the line is in service. Naturally, x_nm = 0 implies that f_nm = 0. 
The nodal power balance equation is ensured by (<ref>), while constraints (<ref>) impose that the power output of generating units must lie within the interval [p_n,p_n]. Constraints (<ref>) and (<ref>) limit the maximum power flow through switchable and non-switchable lines, respectively. Equation (<ref>) arbitrarily sets one of the voltage angles to zero, while the binary character of variables x_nm is imposed by constraint (<ref>). Problem (<ref>) is a mixed-integer nonlinear programming problem due to the product x_nm(θ_n-θ_m) in (<ref>). Even when the power network includes a connected subgraph of non-switchable lines, this problem has been proven to be NP-hard <cit.>. However, constraint (<ref>) can be linearized by introducing a pair of large enough constants M_nm, M_mn per switchable line <cit.>. By doing so, equation (<ref>) can be replaced by the two following inequalities: f_nm≥ b_nm(θ_n-θ_m)-M_nm(1-x_nm) f_nm≤ b_nm(θ_n-θ_m)+M_mn(1-x_nm) where the large constants M_nm, M_mn are guaranteed to be upper bounds of b_nm(θ_n-θ_m) and b_nm(θ_m-θ_n), respectively, when the line (n,m) is disconnected (x_nm = 0). Under that assumption, the DC-OTS is reformulated as the following mixed-integer linear programming problem min_p_n,f_nm,θ_n,x_nm ∑_n c_n p_n s.t. (<ref>)-(<ref>), (<ref>) Although model (<ref>) can be solved using off-the-shelf mixed-integer optimization solvers, such as Gurobi <cit.>, the choice of the bounds M_nm, M_mn is of utmost importance. If these bounds are too loose, the relaxations performed throughout the branch-and-bound or branch-and-cut algorithms are too poor, and the total computational burden is expected to increase significantly. In all existing works that reformulate the DC-OTS problem as a mixed-integer program, these large enough constants are assumed to be symmetric, i.e., M_nm=M_mn. The review paper <cit.> collects in Table 1 a summary of all proposed symmetric big-M values used in the technical literature. In particular, the authors of <cit.> propose a method to compute big-M values based on the shortest and longest paths between two nodes. This methodology has been recently revisited in <cit.>, where the authors argue that the lowest possible value of these bounds denoted by M^ OPT_nm and M^ OPT_mn can be obtained by solving the following bounding problems M^ OPT_nm:=b_nm×(<ref>)-(<ref>) ∩ 𝒳_nmmax (θ_n-θ_m) M^ OPT_mn:=b_nm×(<ref>)-(<ref>) ∩ 𝒳_nmmax (θ_m-θ_n) where 𝒳_nm:= {𝐱∈𝔹^|ℒ_𝒮|: x_nm = 0} imposes that the binary variable associated with the switchable line (n,m) is equal to 0. As illustrated in <cit.>, problem (<ref>) can be unbounded in power systems where switching off lines can result in isolated subnetworks. However, due to reliability and security standards, islanding in power grids is to be avoided in general and therefore, we assume that the set of switchable lines ℒ_𝒮 is such that the connectivity of the whole power network is always guaranteed by means of a spanning subgraph. The authors in <cit.> also show that, even when M^ OPT_nm is finite, computing it is as hard as solving the original DC-OTS problem. Therefore, they propose an efficient methodology to find other valid bounds for (<ref>) as follows: [left=(𝒮𝒫) ]align M_nm = b_nm ∑_(i,j)∈SP_nmF_ij/b_ij, ∀(n,m) ∈ℒ_𝒮 M_mn = b_nm ∑_(i,j)∈SP_mnF_ij/b_ij, ∀(n,m) ∈ℒ_𝒮 where SP_nm is the shortest path from n to m, and SP_mn the shortest path from m to n. 
These shortest paths are determined on a directed graph with edge costs c_nm=F_nm/b_nm and c_mn=F_mn/b_nm for the lines that belong to the connected spanning subgraph, and c_nm=c_mn=∞ for the switchable lines. These shortest paths can be efficiently computed using Dijkstra's algorithm <cit.>. In reference <cit.>, line capacities are assumed symmetric and therefore, the big-M values computed by (<ref>) are also symmetric, that is, M_nm=M_mn. For given line capacities 𝐅, using equations (<ref>) to obtain the bounds 𝐌 for all switchable lines is denoted as 𝐌 = 𝒮𝒫(𝐅). Among the references reviewed in <cit.>, the methodology proposed in <cit.> is the one that leads to tighter big-M values and therefore, this approach is used here as a benchmark. In the next section we propose a novel methodology to compute valid bounds that are tighter than those described in <cit.> and therefore reduce the computational burden of solving model (<ref>). Conversely to all existing methodologies, the one we propose in this paper allows us to compute asymmetric big-M values that yield tighter mixed-integer reformulations of the DC-OTS problem. § BOUND TIGHTENING METHODOLOGY §.§ Big-M tightening The methodology proposed in this paper to find the values of the large constants M_nm, M_mn is based on the following relaxations of problems (<ref>) M_nm=b_nm×ℛ(𝐅,𝐌) ∩ 𝒳^0_nmmax (θ_n-θ_m) M_mn=b_nm×ℛ(𝐅,𝐌) ∩ 𝒳^0_nmmax (θ_m-θ_n) where the feasible region defined by (<ref>)-(<ref>) is replaced by the set ℛ(𝐅,𝐌):= {( 𝐩, θ, 𝐟, 𝐱) ∈ℝ^2|𝒩| +|ℒ| + |ℒ_𝒮|: (<ref>)-(<ref>), (<ref>)} based on the linearization (<ref>). Note that the feasible region ℛ depends on the parameter vectors 𝐅=[(F_nm,F_mn), (n,m) ∈ℒ] and 𝐌=[(M_nm,M_mn), (n,m) ∈ℒ_S]. Besides, the set 𝒳^0_nm:= {𝐱∈ℝ^|ℒ_𝒮|: 0≤𝐱≤1, x_nm = 0} is a relaxation of the set 𝒳_nm in which variables 𝐱 can take any continuous value between 0 and 1. For valid bound values 𝐌, it is guaranteed that 𝒳_nm⊂𝒳^0_nm and therefore, M_nm≥ M^ OPT_nm and M_mn≥ M^ OPT_mn. Besides, since optimization problems in (<ref>) are linear, the proposed methodology to find valid bounds for inequality constraints (<ref>) is computationally efficient. For the remaining of this paper, we denote problems (<ref>) as bounding problems <cit.>. Although bounding problems (<ref>) are easy to solve, the proposed relaxation can yield too loose bounds such that M_nm≫ M^ OPT_nm and/or M_mn≫ M^ OPT_mn and therefore, the computational burden of solving (<ref>) using these bounds can still be substantial. To avoid this issue, we include additional constraints to the bounding problems (<ref>) so that the obtained big-M values are as tight as possible. In reference <cit.> the authors use a constraint on the generation cost of the network-constrained unit commitment problem to efficiently remove inactive constraints of the optimization model. Inspired by this idea, one may wonder whether it is necessary to choose big-M values that guarantee the feasibility of all integer solutions, or whether it could be more effective if these bounds were tuned to also remove some feasible but suboptimal integer solutions. For the sake of intuition, let us assume that the solution of problem (<ref>) for a given switchable line indicates that the maximum angle difference is reached when the most expensive generators are producing at maximum capacity and the cheapest units are not generating anything. 
Most likely, the dispatch that maximizes this angle difference is much more expensive than that obtained by the DC-OTS problem and therefore, the actual angle difference at the optimal solution of (<ref>) is probably much lower than that computed by (<ref>). Accordingly, we define in this paper the set 𝒞:= {𝐩∈ℝ^|𝒩|: ∑_n c_n p_n ≤C}, where C is an upper bound on the optimal generation cost of the DC-OTS problem. The bounding problems that consider an upper bound on the production cost are then formulated as follows: (ℬℳ): M_nm = b_nm ×max_ℛ(𝐅,𝐌) ∩ 𝒳^0_nm∩ 𝒞 (θ_n-θ_m), M_mn = b_nm ×max_ℛ(𝐅,𝐌) ∩ 𝒳^0_nm∩ 𝒞 (θ_m-θ_n). Since the feasible regions of bounding problems (<ref>) are contained in the feasible regions of (<ref>), we can guarantee that the obtained bounds are tighter than those determined in (<ref>). Obviously, the tighter the value of the upper-bound cost C, the smaller the feasible regions of problems (<ref>). This implies lower big-M values and the consequent reduction of the computational burden of problem (<ref>). For given line capacities 𝐅, big-M values 𝐌 and upper-bound cost C, using the bounding problems (<ref>) to update the big-M values for all switchable lines is denoted as 𝐌 = ℬℳ(𝐅, 𝐌, C). Importantly, while the bounds computed by (<ref>) according to the method proposed in <cit.> are symmetric, the big-M values obtained by the bounding problems (<ref>) are not symmetric in general. Another relevant point to consider is that a decrease of the big-M values associated with a specific switchable line has an impact on the feasible region ℛ of the bounding problems related to the other switchable lines. As a result, it may be necessary to solve the proposed bounding problems multiple times for the entire set of switchable lines. By doing so, the big-M values consistently decrease and become more refined through successive iterations. §.§ Line capacity tightening As discussed in Subsection <ref>, the feasible region of problems (<ref>) is reduced by imposing an upper bound on the optimal generation cost. Following this line of thought, the feasible region ℛ can also be shrunk by tightening the line capacities 𝐅. For instance, let us consider a given transmission line through which the power flow cannot exceed 100MW due to thermal limitations. However, given the location and capacity of the generating units, the demand location and variability, the network topology and parameters, and an upper bound on the generating cost, the power flow through that line may in fact be guaranteed to always remain below 80MW. In such a case, we can tighten this line capacity with the following computational advantages. By reducing the capacities of the lines in the connected spanning subgraph, the big-Ms computed by (<ref>) also decrease. Besides, since constraint (<ref>) includes the product x_nmF_nm, reducing the capacity of switchable lines also makes model (<ref>) tighter. For these reasons, we also propose in this section to compute the maximum feasible flows through all transmission lines as follows: (ℬℒ): F_nm = b_nm ×max_ℛ(𝐅,𝐌) ∩ 𝒳^1_nm∩ 𝒞 (θ_n-θ_m), F_mn = b_nm ×max_ℛ(𝐅,𝐌) ∩ 𝒳^1_nm∩ 𝒞 (θ_m-θ_n), where 𝒳^1_nm:= {𝐱∈ℝ^|ℒ_𝒮|: 0≤𝐱≤1} if (n,m) ∈ℒ∖ℒ_S, and 𝒳^1_nm:= {𝐱∈ℝ^|ℒ_𝒮|: 0≤𝐱≤1, x_nm = 1} if (n,m) ∈ℒ_S. Obviously, the maximum flows determined by (<ref>) are always lower than or equal to the original capacities determined by thermodynamic limitations.
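Once the binaries are relaxed, the (ℬℳ) and (ℬℒ) bounding problems are plain linear programs. As a purely illustrative aid, the sketch below shows how one direction of a (ℬℳ) problem could be assembled with gurobipy, assuming the network data lives in simple Python dictionaries; the data layout and function name are hypothetical and this is not the authors' implementation. The (ℬℒ) capacity problems differ only in fixing x_nm = 1 for the target line.

```python
# Illustrative gurobipy sketch of one (BM) bounding LP for a switchable line (n, m):
# binaries relaxed to [0, 1], x_nm fixed to 0, generating cost capped by C_up.
import gurobipy as gp
from gurobipy import GRB

def bm_bound(buses, lines, switchable, M, C_up, target):
    """Candidate big-M for direction n->m of `target` under current bounds M and cost cap C_up.

    buses[n] = {'d': demand, 'pmin': ..., 'pmax': ..., 'c': marginal cost}
    lines[(n, m)] = {'b': susceptance, 'Fnm': ..., 'Fmn': ...}
    M[(n, m)] = (M_nm, M_mn) for switchable lines; all containers are assumed layouts.
    """
    md = gp.Model("bm_bound")
    md.Params.OutputFlag = 0
    p = {n: md.addVar(lb=buses[n]["pmin"], ub=buses[n]["pmax"]) for n in buses}
    th = {n: md.addVar(lb=-GRB.INFINITY) for n in buses}
    f = {l: md.addVar(lb=-lines[l]["Fmn"], ub=lines[l]["Fnm"]) for l in lines}
    x = {l: md.addVar(lb=0.0, ub=(0.0 if l == target else 1.0)) for l in switchable}  # relaxed binaries

    for (n, m) in lines:
        b = lines[(n, m)]["b"]
        if (n, m) in switchable:
            # big-M linearisation of f = x * b * (th_n - th_m) plus capacity scaled by x
            md.addConstr(f[(n, m)] >= b * (th[n] - th[m]) - M[(n, m)][0] * (1 - x[(n, m)]))
            md.addConstr(f[(n, m)] <= b * (th[n] - th[m]) + M[(n, m)][1] * (1 - x[(n, m)]))
            md.addConstr(f[(n, m)] <= lines[(n, m)]["Fnm"] * x[(n, m)])
            md.addConstr(f[(n, m)] >= -lines[(n, m)]["Fmn"] * x[(n, m)])
        else:
            md.addConstr(f[(n, m)] == b * (th[n] - th[m]))   # non-switchable line
    for n in buses:  # nodal power balance
        md.addConstr(gp.quicksum(f[l] for l in lines if l[0] == n)
                     - gp.quicksum(f[l] for l in lines if l[1] == n) == p[n] - buses[n]["d"])
    md.addConstr(gp.quicksum(buses[n]["c"] * p[n] for n in buses) <= C_up)  # cost cap (set C)
    md.addConstr(th[next(iter(buses))] == 0.0)                               # reference angle

    n0, m0 = target
    md.setObjective(lines[target]["b"] * (th[n0] - th[m0]), GRB.MAXIMIZE)
    md.optimize()
    return md.ObjVal   # candidate M_nm; maximise th[m0] - th[n0] instead for M_mn
```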
Updating the maximum power flows through all lines in the network using the bound problems (<ref>) is denoted as 𝐅 = ℬℒ(𝐅, 𝐌, C). Additionally, it is worth mentioning that the reduced line capacities computed in (<ref>) can also be used in (<ref>) to get tighter big-M values. §.§ Comparison procedure In summary, the method we propose in this paper starts by finding a tight upper-bound on the generating cost C. Then, the big-M values and the line capacities are iteratively tightened by solving the bounding problems (<ref>) and (<ref>), respectively. Finally, the reduced bounds are used to solve the mixed-integer formulation of the DC-OTS problem (<ref>). Algorithm <ref> summarizes the main steps of the proposed methodology. In order to investigate the improvements derived from the proposed methodology, we compare the computational performance of the following four variations of Algorithm <ref>: SO Shortest-path approach with original line capacities. This is the benchmark strategy proposed in <cit.> and only includes the initialization step of Algorithm <ref>. Thus, model (<ref>) is solved with 𝐅^0 and 𝐌^0. SR Shortest-path approach with reduced capacities. This is an improvement of the method proposed in <cit.> that uses the reduced line capacities obtained by the bounding problems (<ref>) and updates the big-M values using (<ref>). Thus, this approach runs steps 3ii) and 2i) in Algorithm <ref>, in that order, and ignores 2ii) and 3i). TO Tightening approach with original capacities. In this strategy we propose the big-M values are obtained by solving the bounding problems (<ref>) with the original line capacities in all iterations. Thus, this approach runs steps 2ii) and 3i) in Algorithm <ref>, and ignores 2i) and 3ii). TR Tightening approach with reduced capacities. This approach reduces the big-M values and the line capacities by solving the bounding problems and then, it is expected to yield the tightest bounds. This approach runs steps 2ii) and 3ii) in Algorithm <ref>, and ignores 2i) and 3i). Furthermore, in order to analyze the impact of the upper-bound cost on the proposed tightening methodology, we compare two different procedures to compute the maximum cost C to be used in the bounding problems: N Naive approach. This approach computes an upper bound on the cost by satisfying the total demand with the most expensive generators. By disregarding the network constraints, this cost is the solution to the linear problem: C = max_p_n ∑_n c_n p_n s.t. ∑_n p_n = ∑_n d_n Obviously, this upper bound on the optimal generating cost does not reduce the feasibility region of the bounding problems and is just considered here for benchmarking purposes. If model (<ref>) is used to compute the upper-bound cost, the method is denoted as XX-N, where XX represents the bound tightening procedure SO, SR, TO or TR. H Heuristic approach. The technical literature also proposes some heuristic approaches to solve the DC-OTS problem, like the greedy algorithm described in <cit.>. At each step, this algorithm disconnects one switchable line at a time, computes the resulting operating cost by solving an OPF linear problem, and fixes the status of the line that leads to the lowest cost to 0. The algorithm continues until disconnecting any remaining switchable line leads to a cost increase. Although this procedure does not lead to the optimal solution of the DC-OTS, its objective function can be close enough depending on each particular case. 
If this heuristic approach is used to compute the upper-bound cost, the method is denoted as XX-H. In the next section, we compare the performance of the four strategies described above for each upper-bound cost using different metrics. For instance, for each switchable line we compute the big-M range relative to that determined in <cit.> as follows: δ^M_nm = 100 (M_nm + M_mn)/(2 M^0_nm), where M^0_nm are the big-M values computed by (<ref>) with the original capacities. For instance, in the SO benchmark approach, δ^M_nm = 100% for all switchable lines. In the remaining methods, δ^M_nm = 80% means that the big-M range has been reduced by 20% for that particular switchable line. We can also compute an average value over all switchable lines as Δ^M = (∑_(n,m) ∈ℒ_S δ^M_nm)/|ℒ_S|. Similarly, we can define the relative range of the power flows through any transmission line as δ^L_nm = 100 (F_nm + F_mn)/(2 F^0_nm), where F^0_nm is the original line capacity. The average value is computed as Δ^L = (∑_(n,m) ∈ℒ δ^L_nm)/|ℒ|. Apart from the parameters Δ^M and Δ^L, we also compare the four strategies in terms of the computational burden required to solve model (<ref>) using the bounds obtained by Algorithm <ref>. § CASE STUDY This section provides an overview of the computational findings obtained from the various methodologies discussed in Section <ref> when applied to a practical network. Our focus is on comparing the different approaches using a realistic 118-bus network, which consists of 186 lines as documented in <cit.>. This network's scale is significant enough to pose challenges for current algorithms, yet it remains manageable in terms of computational complexity. Notably, this particular network serves as a widely employed benchmark for testing optimal transmission switching (OTS) solving strategies in the existing literature <cit.>. As justified in Section <ref>, we consider a fixed connected spanning subgraph of 117 lines, while the remaining 69 lines can be switched on or off to minimize the operation cost. For this network, we generate 100 different instances of the OTS problem that differ in the nodal demand d_n and the spanning subgraph. The nodal demand is randomly sampled using independent uniform distributions in the range [0.9d_n, 1.1d_n], where d_n is the baseline demand. The spanning subgraph is also randomly chosen for each instance. All optimization problems have been solved using GUROBI 9.1.2 <cit.> on a Linux-based server with CPUs clocking at 2.6 GHz, 1 thread and 8 GB of RAM. In all cases, the optimality gap has been set to 0.01% and the time limit to 1 hour. All Gurobi options are set to their default values. Before presenting the computational results of this case study, we must clarify an implementation detail of Algorithm <ref>. In steps 2) and 3) of this algorithm, the corresponding bounding problems can be solved in parallel, thus reducing the final computational burden of the bound tightening procedure. However, the results of this section are obtained by solving all bounding problems sequentially and adding up the time required to solve each linear optimization model. By doing so, the comparison of the computational burden of the different models is more informative, especially if the number of iterations is high. Besides, this strategy allows a dynamic update of the line capacities and the big-M values. That is, the bounding problem corresponding to the k-th line can be solved using the updated line capacities of the (k-1)-th line, and so on.
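To make this sequential, dynamically updated sweep more concrete, the following schematic sketch shows one pass over the switchable lines. The function `solve_angle_bound` is a placeholder for an LP solver such as the gurobipy sketch given earlier; all names, signatures, and data layouts are hypothetical illustrations rather than the authors' code.

```python
# Schematic sketch of one big-M tightening sweep: line k already benefits from the
# bounds updated for lines 1..k-1 (the dynamic update discussed above).
def tighten_once(M, F, switchable, C_up, solve_angle_bound):
    """M[line] = (M_nm, M_mn); F holds (possibly reduced) line capacities; C_up is the
    cost cap; solve_angle_bound(line, direction, M, F, C_up) returns b_nm times the
    maximum angle difference of the corresponding bounding LP."""
    for line in switchable:
        new_nm = solve_angle_bound(line, "nm", M, F, C_up)
        new_mn = solve_angle_bound(line, "mn", M, F, C_up)
        # never loosen a bound: keep the minimum of the old and newly computed values
        M[line] = (min(M[line][0], new_nm), min(M[line][1], new_mn))
    return M

if __name__ == "__main__":
    # toy usage with a dummy solver that always returns 1.0
    dummy = lambda line, direction, M, F, C_up: 1.0
    print(tighten_once({("1", "2"): (4.0, 4.0)}, {}, [("1", "2")], 100.0, dummy))
```

The line-capacity updates of the (ℬℒ) problems would follow the same pattern, and repeating such sweeps yields the successive iterations compared in the case study below.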
After that clarification, we start this numerical analysis by fixing the number of iterations (indicated in parenthesis for each method) to 1 and comparing the four methodologies described in Section <ref> combined with the two strategies to compute the upper-bound cost. Table <ref> collects, for each approach, the big-M and line capacity relative ranges and the computational time averaged over the 100 random instances considered. The reported computational time includes the time of solving all the bounding problems sequentially along with the resulting mixed-integer DC-OTS problem. This table also includes the number of instances that are not solved to optimality in less than one hour, and the maximum optimality gap among those unsolved instances. If we compare the benchmark SO with TO-N(1), we observe that the big-M values are significantly tightened, the average time is reduced by 12%, and the number of unsolved problems is also lower. On the contrary, the maximum gap increases from 0.69% to 1.69%. If the line capacities are reduced using the bounding problems, approaches SR-N(1) and TR-N(1) also involve computational savings that amount to 28% approximately. In any case, it seems that using the naive upper-bound cost leads to quite modest computational improvements. In order to improve the performance of the bound tightening proposed in this paper, the last three rows of Table <ref> provide the results if the upper-bound cost is obtained by the heuristic approach described in <cit.>. For the 100 instances of this case study, the average and maximum error incurred by this greedy approach amounts to 2% and 11%, respectively. Consequently, despite the valuable insights into generating costs provided by this heuristic procedure, the obtained solutions still exhibit a significant degree of suboptimality. By analyzing these results, we realize that using a tighter upper-bound on the operating cost has a more notable impact on the computational burden of the DC-OTS problem. For instance, even if the original line capacities are considered, the approach TO-H(1) is able to halve the computational time and the number of unsolved instances yielded by SO. Besides, even with one iteration, the approach TR-H(1) strengthens both the line capacities and big-M values by solving the proposed bounding problems and consequently reduces the computational time by 75% and only reports one unsolved instance. We continue this case study by analyzing the impact of the number of iterations through a comparison of the results collected in Table <ref>. Naturally, increasing the iterations leads to tighter bounds for approaches SR and TR. However, the bound values seem to stabilize after three or four iterations. It is also relevant to highlight that, for the same number of iterations, TR always outperforms SR, which clearly indicates that the big-M values obtained by the bounding problems (<ref>) are tighter than those computed by (<ref>), even if the line capacities are adjusted to more realistic values. In fact, the approach TR-H with one iteration yields better computational results than SR-H with four iterations. The best results are provided by TR-H(4), an approach that achieves a time reduction of 88% and is able to solve all instances in less than one hour. To conclude this case study, Figure <ref> illustrates the number of instances solved as a function of the computational time for the following three approaches: - PROPOSED: This is the cost-driven bound tightening approach proposed in this paper. 
Among all investigated methods, we choose TR-H(4) since it is the one that presents the best performance in the previous analysis. - FATTAHI: This methodology proposed by Fattahi et al. in <cit.> is based on determining the shortest-path through the spanning subgraph that connects the two nodes of every switchable line. This is the benchmark approach SO that represents the state-of-the-art in the technical literature. - GUROBI: This strategy consists in solving the non-linear OTS problem (<ref>) directly with Gurobi. Gurobi is able to handle the product of binary and continuous products by using a big-M linearization with bounds that are internally computed by the solver or by adding SOS1 variables. This figure allows us to draw the following conclusions. First, that the general-purpose procedure to linearize the product of binary and continuous variables implemented in Gurobi can be improved by using specific knowledge about the problem to be solved. For instance, using the power flow equations involved in the DC-OTS problem and graph theory, the shortest-path approach proposed in <cit.> provide tighter bounds than those determined internally by Gurobi to linearize the product of binary and continuous variables. Second, although the benchmark SO outperforms Gurobi, the obtained big-M values can still be loose and therefore, the computational time can still be substantial for some instances. Third, that the proposed cost-driven bound tightening methodology remarkably improves existing approaches and is able to solve the 100 random DC-OTS instances in less than 800 seconds and to reduce the average computational time by 88% with respect to the state-of-the-art methodology. § CONCLUSIONS The optimal transmission switching (OTS) aims at determining the network topology that minimizes the generating cost to satisfy a given demand. The OTS has the potential to generate substantial cost savings, but its computational requirements are high due to its typical formulation as a mixed-integer linear problem that belongs to the NP-hard class. In particular, the MIP formulation of the OTS includes big-M constants that can lead to poor relaxations if their values are too large. In this paper we propose an iterative tightening methodology that effectively reduces the big-M values, thereby alleviating the computational burden associated with the OTS. The proposed approach requires the solution of inexpensive bounding problems that account for economic information about the operating cost. Furthermore, the big-M values can be further tightened by incorporating reduced capacities of the transmission lines, which are in turn obtained by solving similar bounding problems. Using the 118-bus test system, we demonstrate that our methodology outperforms existing approaches to find proper big-M values and is able to reduce the computational of the OTS problem by 88% in average with respect to them. The proposed approach is dependent on the existence of a spanning subgraph of connected lines. Therefore, extending this approach to handle the general case, where any line can be disconnected, represents a promising direction for future research. IEEEtran
http://arxiv.org/abs/2306.10467v1
20230618033424
Precession and Lense-Thirring effect of hairy Kerr spacetimes
[ "Meng-He Wu", "Hong Guo", "Xiao-Mei Kuang" ]
gr-qc
[ "gr-qc" ]
=0.4 cm [email protected] School of Mathematics, Physics and Statistics, Shanghai University of Engineering Science, Shanghai 201620, China Center of Application and Research of Computational Physics, Shanghai University of Engineering Science, Shanghai 201620, China [email protected] School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China [email protected] (corresponding author) Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China =0.4 cm We investigate the frame-dragging effect of the hairy Kerr spacetimes on the spin of a test gyro and accretion disk. Firstly, we analyze Lense-Thirring (LT) precession frequency, geodetic precession frequency, and the general spin precession frequency of a test gyro attached to a stationary observer in the spacetime. We find that the black hole hair suppresses those precession frequencies in comparison with that occurs in Kerr spacetime in general relativity. Moreover, using those frequencies as probe, we differentiate the hairy Kerr black hole (BH) from naked singularity (NS). Specifically, as the observer approaches the central source along any direction, the frequencies grow sharply for the hairy Kerr BH, while for the hairy NS they are finite except at the ring singularity. Then, we investigate the quasiperiodic oscillations (QPOs) phenomena as the accretion disk approaches the hairy Kerr BH or NS. To this end, we analyze the bound circular orbits and their perturbations. We find that as the orbits approach the corresponding inner-most stable circular orbit (ISCO), both LT precession frequency and periastron precession frequency behave differently in the hairy Kerr BH and NS. Additionally, the hairy parameters have significant effects on the two frequencies. We expect that our theoretical studies could shed light on astrophysical observations in distinguishing hairy theories from Einstein's gravity, and also in distinguishing BH from NS in spacetime with hair. Precession and Lense-Thirring effect of hairy Kerr spacetimes Xiao-Mei Kuang July 31, 2023 ============================================================= § INTRODUCTION Einstein's general relativity (GR) is a successful theory in modern physics and passes plenty of test in astrophysics as well as astronomy. The existence of black holes is a prediction of GR, and black holes provide natural laboratories to test gravity in the strong field regime. Recent observations on gravitational waves <cit.> and the supermassive black hole images <cit.> agree well with the predictions of Kerr black hole described by the Kerr metric from GR. More observations, such as those from Next Generation Very Large Array <cit.> and Thirty Meter Telescope <cit.>, also provide significant properties in the regime of strong gravity of black hole spacetimes. In particular, these observations open a valuable window to explore, distinguish, or constrain physically viable black hole solutions that exhibit small deviations from the Kerr metric. On the other hand, due to the additional surrounding sources, the black holes in our Universe could obtain an extra global charge dubbed `hair' and the spacetime may deviate from the Kerr metric. Recently, a rotating hairy Kerr black hole was constructed with the use of the gravitational decoupling (GD) approach <cit.>, which is designed for describing deformations of known solutions of GR due to the inclusion of additional sources. 
The hairy Kerr black hole attracts lots of attentions. Plenty of theoretical and observational investigations have been done in this hairy Kerr black hole spacetime, for examples, thermodynamics <cit.>, quasinormal modes and (in)stability <cit.>, strong gravitational lensing and parameter constraint from Event Horizon Telescope observations <cit.>. There are many theoretical scenarios in which a visible singularity exists, especially after it was found in <cit.> that a spacetime with a central naked singularity can be formed as an equilibrium end state of the gravitational collapse of general matter cloud. However, whether naked singularity (NS) really exists in nature and what is its physical signature distinguished from black hole (BH) are still open questions. In Mathematics, the Kerr spacetime metric in GR, as a solution of Einstein field equations, describes Kerr BH and the Kerr singularity is contained in the event horizon, otherwise, if the event horizon disappears, the metric describes a NS. In shadow scenario, the shadow cast by NS spacetime was found to have similar size to that an equally massive Schwarzschild black hole can cast <cit.>. More elaborate discussions on shadow from NS have been carried out in <cit.>, which are important theoretical studies and match recent observations on the shadow of M87* and SgrA*. In addition, the orbital dynamics of particles or stars around a horizonless ultra-compact object could be another important physical signature which could be distinguishable from that around black holes, since the nature of timelike geodesics in a spacetime closely depends upon the geometrical essence of the spacetime. In this scenario, the timelike geodesics around different types of compact objects and the trajectory of massive particles or stars are widely studied <cit.> and reference therein. People expect that the nature of the precession of the timelike orbits can provide more information about the central compact object, as GRAVITY <cit.> and SINFONI <cit.> are continuously tracking the dynamics of `S' stars orbiting around SgrA*. For the timelike orbits around a rotating compact body, the geodetic precession <cit.> and Lense-Thirring (LT) precession <cit.> are two extraordinary effects predicted by GR. The former effect is also known as de Sitter precessions and it is due to the spacetime curvature of the central body, while the latter effect is due to the rotation of the central body which causes the dragging of locally inertial frames. One can examine the dragging effects by considering a test gyro based on the fact that a gyroscope tends to keep its spinning axis rigidly pointed in a fixed direction relative to a reference star. The LT precession and geodetic effects have been measured in the Earth’s gravitational field by the Gravity Probe B experiment, in which the satellite consists of four gyroscopes and a telescope orbiting 642 km above the Earth <cit.>. The geodetic precession in Schwarzschild BH and the Kerr BH have been studied in <cit.>. The LT precession is more complex and usually some approximations should be involved. In the weak field approximations, the magnitude of LT precession frequency is proportional to the spin parameter of the central body and decreases in the order of r^-3 with r the distance between the test gyro from the central rotating body <cit.>. 
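As a numerical illustration of this weak-field statement (not taken from the original paper), the textbook expression Ω⃗_LT = G[3(J⃗·r̂)r̂ − J⃗]/(c²r³) for a gyroscope at rest can be evaluated directly; the snippet below only exhibits the r^-3 falloff and the dependence on the polar angle, and the numerical value of J used is arbitrary.

import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def omega_LT_weak_field(J, r, theta):
    # |Omega_LT| for a central body of spin angular momentum J (kg m^2/s) along z,
    # evaluated at radius r (m) and polar angle theta (rad).
    Jvec = np.array([0.0, 0.0, J])
    rhat = np.array([np.sin(theta), 0.0, np.cos(theta)])
    omega = (G / (c**2 * r**3)) * (3.0 * np.dot(Jvec, rhat) * rhat - Jvec)
    return np.linalg.norm(omega)

# Doubling the distance reduces the precession by a factor of 2^3 = 8.
J_example = 1.0e33
print(omega_LT_weak_field(J_example, 1.0e7, np.pi / 3)
      / omega_LT_weak_field(J_example, 2.0e7, np.pi / 3))   # ~8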
In the strong gravity limit, the LT precession frequency were studied in various rotating compact objects, for instances, in Kerr black hole <cit.> and its generalizations <cit.>, in rotating traversable wormhole <cit.> and in rotating neutron star <cit.>. Those studies further show that the behavior of LT frequency in the strong gravity regime closely depends on the nature of the central rotating bodies. In particular, it was proposed in <cit.> that the spin precession of a test gyro can be used to distinguish Kerr naked singularities from black holes, which was then extended in modified theories of gravity <cit.>. Thus, one of the main aims of this paper is to investigate the spin precession frequency, including the LT frequency and geodetic frequency, of the test gyro in the hairy Kerr spacetime, and differentiate the hairy Kerr BH from hairy NS. We find a clear difference on the precession frequencies between hairy rotating BH and NS, and the effects of the hairy parameters are also systematically explored. Additionally, we also investigate the accretion disk physics in the hairy Kerr spacetime, which is realized by studying the orbital precession of a test timelike particle around the hairy Kerr BH and NS. The motivation stems from the followings. Accretion in Low-Mass X-ray Binaries (LMXBs) occurs in the strong gravity regime around compact bodies, in which the quasiperiodic oscillations (QPOs) phenomena involves and their frequencies in the Hz to kHz range have been detected <cit.>. Plenty of efforts have been made to explain the QPOs phenomena, see <cit.> for a review. It shows that the geodesic models of QPOs are related with three characteristic frequencies of the massive particles orbiting around the compact bodies, namely the orbital epicyclic frequency and the radial and vertical epicyclic frequencies. Therefore, since the QPOs provide a way to testify the strong gravity, these three frequencies could be used as a tool to study the crucial differences among the central compact objects <cit.> or test/constrain alternative theories of gravity <cit.>. In particular, more recently it was proposed in <cit.> that LT effect may have connection with QPOs phenomena and perhaps be used to explain the QPOs of accretion disks around rotating black holes. Thus, Along with using the spin precession frequency and LT precession frequency of the test gyro to indicate the differences between hairy Kerr BH and NS, we further study the three characteristic frequencies of massive particles orbiting around the corresponding central rotating objects. It is found that the LT precession and periastron precession of massive particle behave crucial differences between hairy Kerr BH and NS, as a support of the results from the test gyro in strong gravity regime of compact bodies. Our paper is organized as follows. In section <ref>, we briefly review the hairy Kerr spacetime derived from GD approach, and analyze the parameter spaces for the corresponding black hole and naked singularity. Then we derive the timelike geodesic equations in the spacetime. In section <ref>, we derive the spin precession frequency of a test gyro in the hairy Kerr spacetime. Then we compare the LT frequency as well as the spin frequency between the hairy Kerr BH and NS cases. 
In section <ref>, we study the accretion disk physics by analyzing inner-most stable circular orbit (ISCO), and LT precession and periastron precession in terms of the three characteristic frequencies, of a timelike particle around the hairy Kerr BH and NS, respectively. The last section is our conclusion and discussion. Throughout the paper, we use G_ N=c=ħ=1 and all quantities are rescaled to be dimensionless by the parameter M. § HAIRY KERR SPACETIME AND THE TIMELIKE GEODESIC EQUATIONS In this section, firstly, we will show a brief review on the idea of gravitational decoupling (GD) approach and the hairy spacetime constructed from GD approach by Ovalle <cit.>. Then we derive the timelike geodesic equations in the hairy Kerr spacetime. The no-hair theorem in classical general relativity states that black holes are only described by mass, electric charge and spin <cit.>. But it is possible that the interaction between black hole spacetime and matters brings in other charge, such that the black hole could carry hairs. The physical effect of these hairs can modify the spacetime of the background of black hole, namely hairy black holes may form. Recently, Ovalle et.al used the GD approach to obtain a spherically symmetric metric with hair <cit.>. The hairy black hole in this scenario has great generality because there is no certain matter fields in the GD approach, in which the corresponding Einstein equation is expressed by R_μν-1/2Rg_μν=8πT_μν. Here T_μν is the total energy momentum tensor written as T_μν=T_μν+ϑ_μν where T_μν and ϑ_μν are energy momentum tensor in GR and energy momentum tensor introduced by matter fields or others, respectively. ∇_μT_μν=0 is satisfied because of the Bianchi indentity. It is direct to prove that when ϑ_μν=0, the solution to (<ref>) degenerates into Schwarzschild metric. The hairy solution with proper treatment (strong energy condition) of ϑ_μν was constructed and the detailed algebra calculations are shown in <cit.>. Here we will not re-show their steps, but directly refer to the formula of the hairy metric ds^2=-f(r)dt^2+dr^2/f(r)+r^2(dθ^2+sin^2θ dϕ^2)   with   f(r)=1-2M/r+α e^-r/(M-l_0/2). In this solution, M is the black hole mass, α is the deformation parameter due to the introduction of surrounding matters and it describes the physics related with the strength of hairs, and l_0=α l with l a parameter with length dimension corresponds to primary hair which should satisfy l_0≤ 2M to guarantee the asymptotic flatness. The metric (<ref>) reproduces the Schwarzchild spacetime in the absence of the matters, i.e., α=0. Later, considering that astrophysical black holes in our universe usually have rotation, the authors of <cit.> induced the rotating counterpart of the static solution (<ref>), which is stationary and axisymmetric, and in Boyer-Lindquist coordinates it reads as ds^2= g_tt dt^2+g_rr dr^2+g_θθ dθ^2+g_ϕϕ dϕ^2+2g_tϕ dtdϕ = -(△-a^2sin^2θ/Σ)dt^2+sin^2θ(Σ+a^2sin^2θ(2-△-a^2sin^2θ/Σ)) dϕ^2+Σ/△dr^2+Σ dθ^2 -2asin^2θ(1-△-a^2sin^2θ/Σ)dtdϕ with Σ=r^2+a^2cos^2θ,   △=r^2+a^2-2Mr+α r^2 e^-r/(M-l_0/2). It is noticed that the above metric is also proved to satisfy the equations of motion in the GD approach <cit.>. The metric describes certain deformation of the Kerr solution due to the introduction of additional material sources (such as dark energy or dark matter). In the metric, a is the spin parameter and M is the black hole mass parameter. 
Similar to those in static case, α is the deviation parameter from GR and l_0 is the primary hair which is required to be l_0≤ 2M for asymptotic flatness. When α=0, the metric reduces to standard Kerr metric in GR, namely no surrounding matters. The metric (<ref>) could describe non-extremal hairy Kerr black hole, extremal hairy Kerr black hole and hairy naked singularity which correspond to two distinct, two equal and no real positive roots of g^rr=△=0 as well as Σ≠ 0, meaning they have two horizons, single horizon and no horizon, respectively. The parameter space (l_0-a) of the related geometries for some discrete α has been show in <cit.>. Here in FIG. <ref>, we show a 3D plot in the parameters space (l_0,a,α) for various geometries. In the figure, for the parameters in the white region, the spacetime with the metric (<ref>) is a non-extremal hairy Kerr black hole with two horizons, while it is a naked singularity without horizon for the parameters in the shaded region; and for the parameters on the orange surface, it is an extremal hairy Kerr black hole with single horizon. Besides, how the hairy parameters affect the static limit surface and the ergoregion of the hairy Kerr black hole has also been explored in <cit.>. Moreover, as we aforementioned, some theoretical and observational properties of the hairy Kerr black hole have been carried out, such as the thermodynamics <cit.>, quasinormal modes and (in)stability <cit.>, strong gravitational lensing and parameter constraint from Event Horizon Telescope observations <cit.>. In the following sections, we shall partly study the physical signatures in the strong field regime of those different central objects in the hairy Kerr spacetime (<ref>). We will analyze the inertial frame dragging effects on a test gyro and on the accretion physics respectively, which is expected to show the difference between black hole and naked singularity. To proceed, we should analyze the timelike geodesic equations in the hairy Kerr spacetime. Since for the spacetime, we have two Killing vector fields ∂_t and ∂_ϕ, so we can define the conserved energy ℰ and axial-component of the angular momentum L of which the expressions are ℰ=-g_ttṫ-g_tϕϕ̇,      L=g_tϕṫ +g_ϕϕϕ̇, where the dot represent the derivative with respect to the affine parameter λ. We employ the Hamilton-Jacobi method <cit.> and introduce the Hamilton-Jacobi equation for the particle with rest mass μ ℋ=-∂ S/∂λ=1/2g_μν∂ S/∂ x^μ∂ S/∂ x^ν=μ^2 where ℋ and S are the canonical Hamiltonian and Jacobi action, respectively, and x^μ denote the coordinates t,r,θ,ϕ in the metric (<ref>). Then after variation, we find that we can separate the Jacobi action as S=1/2μ^2λ-ℰt+Lϕ+S_r(r)+S_θ(θ) and define the constant 𝒞 via (dS_θ/dθ)^2+(L-aℰsin^2θ)^2/sin^2θ= -Δ(dS_r/dr)^2+((r^2+a^2)ℰ^2-aL)^2/Δ=𝒞 to separate the outcome into four first-order differential equations describing the geodesic motion Σṫ = a(L-aℰsin^2θ)+r^2+a^2/Δ((r^2+a^2)ℰ-aL), Σϕ̇ = L/sin^2ϑ-aℰ+a/Δ((r^2+a^2)ℰ-aL), Σṙ = Δ(dS_r/dr)=√((r^2+a^2)ℰ-aL)^2-Δ(Q+(L-aℰ)^2+μ^2r^2)), Σθ̇ = dS_θ/dθ=√(Q+cos^2θ(a^2(ℰ^2-μ^2)-L^2/sin^2θ)), where Q≡𝒞-(L-aℰ)^2 with 𝒞 defined in (<ref>) is the Carter constant. Thus, the geodesics equations in the hairy Kerr spacetime (<ref>) are separable as in Kerr case, and the modification from the hair appears in the metric function Δ. Now we can proceed to investigate the influence of hair on the orbital precession. 
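Since this classification only depends on the positive real roots of Δ = 0, it can be checked numerically for any parameter combination; the following sketch (the grid, cut-off radius, and root counting by sign changes are implementation choices, not from the paper) reproduces, for instance, that a = 0.9M with α = 1 and l_0 = 1M used below is a black hole, while a = 0.9M with α = 2 and l_0 = 1M is a naked singularity.

import numpy as np

def Delta(r, a, alpha, l0, M=1.0):
    # Metric function whose positive real roots give the horizons.
    return r**2 + a**2 - 2.0 * M * r + alpha * r**2 * np.exp(-r / (M - l0 / 2.0))

def classify(a, alpha, l0, M=1.0, rmax=5.0, n=200000):
    r = np.linspace(1e-6, rmax * M, n)
    d = Delta(r, a, alpha, l0, M)
    n_roots = np.count_nonzero(np.sign(d[:-1]) != np.sign(d[1:]))  # sign changes
    if n_roots >= 2:
        return "non-extremal hairy Kerr BH (two horizons)"
    if d.min() > 0.0:
        return "hairy naked singularity (no horizon)"
    return "extremal or near-extremal case: refine the grid"

print(classify(a=0.9, alpha=1.0, l0=1.0))   # hairy Kerr BH
print(classify(a=0.9, alpha=2.0, l0=1.0))   # hairy naked singularity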
For convenience, we will assume that the gyros/stars/particles we will consider are minimally coupled to the metric, and also there is no direct coupling between them and the surrounding matters, i.e., the modified gravity effect or print of hair would only be reflected in the metric functions. § SPIN PRECESSION OF TEST GYROSCOPE IN HAIRY KERR SPACETIME In this section, we will study the spin precession frequency of a test gyroscope attached to a stationary observer, dubbed stationary gyroscopes for brevity in the hairy Kerr spacetime, for whom the r and θ coordinates are remaining fixed with respect to infinity. Such a stationary observer has a four-velocity u_stationary^μ=u_stationary^t(1,0,0,Ω), where t is the time coordinate and Ω=dϕ/dt is the angular velocity of the observer. Let us consider stationary gyroscopes moving along Killing trajectory, whose spin undergoes Fermi-Walker transport along u=(-K^2)^-1/2K with K the timelike Killing vector field. Since the hairy Kerr spacetime (<ref>) has two Killing vectors which are the time translation Killing vector ∂_t and the azimuthal Killing vector ∂_ϕ, so a more general Killing vector is K=∂_t+Ω∂_ϕ. In this case, the general spin precession frequency of a test stationary gyroscope is the rescaled vorticity filed of the observer congruence, and its one-form is given by <cit.> Ω_p=1/2K^2*(K∧ dK) where K is the covector of K, the denotations * and ∧ represent the Hodge dual and wedge product, respectively. According to the pedagogical steps of <cit.>, the corresponding vector of the covector Ω_p is Ω⃗_p=ε_ckl/2√(-g) ( 1+2Ωg_0c/g_00+Ω ^2g_cc/g_00) [ ( g_0c,k-g_0c/g_00g_00,k) +Ω( g_cc,k-g_cc/g_00g_00,k) +Ω ^2( g_0c /g_00g_cc,k-g_cc/g_00g_0c,k) ]∂_l, where g is the determinant of the metric g_μν (μ,ν=0,1,2,3) and ε _ckl (c,k,l=1,2,3) is the Levi-Civita symbol. Then in the hairy Kerr spacetime (<ref>) which is stationary and axisymmetric spacetime, the above expression can be reduced as Ω⃗_p= (C_1r̂ + C_2 θ̂)/2√(-g)( 1+2Ωg_tϕ/g_tt+Ω ^2g_ϕϕ/g_tt) , with C_1= -√(g_r r)[(g_t ϕ, θ-g_t ϕ/g_t t g_t t, θ)+Ω(g_ϕϕ, θ-g_ϕϕ/g_t t g_t t, θ)+Ω^2(g_t ϕ/g_t t g_ϕϕ, θ-g_ϕϕ/g_t t g_t ϕ, θ)], C_2= √(g_θθ)[(g_tϕ,r-g_tϕ/g_ttg_tt,r) +Ω(g_ϕϕ,r-g_ϕϕ/g_ttg_tt,r) +Ω^2(g_tϕ/g_ttg_ϕϕ,r-g_ϕϕ/g_ttg_tϕ,r) ]. It is worth to emphasize that this expression is valid for observers both inside and outside of the ergosphere for a restricted range of Ω Ω _-(r,θ )<Ω (r,θ )<Ω _+(r,θ ), with Ω _±=(-g_tϕ±√(g^2_tϕ-g_ϕϕg_tt))/g_ϕϕ, which could ensure that the observer at fixed r and θ is timelike. Since the gyroscope has an angular velocity Ω, the general precession frequency (<ref>) stems from two aspects: spacetime rotation (LT precession) and curvature (geodetic precession). To disclose the properties of precession of gyroscope in hairy Kerr spacetime, we will study the influence of hairy parameters on the LT precession, geodetic precession and the general spin precession of the hairy Kerr black holes and naked singularities, respectively. §.§ Case with Ω=0: LT precession frequency When the angular velocity vanishes, saying Ω=0, it means that the gyroscope is attached to a static observer in a stationary spacetime. Such static observers do not change their location with respect to infinity only outside the ergoregion. It corresponds to a four-velocity u_static^μ=u_static^t(1,0,0,0) and the Killing vector K=∂_t. Thus, Ω⃗_p reduces to the LT precession frequency, Ω⃗_LT, of the gyroscope attached to a static observer outside the ergosphere <cit.>. 
Subsequently, the LT precession frequency in the hairy Kerr spacetime (<ref>) takes the form Ω⃗_LT= 1/2√(-g){ -√(g_rr)(g_tϕ,θ-g_tϕ/g_ttg_tt,θ) r̂+ √(g_θθ)(g_tϕ,r-g_tϕ/g_ttg_tt,r) θ̂}. Samples of the above LT-precession frequency vector for hairy Kerr black holes and naked singularities are plotted in Cartesian plane shown in FIG.<ref>. The upper row exhibits the vector for hairy Kerr BH, giving that the LT precession frequency is finite outside the ergoshpere and it diverges when the observer approach the ergosphere. In the bottom row for hairy NS, the LT precession frequency is regular in the entire spacetime region except the ring singularity r=0 and θ=π/2. The magnitude of LT precession frequency is given by Ω_LT= 1/2√(-g)√(g_rr(g_tϕ,θ-g_tϕ/g_ttg_tt,θ)^2 + g_θθ(g_tϕ,r-g_tϕ/g_ttg_tt,r)^2) which are shown in FIG. <ref> and FIG. <ref> for samples of parameters. In FIG. <ref> for hairy Kerr BH, the LT precession frequency diverges at the ergosphere. In the region far away from the static limit, the LT precession frequency hardly affects by various parameters as expected. When the observer goes near the static limit, the LT precession frequency increases as the angle and spin parameter increase, similar to that in the case of Kerr BH <cit.>; in addition, it becomes smaller for larger α or smaller l_0. In FIG. <ref> for hairy NS, the LT precession frequency is always finite and possesses a peak. Similar to that in Kerr case <cit.>, as the angle increases, both the peak and frequency near the ring singularity increase, while the spin parameter has the opposite influence. Moreover, the hairy parameter l_0 also enhances the peak but α suppresses it; while the hairy parameters have no imprint on the frequency at the ring singularity. Though the concrete expression of the LT precession frequency are complex, we can analytically reduce the behaviors at r→ 0 as Ω_LT|_r→ 0=^2(θ)tan(θ)/a^2. Therefore, it is obvious that LT precession frequency at r→ 0 is independent of the hairy parameters l_0 and α, and it is finite unless θ=π/2, which are consistent with what we see in FIG.<ref>. §.§ Case with a=0: Geodetic precession frequency The metric (<ref>) with a=0 describes hairy Schwarzschild spacetime. In this case, the precession frequency (<ref>) does not vanish, though the LT-precession frequency (<ref>) vanishes. The non-vanishing sector of the spin precession is known as the geodetic precession due to the curvature of the spacetime, and its formula is Ω⃗_p|_a=0= 1/2√(-g)( 1+Ω ^2g_ϕϕ/g_tt) [ -Ω√(g_rr)(g_ϕϕ,θ-g_ϕϕ/g_ttg_tt,θ)r̂+ Ω√(g_θθ)(g_ϕϕ,r-g_ϕϕ/g_ttg_tt,r) θ̂]. To proceed, we can choose the observer in the equatorial plane by setting θ=π/2 without loss of generality due to the spherical symmetry. Then we have Ω_p|_a=0=Ω√(g_θθ)(g_ϕϕ,r -g_ϕϕ/g_ttg_tt,r)/2√(-g)( 1+Ω ^2g_ϕϕ/g_tt) =(3l_0 M-6M^2+e^2r/l_0-2Mr^2α-l_0 r(1-e^2r/l_0-2Mα)+2M r(1+e^2r/l_0-2Mα))Ω/(l_0-2M)(2M+r(-1-e^2r/l_0-2Mα+r^2Ω^2)), meaning that the gyroscope moving in the hairy Schwarzschild spacetime will also precess. Therefore, in this case, the frequency for the gyro moving along a circular geodesic could be the Kepler frequency <cit.> Ω_Kep=√((M-α r^2e^-r/(M-l_0/2)/(2M-l_0))/r^3) , with which we can induce Ω_p|_a=0,Ω=Ω_Kep=Ω_Kep=√(M/r^3-α e^-r/(M-l_0/2)/(2M-l_0)r). The above expression gives the precession frequency in the Copernican frame, computed with respect to the proper time τ, which is related to the coordinate time via dτ=√(1-M/r^3+α(1+r/(2M-l_0))e^-r/(M-l_0/2)) dt. 
Therefore, in the coordinate basis, the geodetic precession frequency is Ω_geodetic=(1-√(1-M/r^3+α(1+r/2M-l_0)e^-r/M-l_0/2))√(M/r^3-α e^-r/(M-l_0/2)/(2M-l_0)r). It is obvious that when the deviation parameter, α, vanishes, the above expression reproduces the geodetic precession of the Schwarzschild black hole found in <cit.>. How the model parameters α and l_0 affect the geodetic precession of the hairy Schwarzschild BH is shown in FIG.<ref>. We see that the geodetic precession is suppressed by the hair comparing to that in GR, and its deviation from that in Schwarzschild black hole is more significant as the α increases. Moreover, the geodetic precession frequency is enhanced by larger l_0. §.§ Spin precession frequency in hairy Kerr black hole and naked singularity We move on to study the general spin precession frequency (<ref>) in hairy Kerr spacetime, and analyze its difference between hairy Kerr BH and NS. As we aforementioned, the angular velocity has a restricted range for the timelike stationary observers . So, we introduce a parameter 0<k<1 so that the angular velocity Ω can be rewritten in terms of Ω_± as Ω =kΩ _++(1-k)Ω _-=(2k-1)√(g^2_tϕ-g_ϕϕg_tt)-g_tϕ/g_ϕϕ. It is obvious that for k=1/2, this expression can be reduced as Ω|_k=1/2=-g_tϕ/g_ϕϕ, with which the observer is called the zero-angular-momentum observer (ZAMO) <cit.>. It was addressed in <cit.> that the precession frequency of the gyroscope attached to ZAMO in the Kerr black hole spacetime has different behavior from a gyroscope attached to other observers with other angular velocities, because these gyros has no rotation with respect to the local geometry and stationary observer. Substituting (<ref>) into (<ref>), we can obtain the general spin precession frequency in terms of the parameter k, thus the magnitude of spin precession frequency is written as Ω_p= √(C_1^2 + C_2^2)/2√(-g)( 1+2Ωg_tϕ/g_tt+Ω ^2g_ϕϕ/g_tt) , where C_1 and C_2 are defined in (<ref>). The magnitude of the general spin precession frequencies with different k for the hairy Kerr BH and NS are depicted in FIG. <ref>, where k=0.1, 0.5, 0.9 from the left to right column. In the first row, we show the results for black hole with a=0.9M, α=1 and l_0=1M. It is obvious that for hairy Kerr BH, the spin precession frequency of a gyroscope attached to any observer beyond ZAMO, always diverges whenever it approaches the horizons along any direction. However, Ω_p remains finite for ZAMO observer everywhere including at the horizon. In the second row, we show the results for hairy naked NS with a=0.9M, α=2 and l_0=1M. It shows that the spin precession frequency for NS is finite even as the observer approach r = 0 along any direction except from the direction θ=π/2. More results for different model parameters are shown in FIG.<ref> where we have set k=0.4 and θ=π/4. Again, it is convenient to verity that Ω_p|_r→ 0, θ=π/4=√(a^2(1-2k)^2+(17+4(k-1)k(17+8(k-1)k))M^2+12√(2)(1-2k(2+k(2k-3)))M^2/8a^4(k-1)^2k^2) which is independent of α and l_0. § LT PRECESSION AND PERIASTRON PRECESSION OF ACCRETION DISK PHYSICS AROUND HAIRY KERR SPACETIME In this section, we will study the LT precession frequency and the periastron precession frequency of accretion disk physics around different geometries in the hairy Kerr spacetime, as the related physics could help to test the strong gravity of the central compact bodies. We expect that this study could further differentiate the hairy Kerr black hole and naked singularity as the central objects. 
To this end, we should study the geodesic motion of a test massive particle around the hairy Kerr spacetime. To study the accretion disk physic via the orbits of test massive particle, we should fix the stable circular orbit around the central bodies and then perturb it. One important stable circular orbit is the inner-most stable circular orbit (ISCO) which is the last or smallest stable circular orbit of the particle. Therefore, we will first recall the procedure of deriving an orbit equation to introduce the bound orbits, circular orbits and ISCO in the hairy Kerr spacetime. Then we explore the LT precession frequency and the periastron precession frequency by perturbing the circular orbit in equatorial plane. For convenience, we shall focus on the orbits in the equatorial plane of the spacetime. §.§ Bound orbit and ISCO For a test particle moving along the timelike geodesic with the four velocity p^μ in hairy Kerr spacetime (<ref>), we have two conserved quantities for the massive particle defined in (<ref>). So for the orbit in the equatorial plane with θ=π/2, we can solve out p^t = g_ϕϕℰ+g_tϕL/g_tϕ^2-g_ttg_ϕϕ|_θ=π/2=r^3ℰ+a^2 ℰ(2 M+r-e^2 r/l_0-2 M r α)+a L(-2 M+e^2 r/l_0-2 M r α)/r(a^2+r(-2 M+r+e^2 r/l_0-2 M r α)), p^ϕ = -g_tϕℰ+g_ttL/g_tϕ^2-g_ttg_ϕϕ|_θ=π/2=aℰ(2 M-e^2 r/l_0-2 M r α)+ L(-2 M+r+e^2 r/l_0-2 M r α)/r(a^2+r(-2 M+r+e^2 r/l_0-2 M r α)). Inserting the above formulas into the normalization condition p^μ p_μ=-1 for timelike geodesic, we can obtain the radial velocity as p^r = ±√(-1-g_tt(p^t)^2-2g_tϕp^tp^ϕ-g_ϕϕ(p^ϕ)^2/g_rr)|_θ=π/2 = ±√((ℰ^2-1-e^2 r/l_0-2 Mα)+2M/r-L^2-a^2(ℰ^2-1)+e^2 r/l_0-2 Mα(L-aℰ)^2/r^2+2M(L-aℰ)^2/r^3) where ± corresponds to the radially outgoing and incoming cases, respectively. Consequently, the effective potential of the equatorial timelike geodesics is V_eff(r) = E-1/2(p^r)^2 = -M/r+1/2e^2 r/l_0-2 Mα+L^2-a^2(ℰ^2-1)+e^2 r/l_0-2 Mα(L-aℰ)^2/2r^2-M(L-aℰ)^2/r^3 where E=1/2(ℰ^2-1) is the total relativistic energy of the test particle. For the stable bound orbits, the total energy should not be smaller than the minimal effective potential which is determined by dV_eff/dr|_r_m=0,    d^2V_eff/dr^2|_r_m>0. It is difficult to give the analytical expression of r_m and the minimal effective potential V_min, so the we show a sample of V_eff in FIG. <ref> with given E=-1/1000 and various L. It is obvious that both r_m and V_min becomes larger for larger L, similar to that found in Kerr spacetime <cit.>. The hairy parameters affect V_min and its location. Using the bound orbit condition V_min≤ E<0 <cit.>, we will explicitly show the shape of the orbit. The geodesic could give us how u=1/r changes with respect to ϕ in the way du/dϕ=-u^2p^r/p^ϕ=1+e^2/(l_0-2M)uα-2Mu+a^2u^2/-L+e^2/(l_0-2M)u(aℰ-L)α+2(L-aℰ)MuX_u, with X_u = √(ℰ^2-1-e^2/(l_0-2M)uα+u(2M+Y_u)), Y_u = u(a^2(ℰ^2-1)-L^2-e^2/(l_0-2M)u(L-aℰ)^2α+2(L-aℰ)^2Mu), such that d^2u/dϕ^2=(4(aℰ-L)A_uC_u^-+4(C_u^-+a^2(l_0-2M)u^3)B_uX_u^2+Z_u)A_u/2(l_0-2M)B_u^3 with Z_u = B_u A_u(2e^2/(l_0-2M)uα+(l_0-2M)u^2(2M+Y_u)+u^2((l_0-2M)Y_u+2(aℰ-L)^2C_u^+)), A_u = 1+e^2/(l_0-2M)uα-2Mu+a^2u^2, B_u = L-e^2/(l_0-2M)uα(aℰ-L)+2(aℰ-L)Mu, C_u^± = ± e^2/(l_0-2M)uα+(l_0-2M)Mu^2. Then by numerically integrating the above orbital equation, we can figure out the shape of the bound orbit of a test particle freely falling in the hairy Kerr BH and NS spacetime. We show the bound orbits in Kerr hairy black hole (a=0.4M) and naked singularity (a=1.2M) in FIG. <ref>, where we fix α=1, l_0=1M, L=12 and E=-1/1000. 
In the figure, we also simultaneously plot the corresponding results for Kerr spacetime denoted by black dotted curves, namely with α=0. The differences between BH and NS are slight both in hairy Kerr and Kerr spacetimes. An interesting type of bound orbit is the circular or spherical orbit which satisfies V_eff|_r_c=0,    dV_eff/dr|_r_c=0, with r_c the radius of the circular orbit. The bound circular orbit could either be stable or unstable depending on the sign of d^2V_eff/dr^2. d^2V_eff/dr^2> 0 means that the orbit is stable while for d^2V_eff/dr^2<0 it is unstable. Thus, one can define the innermost stable circular orbit (ISCO), known as the smallest stable marginally bound circular orbit which satisfies the above conditions accompanying with the vanishing second order derivative <cit.>, meaning that the orbital radius, r_ISCO, is determined by V_eff|_r_ISCO=0,   dV_eff/dr|_r_ISCO=0,    d^2V_eff/dr^2|_r_ISCO= 0. The exact formula of r_ISCO for Kerr spacetime was solved out in <cit.>, and it was r_ISCO=6M as a=0 for Schwarzschild spacetime. Again, due to the existence of exponential term, the expression of the r_ISCO for hairy Kerr spacetime becomes difficult, so we numerically obtain r_ISCO of the hairy Kerr BH and NS. The values of r_ISCO as a function of spin parameter are depicted in FIG.<ref>, from which we see that r_ISCO is smaller for faster spinning hairy Kerr BH while it is larger for fast spinning hairy Kerr NS, which is similar to those occur in Kerr spacetime indicated by black curve in each plot. In addition, r_ISCO for hairy Kerr BH increases as l_0 increases, but decreases as α increases; while the dependence of r_ISCO for the hairy NS on the hairy parameters is closely determined by the spinning of the central object. §.§ LT precession and periastron precession We move on to study the LT precession frequency and the periastron precession of the circular orbit by perturbing the geodesic equation of the massive particle, which could disclose important features of accretion disk around the central bodies, such that testify the strong gravity of the hairy Kerr spacetime. To proceed, we have to model the three fundamental frequencies which are very important for accretion disk physics around the hairy Kerr spacetime. For a test massive particle moving along a circle in the equatorial plane of the metric (<ref>), the orbital angular frequency, Ω_ϕ, is Ω_ϕ=dϕ/dt=-g_tϕ,r+√((g_tϕ,r)^2-g_tt,r g_ϕϕ,r)/g_ϕϕ,r. If the particle is perturbed, then it oscillates with some characteristic epicyclic frequencies Ω_r and Ω_θ in the radial or vertical direction, respectively, which can be obtained by perturbing the geodesic equation as <cit.> Ω_r={1/2g_rr[X^2∂_r^2(g_ϕϕ/g_ttg_ϕϕ-g_tϕ^2)-2XY∂_r^2(g_tϕ/g_ttg_ϕϕ-g_tϕ^2)+Y^2∂_r^2(g_tt/g_ttg_ϕϕ-g_tϕ^2)]}^1/2, Ω_θ={1/2g_θθ[X^2∂_θ^2(g_ϕϕ/g_ttg_ϕϕ-g_tϕ^2)-2XY∂_θ^2(g_tϕ/g_ttg_ϕϕ-g_tϕ^2)+Y^2∂_θ^2(g_tt/g_ttg_ϕϕ-g_tϕ^2)]}^1/2 with X=g_tt+g_tϕΩ_ϕ,     Y=g_tϕ+g_ϕϕΩ_ϕ. Subsequently, we can extract the nodal precession frequency Ω_nod and periastron precession frequency Ω_per as <cit.> Ω_nod=Ω_ϕ-Ω_θ, Ω_per=Ω_ϕ-Ω_r which measure the precession of orbital plane and orbit of the accretion disk, respectively. The nodal precession frequency is also known as LT precession frequency. Focusing on the equatorial plane with θ=π/2, we plot Ω_nod and Ω_pre for hairy Kerr BH and NS with samples of parameters in FIG.<ref>, in which the red curves are for NS while the blue curves are for BH, and the vertical lines correspond to the corresponding location of ISCO. 
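Before turning to the trends visible in these figures, note that the orbital frequency entering Ω_nod and Ω_per can be evaluated numerically for the equatorial hairy Kerr metric from the expression for Ω_ϕ given above; in the sketch below the metric derivatives are approximated by central finite differences, and the step size and sanity check are implementation choices rather than part of the paper.

import numpy as np

def eq_metric(r, a, alpha, l0, M=1.0):
    # Equatorial (theta = pi/2) components g_tt, g_tphi, g_phiphi of the hairy Kerr metric.
    Delta = r**2 + a**2 - 2.0 * M * r + alpha * r**2 * np.exp(-r / (M - l0 / 2.0))
    g_tt = -(Delta - a**2) / r**2
    g_tp = -a * (1.0 - (Delta - a**2) / r**2)
    g_pp = r**2 + a**2 * (2.0 - (Delta - a**2) / r**2)
    return np.array([g_tt, g_tp, g_pp])

def omega_phi(r, a, alpha, l0, M=1.0, h=1e-6):
    # Omega_phi = (-g_tphi,r + sqrt(g_tphi,r^2 - g_tt,r g_phiphi,r)) / g_phiphi,r
    dg_tt, dg_tp, dg_pp = (eq_metric(r + h, a, alpha, l0, M)
                           - eq_metric(r - h, a, alpha, l0, M)) / (2.0 * h)
    return (-dg_tp + np.sqrt(dg_tp**2 - dg_tt * dg_pp)) / dg_pp

# Sanity check: for a = alpha = 0 the result reduces to the Keplerian sqrt(M/r^3).
print(omega_phi(10.0, a=0.0, alpha=0.0, l0=1.0))   # ~0.0316 = sqrt(1/1000)
print(omega_phi(10.0, a=0.9, alpha=1.0, l0=1.0))   # prograde hairy Kerr example

The radial and vertical epicyclic frequencies follow analogously from the second-derivative expressions given above, after which Ω_nod = Ω_ϕ − Ω_θ and Ω_per = Ω_ϕ − Ω_r.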
It is obvious that the LT precession frequency increases monotonously as the orbit approaches the ISCO of the hairy Kerr BH. While in the hairy NS spacetime, as the orbit moves towards the ISCO, Ω_nod first increases to certain peak and then decreases. In addition, Ω_nod at the ISCO of NS is always smaller than that at ISCO of BH, and it even can be negative indicating a reversion of the precession direction. The periastron precession frequency Ω_per in hairy Kerr BH and NS spacetimes has similar behavior that is increasing with the decrease of r. The value of Ω_per for hairy Kerr BH is always larger than that for NS, and their difference is more significant as the orbit becomes smaller. The effect of different model parameters on the LT precession frequency are shown in FIG.<ref> which indicates that the parameters indeed have significant imprint on the LT frequencies. § CONCLUSION AND DISCUSSION Gravitational wave and shadow are two important observations to test strong gravity physics, such that they allow us to probe the structure of black holes. Thus, they could also be used to reveal the scalar fields provided that they leave an imprint on BH. The well known no-hair theorem in Einstein's GR predicts that the rotating black holes are described by the Kerr metric. However, beyond GR with additional surrounding sources, the hairy rotating black holes should be described by a deformed Kerr metric, including extra hairy parameter. A hairy Kerr black hole was recently constructed using the gravitational decoupling approach, describing deformations of Kerr black hole due to including additional sources <cit.>, Observational investigations related with gravitational wave and shadow of the hairy Kerr black hole have been studied in <cit.>, which indicates that the hairy Kerr black hole could not be ruled out by the current observations. In this paper, we focused on the Lense-Thirring effect, another important observable effect, to differentiate the hairy Kerr black hole from naked singularity. Firstly, we analyzed the spin precession of a test gyro attached to a stationary observer in the hairy Kerr spacetime. When the observer is static with respect to a fixed star, i.e, the the angular velocity vanishes, we calculated the LT precession frequency. It was found that the LT precession frequency diverges as the observer approaches the ergosphere of the hairy Kerr BH along any direction, while it keeps finite in the whole region of the hairy NS, except at the ring singularity. Then, we parameterized the range of angular velocity of a stationary observer by k, and systematically studied the general spin precession frequency. The general spin precession frequency diverges as the observer approaches the horizon of the hairy Kerr BH, but it is finite when k=0.5 defining ZAMO observer because in this case the test gyro has no rotation with respect to the local geometry. For the hairy NS, it is always finite unless the observer reaches the ring along the direction θ=π/2. We also obtained the geodetic precession for observers in a hairy static black hole. The general spin frequency, LT frequency and geodetic frequency all decrease as the parameter α (l_0) increases (decreases) in hairy Kerr black hole. And α and l_0 have similar effect on the LT frequency in hairy NS as that in BH case, but their effects on general spin frequency in NS depend on the spinning. 
Then, we investigated the quasiperiodic oscillations (QPOs) phenomena as the accretion disk approaches the hairy Kerr BH and NS, which also show difference. To this end, we first analyzed the orbital precession of bound orbits, and ISCO of a test massive particle orbiting in the equatorial plane of the hairy Kerr BH and NS spacetime, respectively. Then we perturbed the stable circular orbit and computed the three fundamental frequencies related with QPOs phenomena. Accordingly, our results show that as the orbit moves towards the ISCO, the LT frequency increases monotonously in hairy Kerr BH, while it first increases to certain peak and then decreases in hair NS; the periastron frequency increases in both hairy Kerr BH and NS as the orbit approaches the corresponding ISCO. Moreover, the hairy parameters indeed have effects on the LT and periastron frequencies. In conclusion, we do theoretical evaluation on various precession frequencies caused by the frame-dragging effect of the central sources, which differentiate the hairy Kerr BH from NS spacetime. We expect that our theoretical studies could shed light on astrophysical observations on distinguishing hairy theory from GR, distinguishing BH from NS and even further constraining the hairy parameters. This work is partly supported by Natural Science Foundation of Jiangsu Province under Grant No.BK20211601, Fok Ying Tung Education Foundation under Grant No. 171006. apsrev
http://arxiv.org/abs/2306.11466v1
20230620114101
Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers
[ "Yongqi Dong", "Tobias Datema", "Vincent Wassenaar", "Joris van de Weg", "Cahit Tolga Kopar", "Harim Suleman" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.LG", "cs.SY", "eess.SY" ]
[ FRANCESCO NAVARRA Received ; accepted ======================== empty empty Developing and testing automated driving models in the real world might be challenging and even dangerous, while simulation can help with this, especially for challenging maneuvers. Deep reinforcement learning (DRL) has the potential to tackle complex decision-making and controlling tasks through learning and interacting with the environment, thus it is suitable for developing automated driving while not being explored in detail yet. This study carried out a comprehensive study by implementing, evaluating, and comparing the two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO), for training automated driving on the highway-env simulation platform. Effective and customized reward functions were developed and the implemented algorithms were evaluated in terms of onlane accuracy (how well the car drives on the road within the lane), efficiency (how fast the car drives), safety (how likely the car is to crash into obstacles), and comfort (how much the car make jerks, e.g., suddenly accelerates or brakes). Results show that the TRPO-based models with modified reward functions delivered the best performance in most cases. Furthermore, to train a uniform driving model that can tackle various driving maneuvers besides the specific ones, this study expanded the highway-env and developed an extra customized training environment, namely, ComplexRoads, integrating various driving maneuvers and multiple road scenarios together. Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance. Lastly, several functionalities were added to the highway-env to implement this work. The codes are open on GitHub at https://github.com/alaineman/drlcarsimhttps://github.com/alaineman/drlcarsim. § INTRODUCTION Artificial intelligence (AI) is making huge improvements in various fields, one of which is automated driving <cit.>. One typical type of AI that is well-suitable for developing automated driving models is Deep Reinforcement Learning (DRL) <cit.>. DRL makes use of the advantage of deep neural networks regarding feature extraction and the advantage of reinforcement learning regarding learning from interacting with the environment. DRL exhibits excellent performance in various decision-making tasks, e.g., GO <cit.> and playing video games <cit.> and it has been employed in various automated driving tasks <cit.>, e.g., lane-keeping, lane-changing, overtaking, ramp merging, and driving through intersections. For the lane-keeping task, Sallab et al. <cit.> developed DRL-based methods for delivering both discrete policies using Deep Q-network (DQN) and continuous policies using Deep Deterministic Actor-Critic Algorithm (DDAC) to follow the lane and maximize average velocity when driving on the curved race track on Open Racing Car Simulator (TORCS). Similarly, for the lane-changing task, Wang et al. <cit.> trained a DQN-based model to perform decision-making of lane-keeping, lane change to the left/right, and acceleration/deceleration, so that the trained agent can intelligently make a lane change under diverse and even unforeseen scenarios. Furthermore, Zhang et al. <cit.> proposed a bi-level lane-change behavior planning strategy using DRL-based lane-change decision-making model and negotiation-based right-of-way assignment model to deliver multi-agent lane-change maneuvers. For the overtaking task, Kaushik et al. 
<cit.> adopted Deep Deterministic Policy Gradients (DDPG) to learn overtaking maneuvers for an automated vehicle in the presence of multiple surrounding cars, in a simulated highway scenario. They verified their curriculum learning resembled approach can learn to smooth overtaking maneuvers, largely collision-free, and independent of the track and number of cars in the scene. For the ramp merging task, Wang and Chan <cit.> employed Long-Short Term Memory (LSTM) neural network to model the interactive environment conveying internal states containing historical driving information to a DQN which then generated Q-values for action selection regarding on-ramp merging. Additionally, for negotiating and driving through intersections, Isele et al. <cit.> explored the effectiveness of the DQN-based DRL method to handle the task of navigating through unsignaled intersections. While Guo and Ma <cit.> developed a real-time learning and control framework for signalized intersection management, which integrated both vehicle trajectory control and signal optimization using DDPG-based DRL learning directly from the dynamic interactions between vehicles, traffic signal control, and traffic environment in the mixed connected and automated vehicle (CAV) environment. It is observed that although many studies have utilized DRL for various driving tasks, most of them focus on only one specific driving maneuver, seldom do they evaluate the DRL model performance across different maneuvers, and neither do they explore the adaptability of DRL models trained on one specific environment but tested in other various maneuvers. This study tries to fill this research gap by implementing, evaluating, and comprehensively comparing the performance of two DRLs, i.e., DQN and TRPO, on various driving maneuvers. Customized effective reward functions were developed and the implemented DRLs were evaluated in terms of various aspects considering driving safety, efficiency, and comfort level. This study also constructed a new simulation environment, named `ComplexRoads' (shown in Fig <ref>), integrating various driving maneuvers and multiple road scenarios. The ComplexRoads served to train a uniform driving model that can tackle various driving tasks. And to verify that, the models trained only on ComplexRoads were tested and evaluated in the specific driving maneuvers. Intensive experimental results demonstrated the effectiveness of this customized training environment. To advance the learning capability for the developed DRL-based AI models, i.e. encouraging relational insight, besides designing ComplexRoads, several built-in functions of the package were also upgraded. Notable modifications are summarized as follows: the tacking of the `current' lane with respect to the car (training agent) was upgraded to take into account lane heading to eliminate confusing transitions when driving off-road. Furthermore, the distance between the car and its current lane was upgraded to a signed value to allow for orientation distinction. Similarly, the lane heading difference, LHD for short, was adjusted to also be a signed value. These improvements yield increased learning abilities for both on-road driving, returning to on-road driving when off-road, and a general sense of `awareness' given an arbitrary environment. § METHODOLOGY §.§ System Framework The general DRL learning cycle is an iterative learning process based on the agent's performance in the environment influenced by the agent's actions. 
In mathematical terms, automated driving can be modeled as a Markov Decision Process (MDP) <cit.>. MDP captures the features of sequential decision-making. The components of an MDP include environments, agents, actions, rewards, and states. In this study, the system framework which illustrates the corresponding MDP is depicted in Fig <ref>. The system generally consists of five main elements, i.e., environment, agent, action, state, and reward, which will be elaborated in detail in this section. §.§ DRL MDP Elements Environment: To simulate the MDP, this study adopted the highway-env platform <cit.>, which is a Python-based package that offers a variety of driving environments. As a widely used platform, ample research has been conducted using the highway-env, such as <cit.>. In the highway-env, six dedicated driving scenarios are available, i.e., Highway, Merge, Roundabout, Intersection, Racetrack, and Parking. Users can also customize environments by specifying the number of lanes, the size of the roundabout, and other parameters. In this study, all the driving scenarios, except for the Highway and Parking, are covered. For training and evaluating uniform driving model, this study designed a new simulation environment, named `ComplexRoads' (shown in Fig <ref>). `ComplexRoads' integrates two highway merging scenarios, two four-way intersections, two roundabouts, and several segments of multi-straight lanes. The DRL models trained only on ComplexRoads were tested and evaluated in the specific driving maneuvers originally available on highway-env. Agent: A kinematic bicycle model is used to represent the vehicle as the agent of MDP. Despite its simplicity, a kinematic bicycle model is able to represent actual vehicle dynamics <cit.>. Action: An action taken by the agent in the proposed MDP is an element from the contracted Action Space. In this study, the two dimensions of the Action Space 𝒜 are: acceleration (throttle) and steering angle of the front wheels. Depending on the DRL algorithm 𝒜 is either of the form [-π/2,π/2]×[-5,5] for algorithms requiring a continuous action space, or {δ_1,…,δ_n}×{α_1,… α_m} in the n × m discrete case. Hence, (δ,α) ∈𝒜, where steering is denoted by δ and acceleration is denoted by α. State: As illustrated in Fig <ref>, the state in the proposed MDP includes the ego AV’s state, e.g., location (x,y), velocity (v_x,v_x), and heading direction, together with the surrounding vehicles state and road conditions, which could be extracted through the input gray images. Reward: The customized Reward function is elaborated in detail in the following subsection C. §.§ Reward Function For training the models, this study used the reward function already present in the highway-env package (referred to as the baseline reward and is illustrated in the middle of Fig <ref>) and the own modified and upgraded reward function. The model performances were compared to demonstrate that the upgraded reward is better than the baseline reward. During the training, it is observed that in the early stages, the trained agent car would sometimes run off the road. To make the training more efficient handling the off-road driving and stimulate the agent returning to on-road, one specific contribution in this study is that to adjust the distance measure between the agent and the lane, together with construct the lane heading difference measure illustrated in the following paragraphs. Let c denote the ego car agent and ℒ the corresponding lane. A lane is a collection of lane points l∈ℒ. 
Now define l' as the lane point with the shortest Euclidean distance to the car, i.e., l' := argmin_l∈ℒ d(c,l), and define the orientation ω of the car c with respect to a lane point l as ω(c,l) = 1 if the car is located left of l (more precisely, left of the tangent line of the lane segment containing l), and -1 otherwise. This study then defines the signed distance between the ego car and the lane as the shortest distance from the ego car c to any point on lane ℒ, i.e., d(c,ℒ) = ω(c,l') d(c,l'). Car heading and lane point heading are denoted by c_φ and l_φ respectively; both values lie in the angle range (-π, π]. The lane heading difference (LHD) is then defined as LHD = l_φ - c_φ + 2π if l_φ - c_φ < -π; LHD = l_φ - c_φ - 2π if l_φ - c_φ > π; and LHD = l_φ - c_φ otherwise. An important remark on this setup is that if sgn(LHD) · sgn(d(c,ℒ)) < 0 the car is heading towards the lane, while if sgn(LHD) · sgn(d(c,ℒ)) > 0 the car is deviating (further) from the lane. Finally, denoting the velocity of the ego car c by c_v, the reward function R:ℝ^3 →ℝ with regard to state S is defined as R_S(c,ℒ) = cos(|LHD|) · c_v / (20 · max(1, |d(c,ℒ)|)) if c_v ≥ 0, and 0 otherwise, where LHD is the lane heading difference between the ego car and the closest lane point. However, if the car crashes during the simulation, the reward is automatically set to -10, regardless of the state. The reward function, as defined in Equation <ref>, rewards the car for its `effective' speed on the road, i.e., the cosine of the angular difference between the direction the car is driving in and the direction in which the road goes, multiplied by the speed of the car. With this design, both an increase in driving speed and driving in line with the road heading result in high rewards. Moreover, the value is divided by the lane offset to punish the car for driving off-road, and also divided by 20 to scale the reward so that it remains close to 1 under optimal circumstances. §.§ DRL Algorithms Regarding DRL algorithms, TRPO <cit.> and DQN <cit.> were customized and implemented. Details of the DRL algorithms, including hyperparameter settings, are given in the supplementary material at https://drive.google.com/drive/folders/1IrXCxATJucIpF3RUARRVpJfQUITcJ3z8https://shorturl.at/oLP57, while Section <ref> presents the results comparing the trained models' performances. §.§ Evaluation of the Models To evaluate and compare model performance, one needs a set of indicators and metrics. This study therefore implemented a performance logger that measures and stores various indicators when testing a model in a given environment. These indicators are measured over a set number of runs, and the logger then reports the average values over all runs. The measured indicators are: 1) Speed, 2) Peak jerk, 3) Total jerk, 4) Total distance, 5) Total steering, 6) Running time, 7) Lane time (the fraction of time the car spends within the road), and 8) Rate of collision. The jerk is defined as the difference between the current and the previous action of a vehicle, covering both the steering angle and the acceleration. The magnitude of the total jerk reflects the degree to which the vehicle's motion changes abruptly and frequently; a higher total jerk implies less comfortable driving.
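Returning briefly to the reward design of the previous subsection, the signed lane distance, LHD, and reward can be written out as a short function. This is a simplified sketch added for illustration: the array-based lane representation and the argument names are assumptions for readability, not highway-env's actual interfaces or the study's released code.

```python
import numpy as np

def lane_heading_difference(lane_heading: float, car_heading: float) -> float:
    """Signed LHD wrapped into (-pi, pi], as defined above."""
    diff = lane_heading - car_heading
    if diff < -np.pi:
        diff += 2.0 * np.pi
    elif diff > np.pi:
        diff -= 2.0 * np.pi
    return diff

def reward(car_pos, car_heading, car_speed, crashed, lane_points, lane_headings):
    """Sketch of R_S(c, L).  `lane_points` is an (N, 2) array of lane-point
    coordinates and `lane_headings` the matching headings; both stand in for
    whatever lane representation the simulator actually exposes."""
    if crashed:
        return -10.0
    if car_speed < 0.0:
        return 0.0
    # l' = argmin_l d(c, l): the closest lane point to the car.
    dists = np.linalg.norm(lane_points - car_pos, axis=1)
    i = int(np.argmin(dists))
    # Signed distance: +1 if the car lies left of the lane tangent, else -1.
    tangent = np.array([np.cos(lane_headings[i]), np.sin(lane_headings[i])])
    offset = car_pos - lane_points[i]
    side = 1.0 if np.cross(tangent, offset) > 0.0 else -1.0
    signed_dist = side * dists[i]
    lhd = lane_heading_difference(lane_headings[i], car_heading)
    # Effective on-road speed, scaled by 20 and penalized by the lane offset.
    return np.cos(abs(lhd)) * car_speed / (20.0 * max(1.0, abs(signed_dist)))
```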
The jerk components are defined by the equations in <ref>: J_acceleration = (a_t-1 - a_t)/(a_max - a_min), J_steering = (w_t - w_t-1)/(w_max - w_min), and J_total = (J_acceleration + J_steering)/2. The total steering is defined as the total amount of steering the car performs over the course of an evaluation, measured in angle. A higher amount of steering can, to a certain extent, imply less efficient driving with unnecessary steering. The onlane rate is defined as the amount of time the evaluated car spends driving on the lane, divided by the total amount of time the car spends driving. The collision rate is defined as the total number of collisions the car makes, divided by the total number of evaluation trials. § EXPERIMENTS This study conducted intensive experiments to train and evaluate DRL models using the TRPO and DQN algorithms on four environments provided by highway-env, as well as on the newly designed ComplexRoads. The models were trained using both the original standard reward function provided by highway-env (which served as the baseline) and the customized reward function. The hyper-parameters used for training can be found in the appendix at https://drive.google.com/drive/folders/1IrXCxATJucIpF3RUARRVpJfQUITcJ3z8https://shorturl.at/oLP57. The models were trained on the supercomputer Delft Blue <cit.>. For every environment, ten models were trained and saved after 10,000 and 100,000 iterations. After training, each model's performance was tested over 10 runs. During performance testing, constraints such as a maximum running time, a minimum speed, and whether a crash had occurred were applied. To obtain an overall assessment, the average over these 10 testing runs was calculated. To assess how well the models serve as a uniform driving model, they were not only tested in the environments they were trained in, but also cross-evaluated in the other environments. Through this cross-evaluation, the effectiveness of the newly designed ComplexRoads environment can be verified. The experimental results are summarized and discussed in Section <ref>. § RESULTS AND DISCUSSION Tables <ref>, <ref>, <ref>, <ref>, and <ref> present the average performances of the DRL models trained on the five environments and evaluated on the same respective environment. For every model variant in a specific environment, this study trained it 10 times and evaluated it 10 times to obtain the average performance indicators. This paper writes "1*" when a number is rounded to 1 but not quite equal to 1. The letters "B" and "M" indicate whether the baseline or the modified reward function was used in training the model. Meanwhile, Tables <ref>, <ref>, <ref>, <ref>, <ref>, and <ref> present the average performances of the implemented DRL models trained in their own environment but evaluated in the other environments, which assesses how adaptive these models are. To save space, these tables leave out some of the `less important' indicators, which can be found in the appendix at https://drive.google.com/drive/folders/1IrXCxATJucIpF3RUARRVpJfQUITcJ3z8https://shorturl.at/oLP57. Note that for the Merge environment and the self-designed ComplexRoads, no baseline reward functions are available, so only the models trained with the modified and upgraded reward (indicated with "-M") were evaluated. Also, for cross-environment evaluation, only models with the modified reward were evaluated.
While there might be various ways to argue that one model outperforms another, it is important to prioritize safety as the main concern. Therefore, the measured values that this study considers most important are the onlane rate and the collision rate, which reflect driving safety. Other values, such as speed or jerk, are of less importance but can be compared in cases where the onlane rate and collision rate are rather similar. From Tables <ref>, <ref>, <ref>, <ref> and <ref>, one can see that in most cases the DQN with the modified reward function (DQN-M) and the TRPO with the modified reward function (TRPO-M) outperform the DQN and TRPO models with the baseline reward function, especially with regard to the onlane rate. Between DQN and TRPO, the models trained with TRPO tend to perform somewhat better in most cases. Furthermore, looking at Tables <ref>, <ref>, <ref>, <ref>, <ref> and <ref>, it is observed that the models trained on ComplexRoads indeed tend to perform better than the other models in the cross-evaluation, especially in keeping a high onlane rate. This is due to the various traffic situations represented in the ComplexRoads environment, as well as the fact that the starting location of the car during training on ComplexRoads was randomized, so that the car experiences diverse driving situations. This prevents the model from merely `memorizing' the environment and instead encourages it to master the maneuvers through interaction with the randomly generated environment. Due to the size of ComplexRoads, training on it was very computationally intensive, especially with a large number of simulated surrounding cars. This study therefore opted to train the model with relatively few surrounding cars, meaning that the model does not get to interact with other cars as often as in the other environments. This resulted in a higher collision rate when evaluated in the other environments with more surrounding cars. When computational resources are abundant, this minor issue can be eliminated by adding more surrounding cars to the ComplexRoads environment. All in all, it is verified that the designed ComplexRoads environment indeed contributes to the training of a more flexible and adaptive driving model. All the testing scenarios and results are demonstrated further in the appendix, with demo videos also provided at https://drive.google.com/drive/folders/1IrXCxATJucIpF3RUARRVpJfQUITcJ3z8https://shorturl.at/oLP57. § CONCLUSIONS This study first summarized the utilization of DRL for specific automated driving tasks, e.g., lane-keeping, lane-changing, overtaking, and ramp merging, then customized and implemented two widely used DRL algorithms, i.e., DQN and TRPO, to tackle various driving maneuvers, and carried out a comprehensive evaluation and comparison of model performance. Building on the baseline reward provided by highway-env, a modified and upgraded reward function was designed for training the DRL models. Furthermore, a new integrated training environment, ComplexRoads, was constructed, and several built-in functions of the package were upgraded. Through extensive experiments, it is verified that the models trained using the modified reward generally outperformed those trained with the original baseline reward, and that the newly constructed ComplexRoads is effective for training a uniform driving model that can tackle various driving tasks rather than one specific maneuver.
As a preliminary study, the findings in this paper provide meaningful and instructive insights for future work on developing automated driving with DRL and simulation. One feature that was implemented in this study but removed due to time constraints and a lack of computational resources was training the cars to reach a specific destination in the designed ComplexRoads environment, which requires more training interaction and perhaps the implementation of path finding and optimization. This could be an interesting direction for further research.
http://arxiv.org/abs/2306.04125v1
20230607034450
Multimodal Fusion Interactions: A Study of Human and Automatic Quantification
[ "Paul Pu Liang", "Yun Cheng", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ]
cs.LG
[ "cs.LG", "cs.CL", "cs.HC" ]
[email protected], [email protected] Carnegie Mellon University Pittsburgh PA USA Multimodal Fusion Interactions: A Study of Human and Automatic Quantification Multimodal fusion of multiple heterogeneous and interconnected signals is a fundamental challenge in almost all multimodal problems and applications. In order to perform multimodal fusion, we need to understand the types of interactions that modalities can exhibit: how each modality individually provides information useful for a task and how this information changes in the presence of other modalities. In this paper, we perform a comparative study of how human annotators can be leveraged to annotate two categorizations of multimodal interactions: (1) partial labels, where different randomly assigned annotators annotate the label given the first, second, and both modalities, and (2) counterfactual labels, where the same annotator is tasked to annotate the label given the first modality before being given the second modality and asked to explicitly reason about how their answer changes. We then propose an alternative taxonomy based on (3) information decomposition, where annotators annotate the degrees of redundancy: the extent to which modalities individually and together give the same predictions on the task, uniqueness: the extent to which one modality enables a task prediction that the other does not, and synergy: the extent to which only both modalities enable one to make a prediction about the task that one would not otherwise make using either modality individually. Through extensive experiments and annotations, we highlight several opportunities and limitations of each approach and propose a method to automatically convert annotations of partial and counterfactual labels to information decomposition, yielding an accurate and efficient method for quantifying interactions in multimodal datasets. Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency ========================================================================== § INTRODUCTION A core challenge in multimodal machine learning lies in understanding the ways that different modalities interact with each other in combination for a given prediction task <cit.>. We define the study of multimodal fusion interactions as the categorization and measurement of how each modality individually provides information useful for a task and how this information changes in the presence of other modalities <cit.>. Learning complex interactions is often quoted as motivation for many successful multimodal modeling paradigms in the affective computing and multimodal interaction communities, such as contrastive learning <cit.>, modality-specific representations <cit.>, and higher-order interactions <cit.>. Despite progress in new models that seem to better capture various interactions from increasingly complex real-world multimodal datasets <cit.>, formally quantifying and measuring the interactions that are necessary to solve a multimodal task remains a fundamental research question <cit.>. In this paper, we perform a comparative study of how reliably human annotators can be leveraged to annotate different interactions in multimodal datasets (see Figure <ref>).
We first start with a conventional method which we term partial labels, where different randomly assigned annotators annotate the task given only the first modality (y_1), only the second modality (y_2), and both modalities (y) <cit.>. Based on partial labels, we extend this idea to counterfactual labels, where the same annotator is tasked to annotate the label given the first modality (y_1), before giving them the second modality and asking them to explicitly reason about how their answer changes (y_1+2), and vice versa (y_2 and y_2+1) <cit.>. Additionally, we propose an alternative taxonomy of multimodal interactions grounded in information theory <cit.>, which we call information decomposition: decomposing the total information two modalities provide about a task into redundancy, the extent to which individual modalities and both in combination all give similar predictions on the task, uniqueness, the extent to which the prediction depends only on one of the modalities and not the other, and synergy, the extent to which task prediction changes with both modalities as compared to using either modality individually <cit.>. Information decomposition has an established history in understanding feature interactions in neuroscience <cit.>, physics <cit.>, and biology <cit.> since it exhibits desirable properties including disentangling redundancy and synergy, normalization with respect to the total information two features provide towards a task, and established methods for automatic computation. However, it remains a challenge to scale information decomposition to real-world high-dimensional and continuous modalities <cit.>, which has hindered its application in affective computing and multimodal interaction where video, audio, and text modalities are prevalent. To quantify information decomposition for real-world multimodal tasks, we propose a new human annotation scheme where annotators provide estimates of redundancy, uniqueness, and synergy when presented with both modalities and the label. We find that this method works surprisingly well with strong annotator agreement and self-reported annotator confidence. Finally, given the promises of information decomposition <cit.>, we additionally propose a scheme to automatically convert annotations of partial and counterfactual labels to information decomposition using an information-theoretic method <cit.>, which makes it compatible with existing methods of annotating interactions <cit.>. Through comprehensive experiments on multimodal analysis of sentiment, humor, sarcasm, and question-answering, we compare these methods of quantifying multimodal interactions and summarize our key findings. § RELATED WORK Multimodal fusion interactions have been studied based on the dimensions of response, information, and mechanics <cit.>. We define and highlight representative works in each category: Interaction response studies how the inferred response changes when two or more modalities are fused <cit.> (see Figure <ref>). For example, two modalities create a redundant response if the fused response is the same as responses from either modality or enhanced if the fused response displays higher confidence. Non-redundant interactions such as modulation or emergence can also happen <cit.>. Many of these terms actually started from research in human and animal communicative modalities <cit.> and multimedia <cit.>. 
Inspired by these ideas, a common measure of interaction response redundancy is defined as the distance between prediction logits using either feature <cit.>. This definition is also commonly used in minimum-redundancy feature selection <cit.>. Research in multimedia has also categorized interactions into divergent, parallel, and additive <cit.>. Finally, human annotations have been leveraged to investigate the effect of the cognitive load on multimodal redundancy <cit.>. This paper primarily focuses on interaction response since it is the one easiest understood and annotated by humans, but coming up with formal definitions and measures of other interactions are critical directions for future work. Interaction information investigates the nature of information overlap between multiple modalities. The information important for a task can be shared in both modalities, unique to one modality, or emerge only when both are present <cit.>. Information-theoretic measures naturally provide a mathematical formalism in the study of interaction information, for example through the mutual information between two variables <cit.>. In the presence of a label, several extensions of two-way mutual information to three variables, such as through total correlation <cit.> interaction information <cit.>, or partial information decomposition <cit.> have been proposed. From a semantic perspective, research in multimedia has studied various relationships that can exist between images and text <cit.>, which has also inspired work in representing shared information through contrastive learning <cit.>. Finally, the study of interaction mechanics aims to define useful operators to integrate modalities to capture various interactions. For example, interaction mechanics can be expressed as additive <cit.>, multiplicative <cit.>, tensor <cit.> and non-linear <cit.> forms, as well as from a semantic perspective where two elements interact through logical, causal, or temporal operations <cit.>. By making assumptions on a specific functional form of interactions (e.g., additive vs non-additive), prior work has been able to quantify their presence or absence <cit.> on real-world multimodal datasets and models through various gradient-based visualizations <cit.> and projections into simpler models <cit.>. § We will first review the classic approach of directly estimating the partial labels via random assignment, before discussing an alternative approach through counterfactual labels. Finally, we motivate both automatic and human annotation of information decomposition into redundant, unique, and synergistic interactions. §.§ The standard approach involves tasking randomly assigned annotators to label their prediction of the label when presented with only the first modality (y_1), the label when presented with only the second modality (y_2), and label when presented with both modalities (y) <cit.>. Annotators are typically randomly assigned to each modality so that their labeling process is not influenced by observing other modalities, resulting in completely independently annotated partial labels. In this setup, the instructions given are: * y_1: Show modality 1, and ask the annotator to predict the label. * y_2: To another annotator, show only modality 2, and ask the annotator to predict the label. * y: To yet another annotator, show both modalities, and ask the annotator to predict the label. 
After reporting each partial label, the annotators are also asked to report confidence on a 0-5 scale (0: no confidence, 5: high confidence). We show a screenshot of a sample user interface in Figure <ref> (top) and provide more annotation details in Appendix <ref>. §.§ As another alternative to random assignment, we draw insight from counterfactual estimation where the same annotator is tasked to annotate the label given a single modality, before giving them the second modality and asking them to explicitly reason about how their answer changes. The instructions provided to the first annotator are: * y_1: Show modality 1, and ask them to predict the label. * y_1+2: Now show both modalities and ask if their predicted label explicitly changes after seeing both modalities. To a separate annotator, we provide the following instructions: * y_2: Show modality 2, and ask them to predict the label. * y_2+1: Now show both modalities and ask if their predicted label explicitly changes after seeing both modalities. The annotators also report confidence on a 0-5 scale (see sample user interface in Figure <ref> (middle) and exact annotation procedures in Appendix <ref>). While the first method by random assignment estimates the average effect of each modality on the label as is commonly done in randomized control trials <cit.> (since estimates of partial labels for each modality are done separately in expectation over all users), this counterfactual approach measures the actual causal effect of seeing the second modality towards the label for the same user <cit.>. §.§ Finally, we propose an alternative categorization of multimodal interactions based on information theory, which we call information decomposition: decomposing the total information two modalities provide about a task into redundancy, the extent to which individual modalities and both in combination all give similar predictions on the task, uniqueness, the extent to which the prediction depends only on one of the modalities and not the other, or synergy, the extent to which task prediction changes with both modalities as compared to using either modality individually <cit.>. This view of interactions is useful since it has a formal grounding in information theory <cit.> and information decomposition <cit.>. Information theory formalizes the amount of information that a variable (X) provides about another (Y), and is quantified by Shannon's mutual information (MI): I(X_1; X_2) = ∫ p(x_1,x_2) logp(x_1,x_2)/p(x_1) p(x_2) d x_1 d x_2, which measures the amount of information (in bits) obtained about X_1 by observing X_2. By extension, conditional MI is the expected value of the MI of two random variables (e.g., X_1 and X_2) given the value of a third (e.g., Y): I(X_1;X_2|Y) = ∫ p(x_1,x_2|y) logp(x_1,x_2|y)/p(x_1|y) p(x_2|y) d x_1 d x_2 dy. §.§.§ Multivariate information theory While information theory works well in two variables, the extension of information theory to measure redundancy and other interactions requires its extension to three or more variables, which remains an open challenge. The most natural extension, through interaction information <cit.>, has often been indirectly used as a measure of redundancy in co-training <cit.> and multi-view learning <cit.>. It is defined for three variables as the difference in mutual information and conditional mutual information: I(X_1; X_2; Y) = I(X_1; X_2) - I(X_1; X_2|Y), and can be defined inductively for more than three variables. 
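To make these quantities concrete, the following sketch computes I(X_1;X_2), I(X_1;X_2|Y), and the three-way interaction information from a small discrete joint probability table. It is an illustration added here for clarity, written in plain NumPy with its own variable names; it is not code from the paper.

```python
import numpy as np

def mutual_information(pxy: np.ndarray) -> float:
    """I(X1; X2) in bits from a joint probability table p(x1, x2)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

def conditional_mutual_information(pxyz: np.ndarray) -> float:
    """I(X1; X2 | Y) in bits from a joint table p(x1, x2, y)."""
    total = 0.0
    for y in range(pxyz.shape[2]):
        p_y = pxyz[:, :, y].sum()
        if p_y > 0:
            total += p_y * mutual_information(pxyz[:, :, y] / p_y)
    return total

def interaction_information(pxyz: np.ndarray) -> float:
    """I(X1; X2; Y) = I(X1; X2) - I(X1; X2 | Y); note it may be negative."""
    return mutual_information(pxyz.sum(axis=2)) - conditional_mutual_information(pxyz)

# Toy XOR-style example: Y = X1 XOR X2 with uniform inputs is purely synergistic,
# and the interaction information comes out negative.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25
print(interaction_information(p))   # about -1.0 bit
```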
However, interaction information has some significant shortcomings: I(X_1; X_2; Y) can be both positive and negative, leading to considerable difficulty in its interpretation when redundancy as an information quantity is negative <cit.>. Furthermore, the total information is only equal to redundancy and uniqueness (I(X_1,X_2; Y) = I(X_1; X_2; Y) + I(X_1; Y|X_2) + I(X_2; Y|X_1)), and there is no measurement of synergy in this framework. §.§.§ Information decomposition Partial information decomposition (PID) <cit.> was designed to solve some of the issues with multivariate information theory. PID does so by positing a decomposition of the total information 2 variables X_1,X_2 provide about a task Y into 4 quantities: redundancy R between X_1 and X_2, unique information U_1 in X_1 and U_2 in X_2, and synergy S. Information decomposition should satisfy certain properties, as shown in Figure <ref> and in the following equations: R + U_1 = I(X_1; Y), R + U_2 = I(X_2; Y), U_1 + S = I(X_1; Y | X_2), U_2 + S = I(X_2; Y | X_1), R - S = I(X_1; X_2; Y). PID is appealing since it resolves the issue of negative redundancy I(X_1; X_2; Y) in conventional information theory by separating R and S such that R-S = I(X_1; X_2; Y), identifying that prior redundancy measures confound actual redundancy and synergy. Furthermore, if I(X_1; X_2; Y) = 0 then existing frameworks are unable to distinguish between positive values of true R and S canceling each other out, while PID separates and can estimate non-zero (but equal) values of both R and S. §.§.§ Annotating information decomposition While information decomposition has a formal definition and exhibits nice properties, it remains a challenge to scale information decomposition to real-world high-dimensional and continuous modalities <cit.> which has hindered its application to real-world multimodal tasks. To quantify information decomposition for real-world tasks, we investigate whether human judgment can be used as a reliable estimator. We propose a new annotation scheme where we show both modalities and the label and ask each annotator to annotate the degree of redundancy, uniqueness, and synergy on a scale of 0-5 using the following definitions inspired by the formal definitions in information decomposition: * R: The extent to which using the modalities individually and together gives the same predictions on the task, * U_1: The extent to which x_1 enables you to make a prediction about the task that you would not if using x_2, * U_2: The extent to which x_2 enables you to make a prediction about the task that you would not if using x_1, * S: The extent to which only both modalities enable you to make a prediction about the task that you would not otherwise make using either modality individually, alongside their confidence in their answers on a scale of 0-5. We show a sample user interface for the annotations in Figure <ref> (bottom) and include exact annotation procedures in Appendix <ref>. § CONVERTING PARTIAL LABELS TO INFORMATION DECOMPOSITION Finally, we propose a method to automatically convert partial labels, which are present in many existing multimodal datasets <cit.>, into information decomposition interaction values. 
Intuitively, the partial and counterfactual labels are related to information decomposition response in the following ways: * R is high when y_1, y_2, and y are all close to each other, * U_1 is high when y_1 is close to y but y_2 is far from y, * U_2 is high when y_2 is close to y but y_1 is far from y, * S is high when y_1, y_2 are both far from y. While these partial labels are intuitively related to information decomposition, coming up with a concrete equation to convert y_1, y_2, and y to actual interaction values is surprisingly difficult and involves many design decisions. For example, what distance measure do we use to measure closeness in label space? Furthermore, computing R depends on 3 distances, U_1 and U_2 depend on 2 distances but inversely on 1 distance, and S depends on 2 distances. It is difficult to obtain interaction values that lie on comparable scales that can be compared reliably. §.§ Automatic conversion Our key insight is that the aforementioned issues are exactly what inspired much of the research in information theory and decomposition in the first place: in information theory, the lack of a distance measure is solved by working with probability distributions where information-theoretic distances like KL-divergence are well-defined and standardized, the issues of normalization are solved using a standardized unit of measure (bits in log-base 2), and the issues of incomparable scales are solved by the consistency equations (<ref>)-(<ref>) relating PID values to each other and to the total task-relevant information in both modalities. Armed with these formalisms of information theory and information decomposition, we propose a method to convert human-annotated partial predictions into redundancy, uniqueness, and synergy (see Figure <ref> for an overview). To do so, we treat the dataset of partial predictions 𝒟 = {(y_1,y_2,y)_i=1^n} as a joint distribution with y_1 and y_2 as `multimodal inputs' sampled over the label support 𝒴, and the target label y as the `output' also over 𝒴. Following this, we adopt a precise definition of redundancy, uniqueness, and synergy used by <cit.>, where the interactions are defined as the solution to the optimization problems: R = max_q ∈Δ_p I_q(Y_1; Y_2; Y), U_1 = min_q ∈Δ_p I_q(Y_1; Y | Y_2), U_2 = min_q ∈Δ_p I_q(Y_2; Y| Y_1), S = I_p(Y_1,Y_2; Y) - min_q ∈Δ_p I_q(Y_1,Y_2; Y), where Δ_p = { q ∈Δ: q(y_i,y)=p(y_i,y) ∀ y_i,y∈𝒴, i ∈ [2] } and the notation I_p(·) and I_q(·) disambiguates MI under joint distributions p and q respectively. The key difference in this definition of PID lies in optimizing q ∈Δ_p to satisfy the marginals q(y_i,y)=p(y_i,y), but relaxing the coupling between y_1 and y_2: q(y_1,y_2) need not be equal to p(y_1,y_2). The intuition behind this is that one should be able to infer redundancy and uniqueness given only access to separate marginals p(y_1,y) and p(y_2,y), and therefore they should only depend on q ∈Δ_p which match these marginals. Synergy, however, requires knowing the coupling p(y_1,y_2), and this is reflected in equation (<ref>) depending on the full p distribution. §.§ These optimization problems can be solved accurately and efficiently using convex programming. Importantly, the q^* that solves (<ref>)-(<ref>) can be rewritten as the solution to the max-entropy optimization problem: q^* = argmax_q ∈Δ_p H_q(Y | Y_1, Y_2) <cit.>.
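For a small discrete label space, this max-entropy program can be posed directly in an off-the-shelf convex solver; the next paragraph walks through the same construction in prose. The sketch below is an illustration of that idea, with a tensor layout, variable names, and marginal-estimation step chosen for readability rather than taken from the authors' released implementation.

```python
import numpy as np
import cvxpy as cp

def max_entropy_coupling(P: np.ndarray) -> np.ndarray:
    """Sketch: solve q* = argmax_{q in Delta_p} H_q(Y | Y1, Y2).

    P is an empirical joint distribution over (y1, y2, y) of shape (n1, n2, ny),
    e.g. estimated by counting annotated partial labels.  Only the marginals
    p(y1, y) and p(y2, y) are enforced on q; the coupling of y1 and y2 is free.
    """
    n1, n2, ny = P.shape
    P_1y = P.sum(axis=1)                      # p(y1, y), shape (n1, ny)
    P_2y = P.sum(axis=0)                      # p(y2, y), shape (n2, ny)

    # One (n1, n2) slice of q per value of y.
    Q = [cp.Variable((n1, n2), nonneg=True) for _ in range(ny)]
    q12 = sum(Q)                              # q(y1, y2), linear in Q

    constraints = []
    for y in range(ny):
        constraints.append(cp.sum(Q[y], axis=1) == P_1y[:, y])   # match p(y1, y)
        constraints.append(cp.sum(Q[y], axis=0) == P_2y[:, y])   # match p(y2, y)

    # H_q(Y | Y1, Y2) = -sum_{y1,y2,y} q(y1,y2,y) log[ q(y1,y2,y) / q(y1,y2) ],
    # so maximizing it is equivalent to minimizing the summed relative entropy.
    objective = cp.Minimize(sum(cp.sum(cp.rel_entr(Q[y], q12)) for y in range(ny)))
    cp.Problem(objective, constraints).solve()

    return np.stack([Q[y].value for y in range(ny)], axis=-1)    # q*(y1, y2, y)
```

Expressing the concave conditional-entropy objective through rel_entr is what makes it acceptable to a disciplined-convex-programming solver; the returned q* can then be plugged into the mutual-information expressions above to obtain the interaction estimates.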
Since the support of the label space 𝒴 is usually small and discrete for classification, or small and continuous for regression, we can represent all valid joint distributions q(y_1,y_2,y) as a set of tensors Q of shape |𝒴| × |𝒴| × |𝒴| with each entry representing Q[i,j,k] = q(Y_1=i,Y_2=j,Y=k). The problem then boils down to optimizing over tensors Q that are valid joint distributions and that match marginals over each modality and the label (i.e., making sure Q ∈Δ_p). Given a tensor parameter Q, our objective is H_Q(Y | Y_1, Y_2), which is concave. This is therefore a convex optimization problem and the marginal constraints can be written as linear constraints. Given a dataset D = {(y_1,y_2,y)_i=1^n}, p(y_1,y) and p(y_2,y) are first estimated before enforcing q(y_1,y) = p(y_1,y) and q(y_2,y) = p(y_2,y) through linear constraints: the 3D-tensor Q summed over the second dimension gives q(y_1,y) and summed over the first dimension gives q(y_2,y). Our final optimization problem is given by max_Q H_Q(Y | Y_1, Y_2), such that ∑_y_2 Q = p(y_1,y), ∑_y_1 Q = p(y_2,y), Q ≥ 0, ∑_y_1,y_2,y Q = 1. Since this is a convex optimization problem with linear constraints, CVXPY <cit.> returns the exact answer Q^* efficiently. Plugging the learned Q^* into equations (<ref>)-(<ref>) yields the desired estimates for redundancy, uniqueness, and synergy. Therefore, this estimator can be helpful in automatically converting partial labels annotated by humans in existing multimodal datasets <cit.> into information decomposition interactions, yielding consistent, comparable, and standardized estimates. § EXPERIMENTS In this section, we design experiments to compare the annotation of multimodal interactions via randomized partial labels, counterfactual labels, and information decomposition into redundancy, uniqueness, and synergy. §.§ Experimental setup §.§.§ Datasets and tasks Our experiments involve a large collection of datasets spanning the language, visual, and audio modalities across affective computing and multimedia. We summarize the datasets used in Table <ref> and provide more details here: 1. VQA 2.0 <cit.> is a balanced version of the popular VQA <cit.> dataset, constructed by collecting complementary images such that every question is associated with a pair of similar images that result in two different answers to the same question. This reduces the occurrence of spurious correlations in the dataset and enables the training of more robust models. 2. CLEVR <cit.> is a dataset for studying the ability of multimodal systems to perform visual reasoning. It contains 100,000 rendered images and about 853,000 unique automatically generated questions that test visual reasoning abilities such as counting, comparing, logical reasoning, and memory. 3. MOSEI is a collection of 22,000 opinion video clips annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-millisecond annotated audio features <cit.>. Sentiment intensity is annotated in the range [-3,+3]. Each video is collected from YouTube with a focus on video blogs which reflect the real-world distribution of speakers expressing their behaviors through monologue videos. 4. UR-FUNNY is a dataset of humor detection in human speech <cit.> since humor is an inherently multimodal communicative tool involving the effective use of words (text), accompanying gestures (visual), and prosodic cues (acoustic).
UR-FUNNY consists of more than 16,000 video samples from TED talks covering speakers from various backgrounds, ethnic groups, and cultures. 5. MUStARD is a multimodal video corpus for research in sarcasm detection <cit.> compiled from popular TV shows including Friends, The Golden Girls, The Big Bang Theory, and Sarcasmaholics Anonymous. MUStARD consists of 690 audiovisual utterances annotated with sarcasm labels. Sarcasm requires careful modeling of complementary information, particularly when the information from each modality does not agree with each other. Overall, the datasets involved in our experiments cover diverse modalities such as images, video, audio, and text, with prediction tasks spanning humor, sarcasm sentiment, emotions, and question-answering from affective computing and multimedia. §.§.§ Annotation details Participation in all annotations was fully voluntary and we obtained consent from all participants prior to annotations. The authors manually took anonymous notes on all results and feedback in such a manner that the identities of annotators cannot readily be ascertained directly or through identifiers linked to the subjects. Participants were not the authors nor in the same research groups as the authors, but they all hold or are working towards a graduate degree in a STEM field and have knowledge of machine learning. None of the participants knew about this project before their session and each participant only interacted with the setting they were involved in. We sample 10 datapoints from each of the 5 datasets in Table <ref> and give them to different groups of 3 annotators for each of the following studies: (1) annotating partial labels via random assignment, (2) annotating counterfactual labels, and (3) annotating information decomposition. We summarize the results and key findings: §.§ We show the agreement scores of partial and counterfactual labels in Table <ref> and note some observations below: * Comparing partial with counterfactual labels: Counterfactual label agreement (0.74) is higher than randomized label agreement (0.58). In particular, annotating the video-only modality (y_1) for video datasets in the randomized setting appears to be confusing with an agreement of only 0.51. We hypothesize that this is due to the challenge of detecting sentiment, sarcasm, and humor in videos without audio and when no obvious facial expression or body language is shown. Furthermore, we observe an increase in confidence in predicting the label when adding the second modality in the counterfactual setting versus showing both modalities upfront in the randomized setting: 4 vs 3.28. * Agreement and confidence datasets: We examined the agreement for each dataset in the randomized and counterfactual settings respectively. In both settings, we found MOSEI is the easiest dataset with the highest agreement of 0.75, 0.60, 0.65 for annotating y_1, y_2, and y and 0.88, 0.66, 0.83, 0.91 for annotating y_1, y_1+2, y_2, and y_2+1. Meanwhile, MUStARD is the hardest, with the agreement as low as -0.21, 0.04, and 0.17 in the randomized setting. The average confidence for annotating partial labels is actually high (above 3.5) for all datasets except unimodal predictions for VQA and CLEVR, which is as low as 0.43 and 0.33. This is understandable since these two image-based question-answering tasks are quite synergistic and cannot be performed using only one of the modalities, and annotator confidence when seeing both modalities is a perfect 5/5. 
* Effect of counterfactual order: Apart from a slight decrease in agreement in labeling y_1 first then y_1+2 and the slight increase in agreement in y_2 then y_2+1, we do not observe a significant difference caused by the counterfactual order. This is confirmed by the qualitative feedback from annotators: one responded that they found no difference between both orders and were confused why they were tasked to perform both. Overall, we find that while both partial and counterfactual labels are reasonable choices for quantifying multimodal interactions, the annotation of counterfactual labels yields higher agreement and confidence than partial labels via random assignment. §.§ We now turn our attention to annotating information decomposition. Referencing the average annotated interactions in Table <ref> with agreement scores in Table <ref>, we explain our findings regarding annotation quality and consistency. We also note qualitative feedback from annotators regarding any challenges they faced. * General observations on interactions, agreement, and confidence: The annotated interactions align with prior intuitions on these multimodal datasets and do indeed explain the interactions between modalities, such as VQA and CLEVR with significantly high synergy, as well as language being the dominant modality in sentiment, humor, and sarcasm (high U_1 values). Overall, the Krippendorff's alpha for inter-annotator agreement in directly annotating the interactions is quite high (roughly 0.7 for each interaction) and the average confidence scores are also quite high (above 4 for each interaction), indicating that the human-annotated results are reasonably reliable. * Confusion regarding uniqueness and synergy in video datasets: We observe that there is some confusion between uniqueness in the language modality and synergy in the video datasets, resulting in cases of low agreement in annotating U_1 and S: -0.09, -0.07 for MOSEI, -0.14, -0.03 for UR-FUNNY and -0.08, -0.04 for MUStARD respectively. We believe this is due to subjectivity in interpreting whether sentiment, humor, and sarcasm are present in language only or present only when contextualizing both language and video. * Confusion regarding information decomposition in non-video datasets: On non-video datasets, there are cases of disagreement due to the subjective definitions of information decomposition. For example, there was some confusion regarding VQA and CLEVR, where images are the primary source of information that must be selectively filtered by the question. This results in response synergy but information redundancy and uniqueness. One annotator consistently annotated high visual uniqueness as the dominant interaction, while the other two recognized synergy as the dominant interaction. The agreement of annotating synergy is consequently low (-0.04 for VQA). * On presence vs absence of an attribute: We further investigated the difference between agreement and confidence in the presence or absence of an attribute (e.g., humor or sarcasm). Intuitively, the presence of an attribute is clearer: taking the example of synergy, humans can judge that there is no inference of sarcasm from text only and there is no inference of sarcasm from the visual modality only, but there is sarcasm when both modalities interact together <cit.>. Indeed, we examined videos that show and do not show an attribute separately and found in general, humans reached higher agreement on annotating attribute-present videos. 
The agreement of annotating S is 0.13 when the attribute is present, compared to -0.10 when absent. Overall, we find that while annotating information decomposition can perform well, there are some sources of confusion regarding certain interactions and during the absence of an attribute. §.§ Converting partial and counterfactual labels to information decomposition Finally, we present results on converting partial and counterfactual labels into interactions using our information-theoretic method (PID). We report these results in Table <ref> in the rows called Partial+PID and Counterfactual+PID, and note the following: * Partial+PID vs counterfactual+PID: In comparing conversions on both partial and counterfactual labels, we find that the final resulting interactions are very consistent with each other: the highest interaction is always the same across the datasets and the relative order of interactions are also maintained. * Comparing with directly annotated interactions: In comparison to the interaction that human annotators rate as the highest, PID also assigns the largest magnitude to the same interaction (S for VQA 2.0 and CLEVR, U_1 for UR-FUNNY and MUStARD), so there is strong agreement. For MOSEI there is a small difference but both R and U_1 are almost equally high when annotated by humans, which matches the highest estimated R by PID. * Normalized comparison scale: Observe that the converted results fall into a new scale and range, especially for the MOSEI, UR-FUNNY, and MUStARD video datasets. This is expected since PID conversion inherits the properties of information theory where R+U_1+U_2+S add up to the total information that the two modalities provide about a task, indicating that the three video datasets are more subjective and are harder to predict. * Propagation of subjectivity: On humor and sarcasm, the subjectivity in initial human partial labeling can be propagated when we subsequently apply automatic conversion - after all, we do not expect the automatic conversion to change the relative order apart from estimating interactions in a principled way. Therefore, we believe that the conversion method we proposed is a stable method for estimating information decomposition, combining human-in-the-loop labeling of partial labels (which shows high agreement and scales to high-dimensional data) with information-theoretic conversion which enables comparable scales, normalized values, and well-defined distance metrics. §.§ An overall guideline Given these findings, we summarize the following guidelines for quantifying multimodal fusion interactions: * For modalities and tasks that are more objective (e.g., visual question answering), direct annotation of information decomposition is a reliable alternative to conventional methods of partial and counterfactual labeling to study multimodal interactions. * For modalities and tasks that may be subjective (e.g., sarcasm, humor), it is useful to obtain counterfactual labels before using PID conversion to information decomposition values, since counterfactual labeling exhibits higher annotator agreement while PID conversion is a principled method to obtain interactions. § CONCLUSION Our work aims to quantify various categorizations of multimodal interactions using human annotations. 
Through a comprehensive study of partial labels, counterfactual labels, and information decomposition, we elucidated several pros and cons of each approach and proposed a hybrid estimator that can convert partial and counterfactual labels to information decomposition interaction estimates. On real-world multimodal fusion tasks, we show that we can estimate interaction values accurately and efficiently which paves the way towards a deeper understanding of these multimodal datasets. Limitations and future work: The annotation schemes in this work are limited by the subjectivity of the modalities and task. Automatic conversion of partial labels to information decomposition requires the label space to be small and discrete (i.e., classification), and does not yet extend to regression or text answers unless approximate discretization is first performed. Finally, future work can also scale up the human annotations to more datapoints and fusion tasks, and ask annotators to provide their explanations for ratings that have a low agreement. § ACKNOWLEDGEMENTS This material is based upon work partially supported by Meta, National Science Foundation awards 1722822 and 1750439, and National Institutes of Health awards R01MH125740, R01MH132225, R01MH096951 and R21MH130767. PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. RS is supported in part by ONR N000141812861, ONR N000142312368 and DARPA/AFRL FA87502321015. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Meta, Carnegie Mellon University’s Center for Machine Learning and Health, ONR, DARPA, or AFRL, and no official endorsement should be inferred. Finally, we would also like to acknowledge NVIDIA’s GPU support. ACM-Reference-Format § HUMAN ANNOTATION DETAILS §.§ Annotating partial labels We asked 3 annotators to predict the partial labels in a randomized setting. For each annotator, we asked them to annotate y_1 then y_2 given only modality 1 or 2 respectively, and finally y given both modalities. This completion order is designed on purpose to minimize possible memorization of the data so that the annotators can provide completely independent unimodal and multimodal predictions on the label. When annotating the visual modality of the video datasets, we explicitly require the annotators to mute the audio and predict the partial labels based only on the video frames. After that, all annotators are asked to provide a confidence score on a scale of 0 (no confidence) to 5 (high confidence) about their annotations. The confidence scale is applied to all annotation settings below. We aggregated annotator A's y_1 response, annotator B's y_2 response, and annotator C's y response as one set of complete partial labels. Similarly, we collected B's y_1, C's y_2, and A's y as the second set, C's y_1, A's y_2, and B's y as the third set. §.§ Annotating counterfactual labels We asked 6 annotators to predict the counterfactual labels in this setting. For each group of 2 annotators, we asked the first annotator to annotate partial labels y_1 given only the first modality and provide confidence scores, then presented them with the other modality and asked for their new predictions y_1+2 and corresponding confidence ratings. We asked the second annotator to predict y_2 similarly with only the second modality and then y_2+1 with both modalities presented. 
§.§ We asked 3 annotators to directly annotate the information decomposition values. Given both modalities, each annotator is asked to provide a rating on a scale of 0 (none at all) to 5 (large extent) for the following questions that correspond to R, U_1, U_2, and S respectively: * The extent to which both modalities enable them to make the same predictions about the task; * The extent to which modality 1 enables them to make a prediction that they would not if using modality 2; * The extent to which modality 2 enables them to make a prediction that they would not if using modality 1; * The extent to which both modalities enable them to make a prediction that they would not if using either modality individually. Finally, they are asked to rate their confidence for each rating they provided.
http://arxiv.org/abs/2306.06381v1
20230610083916
INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation
[ "Wenhao Zhu", "Jingjing Xu", "Shujian Huang", "Lingpeng Kong", "Jiajun Chen" ]
cs.CL
[ "cs.CL" ]
INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation Wenhao Zhu, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen ======================================================================================================================== Neural machine translation has achieved promising results on many translation tasks. However, previous studies have shown that neural models induce a non-smooth representation space, which harms their generalization results. Recently, kNN-MT has provided an effective paradigm for smoothing predictions based on neighbor representations during inference. Despite promising results, kNN-MT usually incurs a large inference overhead. We propose an effective training framework to directly smooth the representation space by adjusting representations of kNN neighbors with a small number of new parameters. The new parameters are then used to refresh the whole representation datastore to get new kNN knowledge asynchronously. This loop keeps running until convergence. Experiments on four benchmark datasets show that INK achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02× memory space and a 1.9× inference speedup[Code will be released at <https://github.com/OwenNJU/INK>]. § INTRODUCTION Neural machine translation (NMT) has achieved promising results in recent years <cit.>. The goal of NMT is to learn a generalized representation space that adapts to diverse scenarios. However, recent studies have shown that neural networks, such as BERT and GPT, induce a non-smooth representation space, limiting their generalization abilities <cit.>. In NMT, we also observe a similar phenomenon in the learned representation space, where low-frequency tokens disperse sparsely, even for a strong NMT model (more details are described in the Experiments section). Due to this sparsity, many “holes” can be formed. When such a model is used to translate examples from an unseen domain, the performance drops sharply <cit.>. Recently, k-Nearest-Neighbor Machine Translation (kNN-MT) <cit.> has provided an effective solution to smooth predictions by equipping an NMT model with a key-value datastore. For each entry, the value is the target token and the key is the contextualized representation at the target position. It requires a training set to record tokens and representations. By aggregating nearest neighbors during inference, the NMT model can achieve decent translation results <cit.>. Despite this success, kNN-MT also brings new issues as the scale of the training data increases. Retrieving neighbors from a large datastore <cit.> at each decoding step is time-consuming <cit.>. Furthermore, once the datastore is constructed, representations cannot be easily updated, limiting the performance ceiling of kNN-MT. Given the above strengths and weaknesses of kNN-MT, we propose to directly smooth the representation space with a small number of parameters. In this paper, we propose a training framework, INK, to iteratively refine the representation space with the help of extracted kNN knowledge (Fig. <ref>). Specifically, we adjust the representation distribution by aligning three kinds of representations with Kullback-Leibler (KL) divergence to train a small number of adaptation parameters. First, we align the contextualized representation and its target embedding to keep semantic meanings.
Second, we align the contextualized representation with the embeddings of its extracted kNN tokens, and align the contextualized representations of the same target token, to address the sparse dispersion problem. After each training epoch, we refresh the datastore asynchronously with the refined model to update the kNN representations. During inference, we only load the off-the-shelf NMT model and the tuned adaptation parameters. We conduct experiments on four benchmark datasets. Experimental results show that our framework brings average gains of 1.99 COMET and 1.0 BLEU. Compared with the state-of-the-art kNN-MT method (i.e., Robust kNN-MT), INK achieves better translation performance with 0.02× memory space and 1.9× inference speed. Our contributions can be summarized as follows: * We propose a training framework to smooth the representation space according to kNN knowledge. * We devise an inject-and-refine training loop in our framework. Experiments show that refreshing the datastore asynchronously matters. * Our system achieves promising improvements and beats the state-of-the-art kNN-MT system. § BACKGROUND This section briefly introduces the working process of kNN-MT and the architecture of the adapter <cit.>, which we will use to improve the representation space in our framework. §.§ kNN-MT Given an off-the-shelf NMT model ℳ and training set 𝒞, kNN-MT memorizes training examples explicitly with a key-value datastore 𝒟 and uses 𝒟 to assist the NMT model during inference. Memorize representations into datastore Specifically, we feed each training example (X, Y) in 𝒞 into ℳ in a teacher-forcing manner <cit.>. At time step t, we record the contextualized representation[By default, the last decoder layer's output is used as the contextualized representation of the translation context (X, Y_<t).] h_t as the key and the corresponding target token y_t as the value. We then put the key-value pair into the datastore. In this way, the full datastore 𝒟 can be created through a single forward pass over the training dataset 𝒞: 𝒟 = {(h_t, y_t)  | ∀ y_t ∈ Y, (X, Y)∈𝒞} where each datastore entry explicitly memorizes the mapping relationship between the representation h_t and its target token y_t. Translate with memorized representations During inference, the contextualized representation of the test translation context (X, Y_<t) is used to query the datastore for nearest neighbor representations and their corresponding target tokens 𝒩_k={(ĥ, ŷ)}_1^k. Then, the retrieved entries are converted to a distribution over the vocabulary: p_knn(y|X, Y_<t) ∝∑_(ĥ, ŷ)∈𝒩_k1(y=ŷ) e^-d(h_t,ĥ)/T where h_t denotes h(X, Y_<t) for short, d measures the Euclidean distance, and T is the temperature. §.§ Adapter Previous research shows that the adapter can be an efficient plug-and-play module for adapting an NMT model <cit.>. Commonly, an adapter layer is inserted after each encoder and decoder layer of ℳ. The architecture of the adapter layer is simple, comprising a feed-forward layer and a normalization layer. Given the output vector z∈ℛ^d of a specific encoder/decoder layer, the computation of the adapter layer can be written as: z' = W_2^T [W_1^T · f(z)] + z where f denotes layer normalization, and W_1∈ℛ^d× d', W_2∈ℛ^d' × d are two projection matrices with inner dimension d'. The bias terms and activation function are omitted from the equation for clarity. z' is the output of the adapter layer. § APPROACH: INK This section introduces our training framework, INK. The key idea of the proposed approach is to use kNN knowledge to smooth the representation space.
The training process is built on a cycled loop: extracting kNN knowledge to adjust representations via a small adapter. The updated parameters are then used to refresh and refine the datastore to get new kNN knowledge. We define three kinds of alignment loss to adjust representations, which are described in Section <ref>, Section <ref>, and Section <ref>. An illustration of the proposed framework is shown in Figure <ref>. §.§ Align Contextualized Representations and Token Embeddings The basic way to optimize the adapter is to minimize the KL divergence between the NMT system's prediction probability p_nmt and the one-hot golden distribution p_gold: ℒ^a_t = D_KL[ p_gold(y|X, Y_<t)∥ p_nmt(y|X, Y_<t) ] = -log∑_(w, v)∈ℰ1(v=y_t)κ(h_t, w)/∑_(w,v)∈ℰκ(h_t, w) where ℰ is the embedding matrix. w and v denote the token embedding and its corresponding token respectively. h_t denotes the contextualized representation h(X, Y_<t). y_t denotes the target token. κ(h_t, w)=e^h_t^T w. Following the widely-accepted alignment-and-uniformity theory <cit.>, this learning objective aligns the contextualized representation h_t with the token embedding of its corresponding target token. §.§ Align Contextualized Representations and kNN Token Embeddings Previous research in kNN-MT has shown that the nearest neighbors in the representation space can produce a better estimate by aggregating kNN neighbors <cit.>. Apart from the reference target token, the retrieval results provide some other reasonable translation candidates. Taking the translation case in Figure <ref> as an example, retrieval results provide three candidate words, where both “happens” and “occurs” are possible translations. Compared with the basic one-hot supervision signal, the diverse kNN knowledge in the datastore can be beneficial for building a representation space with more expressive abilities. Therefore, we extract kNN knowledge by using the contextualized representation h_t to query the datastore for nearest neighbors 𝒩_k={(ĥ, ŷ)}_1^k (illustrated in Fig. <ref>). For more stable training, we reformulate the computation of the kNN distribution as kernel density estimation (KDE) <cit.>. Formulation The general idea of KDE is to estimate the probability density of a point by referring to its neighborhood, which shares the same spirit as kNN-MT. The computation of the kNN distribution can be written as: p_knn(y|X,Y_<t) = ∑_(ĥ, ŷ)∈𝒩_k1(y=ŷ)κ(h_t,ĥ)/∑_(ĥ, ŷ)∈𝒩_kκ(h_t,ĥ) where κ can be set as any kernel function. Thus, Equation <ref> can be seen as a special case of Equation <ref> by setting κ(·, ·)=e^-d(·, ·)/T. After extracting kNN knowledge, we use it to smooth the representation space by minimizing the KL divergence between the kNN distribution p_knn and the NMT distribution p_nmt: ℒ^i_t = D_KL[ p_knn(y|X, Y_<t)∥ p_nmt(y|X, Y_<t) ] = -∑_y̅∈𝒴 p_knn(y̅)·log∑_(w, v)∈ℰ1(v=y̅)κ(h_t, w)/∑_(w,v)∈ℰκ(h_t, w)· p_knn(y̅) where 𝒴 denotes the set of distinct tokens appearing in the nearest neighbors 𝒩_k and p_knn(y̅) denotes p_knn(y=y̅|X, Y_<t) for short. ℰ is the embedding matrix. w and v denote the token embedding and its corresponding token respectively. h_t denotes h(X, Y_<t) for short. κ is the kernel function. Following the widely-accepted alignment-and-uniformity theory <cit.>, this learning objective encourages h_t to align with the embeddings of retrieved reasonable tokens, e.g., “occurs”, “happens”. §.§ Align Contextualized Representations of the Same Target Token Although kNN knowledge could provide fruitful translation knowledge, it is also sometimes noisy <cit.>.
For example, in Figure <ref>, the retrieved word “works” is a wrong translation here. To address this problem, we propose to adjust local representation distribution. Specifically, our solution is to optimize the kNN distribution towards the reference distribution by minimizing the KL divergence between the gold distribution p_gold and kNN distribution p_knn. Thanks to the new formulation (Eq. <ref>), we can choose kernel function here to achieve better stability for gradient optimization. In the end, we find that exponential-cosine kernel works stably in our framework: κ(h, h_t)=e^cos(h, h_t) Therefore, the loss function can be written as: ℒ^r_t = D_KL[ p_gold(y|X, Y_<t)∥ p_knn(y|X, Y_<t) ] = -log∑_(ĥ, ŷ)∈𝒩_k1(ŷ=y_t)κ(h_t,ĥ)/∑_(ĥ, ŷ)∈𝒩_kκ(h_t,ĥ) where 𝒩_k is the retrieved k nearest neighbors. ĥ and ŷ denotes the neighbor representations and the corresponding target token. h_t denotes h(X, Y_<t) for short. Following the widely-accepted alignment-and-uniformity theory <cit.>, this learning objective aligns the contextualized representation of the same target token. With this goal, we can make the kNN knowledge less noisy in the next training loop by refreshing the datastore with the updated representations. §.§ Overall Training Procedure The combined learning objective To summarize, we adjust representation space via a small adapter with the combination of three alignment loss ℒ_t^a, ℒ_t^i, ℒ_t^r. Given one batch of training examples ℬ={(X,Y)}, the learning objective is minimizing the following loss: ℒ = 1/|ℬ|∑_(X,Y)∈ℬ∑_t(ℒ_t^a + αℒ^i_t + βℒ^r_t) where α, β is the interpolation weight. We notice that, in general, all three learning objective pull together closely related vectors and push apart less related vectors in the representation space, which has an interesting connection to contrastive learning <cit.> by sharing the similar goal. Refresh datastore asynchronously In our training loop, once the parameters are updated, we refresh the datastore with the refined representation. In practice, due to the computation cost, we refresh the datastore asynchronously at the end of each training epoch to strike a balance between efficiency and effectiveness As the training reaches convergence, we drop the datastore and only use the optimized adapter to help the off-the-shelf NMT model for the target domain translation. § EXPERIMENTS §.§ Setting We introduce the general experiment setting in this section. For fair comparison, we adopt the same setting as previous research of kNN-MT <cit.>, e.g., using the same benchmark datasets and NMT model. For training , we tune the weight α and β among {0.1, 0.2, 0.3}. More implementation details are reported in the appendix. Target Domain Data We use four benchmark German-English dataset (Medical, Law, IT, Koran) <cit.> and directly use the pre-processed data[<https://github.com/zhengxxn/adaptive-knn-mt>] released by <cit.>. Statistics of four datasets are listed in Table <ref>. NMT Model We choose the winner model[<https://github.com/facebookresearch/fairseq/tree/main/examples/wmt19>] <cit.> of WMT'19 German-English news translation task as the off-the-shelf NMT model for translation and datastore construction, which is based on the big Transformer architecture <cit.>. Baselines For comparison, we consider three kNN-MT systems, which use datastore in different fashions. We report the translation performance of the adapter baseline to show the effectiveness of our training framework. 
Besides, we report the translation performance of kNN-KD, another work that uses kNN knowledge to help NMT. * V-kNN <cit.>, the vanilla version of k-nearest-neighbor machine translation. * A-kNN <cit.>, an advanced variant of kNN-MT, which dynamically decides how to use the retrieval results and achieves more stable performance. * R-kNN <cit.>, the state-of-the-art kNN-MT variant, which dynamically calibrates the kNN distribution and controls more hyperparameters, e.g., the temperature and the interpolation weight. * Adapter <cit.>, which adjusts representations by simply aligning contextualized representations and token embeddings. * kNN-KD <cit.>, which trains an NMT model from scratch by distilling kNN knowledge into it. Metric To evaluate translation performance, we use the following two metrics: * BLEU <cit.>, the standard evaluation metric for machine translation. We report case-sensitive detokenized sacrebleu[<https://github.com/mjpost/sacrebleu>]. * COMET <cit.>, a recently proposed metric with stronger correlation with human judgement. We report the COMET score computed by the publicly available wmt20-comet-da[<https://github.com/Unbabel/COMET>] model. Approximate Nearest Neighbor Search We follow previous kNN-MT studies and use a Faiss[<https://github.com/facebookresearch/faiss>] index <cit.> to represent the datastore and accelerate nearest-neighbor search. In general, the key file can be removed to save memory space once the index is built; the exception is R-kNN, which relies on the key file to re-compute accurate distances between the query representation and the retrieved representations. §.§ Main Results We conduct experiments to explore the following questions, to better understand the effectiveness of our proposed framework and the relationship between the two ways of smoothing predictions: * RQ1: Can we smooth the representation space via a small adapter and set the datastore aside during inference? * RQ2: How much improvement can be brought by using kNN knowledge to adjust the representation distribution? * RQ3: Will using the adapter and the datastore together bring further improvement? INK achieves the best performance by smoothing the representation space Table <ref> presents the comparison results of different systems. Due to the poor quality of its representation space, the off-the-shelf NMT model does not perform well. The performance of kNN-KD is unstable, e.g., it performs poorly on the IT dataset. kNN-MT systems generate more accurate translations. Among them, R-kNN achieves the best performance, which is consistent with previous observations <cit.>. Our system achieves the best translation performance with the least memory space. Compared with the strongest kNN-MT system, i.e., R-kNN, INK achieves better performance on three out of four domains (Medical, IT, Koran). On average, INK outperforms R-kNN with an improvement of 4.84 COMET and 0.31 BLEU while occupying 0.02× memory space. Representation refinement according to kNN knowledge brings large performance improvements In Table <ref>, compared with the adapter baseline that simply aligns the contextualized representations and word embeddings, INK outperforms it by 1.99 COMET and 1.00 BLEU on average, which demonstrates the effectiveness of adjusting the representation distribution with kNN knowledge. To better show the effect of the framework, we use adapters of different sizes to refine the representation space. Figure <ref> shows the BLEU scores and added memory of different systems on four datasets.
We can see that the representation-refined system occupies much less memory than the datastore-enhanced system. In general, INK locates on the top-right of each figure, which means that it achieves higher BLEU scores with less memory space. In most cases, INK outperforms the adapter baseline by a large margin, which demonstrates the superiority of our training framework. Jointly applying the adapter and the datastore can further smooth predictions Given that both INK and the datastore can smooth predictions, we take a step further and explore using them together as a hybrid approach. Specifically, on top of our INK system, we follow the fashion of R-kNN and use an additional datastore to assist it during inference. Experiment results are shown in Figure <ref>. On three out of four datasets, we observe further improvements over INK. On the Law dataset, the performance improvement even reaches 4.19 BLEU. On the Medical and IT datasets, the performance improvement is 0.71 BLEU and 0.79 BLEU respectively. This phenomenon indicates that the representation space of the NMT model is not fully refined by the adapter. If a more effective framework can be designed, the benefit of smoothing the representation space will be revealed further. The results on the Koran dataset are an exception here. We suggest that this is because of the sparse training data, which makes it difficult to accurately estimate the kNN distribution during inference. § ANALYSIS AND DISCUSSION We conduct more analysis in this section to better understand our system. INK greatly refines the representation space of the NMT model Inspired by <cit.>, we evaluate the quality of the representation space by computing the mean kNN accuracy, which measures the ratio of the k nearest representations sharing the same target token as the query representation. Ideally, all of the representations in a neighborhood should share the same target token. Here, we use the contextualized representations from the unseen development set as queries. For each query, the nearest representations from the training set are checked. Table <ref> shows the evaluation results on the Medical dataset. INK consistently achieves higher accuracy than the NMT model. For low-frequency tokens, the representation quality gap is especially large. Ablation study To show the necessity of the different techniques proposed in our framework, we conduct an ablation study in this section. In Table <ref>, we can see that keeping the datastore frozen degrades the translation performance most, which demonstrates the necessity of refreshing the datastore asynchronously during training. Removing either of the two alignment losses (ℒ^i_t and ℒ^r_t) causes the translation performance to decline, which validates their importance for adjusting the representation distribution. INK enjoys faster inference speed After refining the representation space, our adapted system no longer needs to query the datastore during inference. We compare the inference speed [We evaluate the inference speed on a single NVIDIA Titan-RTX.] of INK and R-kNN. Considering that decoding with a large batch size is a more practical setting <cit.>, we evaluate their inference speed with increasing batch sizes. To make our evaluation results more reliable, we repeat each experiment three times and report the averaged inference speed. Table <ref> shows the results. As the decoding batch size grows, the speed gap between the two adapted systems becomes larger. Our INK can achieve up to 1.9× speedup.
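For completeness, the mean kNN accuracy used in the representation-quality analysis above can be computed as in the following brute-force sketch (the actual evaluation presumably relies on a Faiss index for the search; names and shapes are assumptions).

import torch

def mean_knn_accuracy(query_reps, query_tokens, store_keys, store_vals, k=8):
    """Ratio of the k nearest stored representations whose target token matches
    the query's reference target token, averaged over all queries.

    query_reps:   (M, d) representations computed on the development set
    query_tokens: (M,)   their reference target tokens
    store_keys:   (N, d) datastore representations from the training set
    store_vals:   (N,)   their target tokens
    """
    dists = torch.cdist(query_reps, store_keys)              # (M, N) Euclidean distances
    knn_idx = dists.topk(k, dim=-1, largest=False).indices   # (M, k) nearest-neighbor indices
    matches = store_vals[knn_idx] == query_tokens[:, None]   # (M, k) booleans
    return matches.float().mean().item()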
Besides, since neural parameters allow highly parallelizable computation, the inference speed of INK may be further accelerated in the future with the support of non-autoregressive decoding <cit.>. § RELATED WORK Nearest Neighbor Machine Translation kNN-MT presents a novel paradigm for enhancing an NMT system with a symbolic datastore. However, kNN-MT has two major flaws: (1) querying the datastore at each decoding step is time consuming and the datastore occupies a large space; (2) noisy representations in the datastore cannot easily be updated, which causes the retrieval results to include noise. Recently, a line of work has focused on optimizing system efficiency. <cit.> and <cit.> propose to prune datastore entries and conduct dimension reduction to compress the datastore. <cit.> propose to narrow down the search space in advance with word alignment to accelerate retrieval. <cit.> propose to retrieve a chunk of tokens at a time and to conduct retrieval only at a few decoding steps according to a heuristic rule. However, according to their empirical results, the translation performance always declines after efficiency optimization. To exclude noise from the retrieval results, <cit.> propose to dynamically decide the usage of retrieved nearest neighbors with a meta-k network. <cit.> propose to dynamically calibrate the kNN distribution and control more hyperparameters in kNN-MT. <cit.> propose to build the datastore with more powerful pre-trained models, e.g. XLM-R <cit.>. However, all of these methods rely on a full datastore during inference. When the training data becomes larger, the inference efficiency of these approaches becomes worse. Overall, it remains an open challenge to deploy a high-quality and efficient kNN-MT system. Using kNN knowledge to build better NMT models As the datastore stores a pile of helpful translation knowledge, recent research has started exploring the use of kNN knowledge in the datastore to build a better NMT model. As an initial attempt, <cit.> try to train a better NMT model from scratch by distilling kNN knowledge into it. Different from their work, we focus on smoothing the representation space of an off-the-shelf NMT model and enhancing its generalization ability via a small adapter. Besides, in our devised inject-and-refine training loop the datastore is kept asynchronously updated, while they use a fixed datastore. § CONCLUSION In this paper, we propose INK, a novel training framework to iteratively refine the representation space of the NMT model according to kNN knowledge. In our framework, we devise an inject-and-refine training loop, where we adjust the representation distribution by aligning three kinds of representations and refresh the datastore asynchronously with the refined representations to update the kNN knowledge. Experiment results on four benchmark datasets show that the INK system achieves an average gain of 1.99 COMET and 1.0 BLEU. Compared with the state-of-the-art kNN system (Robust kNN-MT), our INK also achieves better translation performance with 0.02× memory space and a 1.9× inference speed-up. § LIMITATION Despite the promising results, we also observe that refreshing and querying the datastore during training is time-consuming. Our proposed training framework usually takes 3× ∼ 4× training time. In future work, we will explore methods to improve training efficiency. Our training loop dynamically uses the latest datastore to inject knowledge into the neural network.
However, we find that the kNN knowledge still helps inference even after our training loop, which shows that there remains room to improve the effectiveness of knowledge injection. § ACKNOWLEDGEMENT We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by the National Science Foundation of China (No. 62176120) and the Liaoning Provincial Research Foundation for Basic Research (No. 2022-KF-26-02). § USED SCIENTIFIC ARTIFACTS Below we list the scientific artifacts used in our work. Our use of these artifacts is consistent with their intended use. * Fairseq (MIT license), a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization and other text generation tasks. * Faiss (MIT license), a library for approximate nearest neighbor search. § IMPLEMENTATION DETAILS We reproduce the baseline systems with their released code and implement our system with fairseq <cit.>. Adam is used as the optimizer and inverse sqrt as the learning rate scheduler, with 4k warm-up steps and a maximum learning rate of 5e-4. We set the batch size to 4096 tokens. All systems are trained on a single Tesla A100. During inference, we set the beam size to 4 and the length penalty to 0.6.
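To connect the implementation details with the training procedure of Section <ref>, one possible outline of the inject-and-refine loop is sketched below. This is an illustrative skeleton only: decoder_states and combined_loss are hypothetical helpers standing in for the forced-decoding pass and for the combined objective ℒ^a_t + αℒ^i_t + βℒ^r_t, and an exact IndexFlatL2 index is used instead of an approximate Faiss index for brevity.

import faiss
import numpy as np
import torch

def build_datastore(model, adapter, train_loader, d):
    """Inject: one forced-decoding pass over the training set, storing (key, value) pairs."""
    keys, vals = [], []
    with torch.no_grad():
        for batch in train_loader:
            h = decoder_states(model, adapter, batch)          # (n_tokens, d); hypothetical helper
            keys.append(h.cpu().numpy())
            vals.append(batch["target_ids"].view(-1).cpu().numpy())
    keys = np.concatenate(keys).astype("float32")
    vals = np.concatenate(vals)
    index = faiss.IndexFlatL2(d)                               # exact search, for illustration
    index.add(keys)
    return index, keys, vals

def train_ink(model, adapter, train_loader, d, epochs=10, k=8, alpha=0.1, beta=0.1):
    opt = torch.optim.Adam(adapter.parameters(), lr=5e-4)
    index, keys, vals = build_datastore(model, adapter, train_loader, d)
    for epoch in range(epochs):
        for batch in train_loader:
            h = decoder_states(model, adapter, batch)          # gradients flow through the adapter only
            _, nn_idx = index.search(h.detach().cpu().numpy().astype("float32"), k)
            loss = combined_loss(h, batch["target_ids"], keys, vals, nn_idx, alpha, beta)
            opt.zero_grad(); loss.backward(); opt.step()
        # refine: refresh the datastore asynchronously with the updated representations
        index, keys, vals = build_datastore(model, adapter, train_loader, d)
    return adapter                                             # the datastore is dropped at inference time

The 3×–4× training-time overhead mentioned in the limitation section comes precisely from the repeated build_datastore passes and the per-batch index queries.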
http://arxiv.org/abs/2306.09513v1
20230615211542
Second order quantitative bounds for unadjusted generalized Hamiltonian Monte Carlo
[ "Evan Camrud", "Alain Oliviero Durmus", "Pierre Monmarché", "Gabriel Stoltz" ]
math.PR
[ "math.PR", "cs.NA", "math.NA" ]
Open charm phenomenology with a multi-stage approach to relativistic heavy-ion collisions Charles Gale =========================================================================================   This paper provides a convergence analysis for generalized Hamiltonian Monte Carlo samplers, a family of Markov Chain Monte Carlo methods based on leapfrog integration of Hamiltonian dynamics and kinetic Langevin diffusion, that encompasses the unadjusted Hamiltonian Monte Carlo method. Assuming that the target distribution π satisfies a log-Sobolev inequality and mild conditions on the corresponding potential function, we establish quantitative bounds on the relative entropy of the iterates defined by the algorithm, with respect to π. Our approach is based on a perturbative and discrete version of the modified entropy method developed to establish hypocoercivity for the continuous-time kinetic Langevin process. As a corollary of our main result, we are able to derive complexity bounds for the class of algorithms at hand. In particular, we show that the total number of iterations to achieve a target accuracy ε >0 is of order d/ε^1/4, where d is the dimension of the problem. This result can be further improved in the case of weakly interacting mean field potentials, for which we find a total number of iterations of order (d/ε)^1/4. § INTRODUCTION We consider in this paper the problem of sampling from a target distribution π. This problem is ubiquitous in various fields such as statistical physics <cit.>, statistics <cit.>, and machine learning <cit.>. However, in most applications, the distribution π has a density with respect to a dominating measure known up to an intractable multiplicative constant. Markov chain Monte Carlo methods are now a family of popular algorithms for solving this problem. They consist in designing a Markov chain associated with a Markov kernel for which π is an invariant distribution. One of the best known MCMC instances is the family of Metropolis-Hastings algorithms <cit.>. In the case where the target distribution has a smooth and positive density with respect to the Lebesgue measure on ^d, denoted by π still, another class of MCMC algorithms is based on discretizations of continuous-time stochastic dynamics <cit.>. Famous examples of such MCMC methods are the Unadjusted Langevin Algorithm (ULA) <cit.> and Stochastic Gradient Langevin Dynamics <cit.>, which are based on the overdamped Langevin diffusion. Here we consider numerical schemes based on Hamiltonian-type dynamics, i.e., ideal Hamiltonian Monte Carlo and kinetic (or underdamped) Langevin diffusion. Although these two dynamics are different, they have many features in common. For example, both define an extended process on ^2d that has the product of π and the standard Gaussian distribution as invariant measure, denoted hereafter by μ. Also, the infinitesimal generators of these extended processes differ only in their symmetric part, while their antisymmetric part is the same and corresponds to the Hamiltonian dynamics associated with the potential U associated with π, , π∝exp(-U). From this observation it follows that common discretization strategies have been employed for both dynamics. Among these methods, those based on splitting techniques <cit.> are particularly attractive since they come with valuable properties and important convergence guarantees. In this paper, we are particularly interested in the family of splitting methods known as generalized Hamiltonian Monte Carlo (gHMC). 
This method has been shown to yield weak second order errors. More precisely, denoting by _ the Markov kernel associated with gHMC, where are the hyperparameters of the algorithm including the step size δ > 0 and the integration time >0, under suitable conditions, it holds (see e.g. <cit.>) that for any sufficiently regular function f : ^2d→, number of iterations k ∈ and initial distribution ν_0, |ν_0 _^k f - π(f)|≤ C_f(^-c_1 k + δ^2), for some constants C_f ≥ 0 and c_1 > 0 independent of δ. This result illustrates the advantages of using well-chosen splitting strategies compared to traditional Euler schemes, for which similar conclusions can be drawn but with a second term of lower order with respect to δ, typically linear. While weak error bounds already provide significant convergence guarantees, another line of research is concerned with establishing quantitative bounds for MCMC algorithms, paying particular attention to the dimension dependence <cit.>. Regarding the kinetic Langevin algorithm, existing works <cit.> analyze for most of them a modification of the Euler scheme. For this particular algorithm, <cit.> shows that when π satisfies a log-Sobolev inequality and under additional conditions on U, denoting by ν_k the distribution of the k-th iterate of the algorithm starting from ν_0 with step size δ > 0, for each k ∈, (ν_k | π) ≤ C_ν_0 (^-c_1 δ k + δ^2), for some constants C_ν_0≥ 0, c_1 > 0 independent of δ, where denotes the relative entropy or Kullback Leibler divergence. Note that the Pinsker inequality then implies the same type of bounds for the total variation distance, but with a second term of order δ. Quantitative bounds for splitting schemes of the kinetic Langevin diffusion are scarcer and up to our knowledge. Higher order quantitative bounds are established in <cit.> in this context for the case when the potential U is convex. On the other hand, regarding unadjusted HMC, combining results from <cit.> and <cit.> implies that for a fixed integration time >0 there exists c_1 > 0 such that for any k ∈ and initial distribution ν_0, 𝐖_1(ν̃_k, π) ≤ C_ν_0(^-c_1 k + δ^2) for some C_ν_0≥ 0, where ν_k is the distribution of the k-th iterates of unadjusted HMC starting from ν_0, and 𝐖_1 denotes the Wasserstein distance of order 1. This result once again highlights the improved accuracy of the leapfrog integrator. Finally, the analysis of gHMC in the Wasserstein distance has been conducted in <cit.>, but only for the strongly convex scenario. The main contribution of this paper is to extend and generalize the results we just mentioned and to analyze gHMC in the non-convex scenario. In particular, we show that gHMC achieves higher order accuracy than traditional Euler schemes in relative entropy, under the condition that π satisfies a log-Sobolev inequality, and additional relatively mild assumptions on the potential U. Roughly speaking, we establish a bound of the form (ν_0 ^k | μ) ≤ C_ν_0^-c_1 k + C_2δ^4 for some explicit C_ν_0,C_2 ≥ 0 and c_1 > 0. Our approach is based on a perturbative argument of the modified entropy approach initiated in <cit.>, and more precisely to its recent discrete-time variation in <cit.> for idealized gHMC. From our main result, we derive bounds on the total number of iterations k and a step size δ to achieve (ν_0 ^k | μ) ≤ for some > 0, where two cases can be distinguished. We obtain a bound on the number of gradient computations of order (d/^1/4) in the general case and an improved bound of order ((d/)^1/4) in a weakly interactive mean-field regime. 
The paper is organized as follows. The family of (unadjusted) generalized Hamiltonian Monte Carlo samplers is introduced in Section <ref>. Our main assumptions and results are stated in Section <ref>. They are discussed and compared to previous works in Section <ref>. Section <ref> is devoted to the proof of Theorem <ref>. Other more technical proofs are postponed to an Appendix. Notation. We denote by |x| the Euclidean norm of x ∈^d. We denote by ^k(^d) the set of functions from ^d to with continuous derivatives up to order k. For f∈^k(^d) and x∈^d, f stands for the gradient of f and ^k f, the k-th derivative of f. In addition, |^k f(x)| is the multilinear operator norm of ^k f(x) with respect to the Euclidean norm on ^d and ^k f_∞ = sup_x∈^d|^k f(x)|. For a d× d matrix , || stands for the operator norm on ^d with respect to the Euclidean norm. For a differentiable map Φ:^d →^p and x∈^d we write Φ(x) = (∂_x_iΦ_j(x))_i∈ 1,d,j∈ 1,p (where i stands for the line and j for the column). This notation is such that Φ∘Ψ = ΨΦ∘Ψ and is consistent in the case p=1 (since Φ is in that case the gradient of Φ). Notice that Φ is the transpose of what is most commonly named the Jacobian matrix of Φ. stands for the Lebesgue measure, N(0,) for the d-dimensional standard Gaussian distribution and 𝒫(^d) for the set of distribution of ^d endowed with its Borel σ-field denoted by ℬ(^d). We write ν≪μ as ν is absolutely continuous with respect to μ. We define the relative entropy (or Kullback-Leibler [KL-] divergence) and the Fisher information of ν_1 ∈() with respect to ν_2 ∈(), by ℋ(ν_1|ν_2) = ∫lnν_1/ν_2ν_2 if ν_1 ≪ν_2 + ∞ otherwise, ℐ(ν_1|ν_2) = ∫| ∇lnν_1/ν_2|^2 ν_1 if ν_1 ≪ν_2 + ∞ otherwise, where for a measurable function f: →, | f| is defined for any z ∈^d as | f|(z) = lim_r↓ 0 sup{|f(z)-f(y)|/|z-y| : y∈, 0<|y-z|⩽ r} . If f is continuously differentiable, note that ∇ f(z) is simply the norm of ∇ f(z) and therefore our notation is consistent. We write i,j the integer interval {i,…,j} for i,j∈ℤ. § NON-ASYMPTOTIC BOUNDS FOR SPLITTING SCHEMES FOR HAMILTONIAN TYPE DYNAMICS §.§ Splitting schemes for Hamiltonian type dynamics – gHMC Recall that we assume that the target distribution π admits a positive density with respect to the Lebesgue measure π∝exp(-U). In addition, we suppose the following condition on U. The potential U ∈^4(^d), and it exists L>0 such that |^2 U(x)| ⩽ L for any x∈^d. We introduce now more formally the two continuous dynamics that we are considered in this paper: Hamiltonian dynamics and kinetic (also referred to as underdamped) Langevin dynamics. As previously mentioned, these two dynamics leave the extended target distribution μ = π⊗(0,) invariant. Hamiltonian dynamics associated with the potential U defines the differential flow (ψ_t)_t ≥ 0 for any t ∈_+ and x,v ∈^d, as ψ_t(x,v)=(x_t,v_t) where (x_t,v_t)_t ≥ 0 is the solution of the Hamiltonian differential equation: ∂_t x_t = v_t , ∂_t v_t = -∇ U(x_t) , with (x_0,v_0)=(x,v) . As discussed in <cit.>, (ψ_t)_t ≥ 0 preserves μ, , if (X_0,V_0) has distribution μ then so does ψ_t(X_0,V_0) for all t ≥ 0, but the trajectory (ψ_t(X_0,V_0))_t ≥ 0 is not ergodic. Indeed, starting from any fixed initial conditions x_0,v_0∈, it remains in the corresponding level set {(x,v) ∈ : H(x,v) = H(x_0,v_0)} of the Hamiltonian function H defined by H(x,v) = U(x) + v^2/2 . To address this issue, a velocity randomization (or refreshment) at time >0 can be added. 
To this end, consider the Markov operator _η given, for η∈0,1 and any measurable and bounded function f :→, by _ηf(x,v) = ∫_^d fx, η v + √(1-η^2) g(g) g , where stands for the density with respect to of N(0,). Then, under mild assumptions, the resulting kernel, _η,f(x,v) = _η[f∘ψ_](x,v) , is ergodic with respect to μ. Recently, <cit.> used hypocoercivity techniques from <cit.> to show exponential convergence of _η, in a modified entropy with respect to μ. The second process which shares some important features with Hamiltonian dynamics is the underdamped Langevin diffusion: _s = _s s , _s = - U(_s) s - γ_s s + √(2γ) B_s , where (B_s)_s ≥ 0 is a d-dimensional Brownian motion and γ>0 is a damping parameter. It can easily be shown that under mild assumptions on U, the Hamiltonian dynamics with a suitably scaled refreshment weakly converges to (<ref>): for any s ≥ 0, x,v∈^d and smooth and bounded function f : →, lim_δ→ 0_η_δ,δ^s/δf(x,v) = _s f(x,v) where η_δ = ^-δγ, and (_s)_s≥ 0 is the Markov semigroup associated with (<ref>). We now present a family of splitting schemes which encompasses discretizations for both Hamiltonian and underdamped Langevin dynamics. This family of algorithms will be referred to as generalized Hamiltonian Monte Carlo (gHMC). This family is based on the Verlet discretization of (<ref>) defined at time k δ, k ∈ for a stepsize δ >0 by (x_k,v_k) = Φ^k_δ(x_0,v_0) where Φ^k_δ = Φ^k-1_δ∘Φ_δ, Φ^0_δ is the identity function and Φ_δ = Φ^(v)_δ/2∘Φ^(x)_δ∘Φ^(v)_δ/2, Φ^(v)_δ/2(x,v) = v -(δ/2) ∇ U(x) , Φ^(x)_δ(x,v) = x + δ v . It is well-known that the integrator Φ_δ is simplectic and reversible (see <cit.>), , it is invertible with inverse Φ_δ^-1 = ∘Φ_δ∘, (x,v) = (x,-v) . The gHMC algorithm then consists of the composition of a (possibly partial) velocity refreshment with a K-step Verlet scheme. Setting = (K,δ,η), this corresponds to the inexact Markov chain Monte Carlo (MCMC) method with Markov kernel specified by = _η_δ^K , where _δ corresponds to the deterministic kernel _δ((x,v),·) = _Φ_δ(x,v)(·) , for x,v ∈^d. For η=0 and K δ = for an integration time >0, _ corresponds to the usual unadjusted Hamiltonian Monte Carlo algorithm which is a discretized version of the ideal Hamiltonian dynamics _t,η defined by (<ref>). On the other hand, taking K=1 and the damping parameter η_δ defined in (<ref>), _ corresponds to a second order splitting scheme of (<ref>) as considered for example in <cit.>. Alternatively, taking η=0 and K=1 leads to the Euler scheme with step size δ^2/2 for the overdamped Langevin process. Finally under <Ref>, we write = δ K√(L), = (1-η)/ , which respectively represent, suitably scaled, the physical time of integration of the Hamiltonian dynamics by the Verlet integrator, and the strength of the damping. As discussed in <cit.>, rescaling the process so that L=1 (see Section <ref>), 1/ is the time needed for the process to forget its initial velocity, and is thus the typical distance covered by one of its coordinates in a single ballistic run in flat parts of the space. §.§ Main result We start by stating additional assumptions regarding π and U for our main results. First, we assume that π satisfies a log-Sobolev inequality with constant ≥ 0, i.e., for any ν∈(), ℋ(ν|π) ⩽ℐ(ν|π) . This holds for instance if U is uniformly convex outside some compact set. Various estimates are available for , e.g. 
in mean-field, convex or low-temperature cases, which can grasp more specific information on the target π that uniform bounds on the curvature x-y U(x)- U(y) /|x-y|^2 that are used in direct coupling methods. Notably, in non-convex cases, for a fixed dimension d, up to polynomial terms in the other parameters, the log-Sobolev constant is of order ^c_* where c_* is the so-called critical height of U (see <cit.> and references within). Part of our analysis consists in controlling some numerical errors of the leapfrog integrator, i.e., for example differences of the Hamiltonian function evaluated at the dynamics at time 0 and s >0 starting from the k-th iterates of the gHMC chain. To this end, we have to suppose additional regularity conditions on U. More specifically, we require uniform bounds on the third and fourth derivatives of U which is stated as follows: There exist two norms _3 and _4 on ^d such that for all x,y,z∈^d, ^2 U(x+y) - ^2 U(x) z ⩽_3(y)_3(z) U(x+y) - U(x) - 1/2^2 U(x)+^2 U(x+y) y ⩽_4^3(y) . Using a third order Taylor expansion of U, (<ref>) is equivalent to a uniform bound on the fourth derivative of U (see Proposition <ref> below). <Ref> may be relaxed, i.e, if only bounded third derivative for U, our main results still hold but with worse dependencies with respect to the step size, see Remark <ref> below. Bounding the left-hand sides in (<ref>) and (<ref>) with respect to the Euclidean distance on ^d may lead to sub-optimal dependencies in the dimension d in <Ref> below (for instance when U is separable, see Theorem <ref> below), which is why we allow in <Ref> some flexibility in the choice of the norms. As we will see, the numerical errors that we have to control at a point (x,v)∈^2d can be bounded by the function 𝐌 (x,v) = L^2 |v|^2 + L^2 | U(x)|^2 + _3^4(v) + _3^4( U(x) ) + _4^6(v) + _4^6( U(x) ) . In order to control the expectation of these errors, it thus remains to establish some uniform moment bounds for the Markov kernel _. These estimates can typically be established under a Lyapunov condition, but for clarity we postpone this analysis (see Theorem <ref> below) and, for now, we state it as the next assumption. There exist ρ,C_1,C_2> 0 and : →_+ such that for all (x,v) ∈^2d, k∈{0,…,K} and n∈, _^n _η_δ^k (x,v) ≤ C_1 ^-ρ n (x,v) + C_2 , where is defined in (<ref>). In practice, this can be established by designing a suitable Lyapunov function such that 𝐌⩽ C_1 and _⩽^-ρ + C_2' , for some C_1,C_2',ρ>0. As we will see in Theorem <ref>, C_1,C_2,ρ typically only depends on the parameters through defined in (<ref>) (in particular they are uniform over all sufficiently small step-size δ). For the continuous-time kinetic Langevin diffusion and its discretizations, such Lyapunov functions have been designed in <cit.> under coercitivity conditions on U. We show in Appendix <ref> that appropriate modifications of these Lyapunov functions can be used to establish a drift condition (<ref>) for _. We can now state our main result. For the sake of clarity, we only consider the case where L=1 in <Ref>, the general case being obtained by rescaling as detailed at the end of this section, before Section <ref>. Assume <Ref>-<Ref> with L=1 and let =(K,δ,η), K ∈, δ >0 and η∈0,1 such that <Ref> holds and =Kδ⩽ 1/10. Furthermore, assume that π satisfies the log-Sobolev inequality (<ref>). Consider an initial condition ν_0 such that ℐ( ν_0|μ) < +∞ and ∫_^2dz^p ν_0(z) < for any p >0. 
Then, for any n∈, ℋν_0 ^n | μ⩽ (1+ κ)^-n{ℋ(ν_0|μ) + 2a ℐ(ν_0|μ)} + δ^4 M { nθ^n C_1ν_0( )+ C_2 ( κ^-1 +) } , where θ= max(^-ρ,(1+κ)^-1/) and a = /7+3(+3)^2 , κ = a/3max( ,1)+6 a , M = 5 + 1/3a . The right hand side in the first line of (<ref>) corresponds to the long-time convergence of the idealized HMC chain <cit.> (and goes to 0 as n→∞). The Fisher term ℐ(ν_0|μ) appearing there is due to the fact Theorem <ref> is proven using a modified entropy (defined in (<ref>)) instead of ℋ. The second line of (<ref>), of order δ^4 corresponds to the numerical error due to the Verlet integration which is decomposed into two terms. The first term reflects the fact that the initial condition is not invariant for _ and highlights the exponential forgetting of this initial condition, trough ν_0(), since θ<1. As for the second term involving C_2, it accounts for the numerical error at stationarity. Since, in the case L=1, is the physical integration time of the Hamiltonian dynamics, n is the total physical time of the simulation after n iterations. The constant κ is the convergence rate per unit physical time and depends on the parameters only through , with a dependency of order min(,^-1) similarly to the continuous-time Langevin diffusion. From this observation, it follows that the convergence per gradient computation is κ/K = κδ. At the expense of more involved expressions, we improve <Ref> in <Ref> below a little. In particular, we provide a slightly sharper value of κ, which is optimal in the overdamped case, i.e., considering an initial condition of the form ν_0 = ν̂_0 ⊗N(0,), taking in the statement of <Ref>, K=1, η=0, n=2s/δ^2 for some s>0 (since δ^2/2 is the effective step-size of the Euler scheme), ε = δ, a=/2, assuming that 1=L ⩾ 1/ (which can always be enforced since L is any upper bound of ^2 U_∞, and then we can rescale the chain to get L=1) and letting δ→ 0 in (<ref>) we obtain ℋ(ν̂_0 Q_s|π) ⩽^-s/ℋ(ν̂_0|π) , where (Q_t)_t⩾ 0 is the semi-group associated to the overdamped Langevin diffusion. This shows that our result is sharp since it implies that π satistifes a log-Sobolev inequality with constant (see e.g., <cit.>). As highlighted in Remark <ref>, if in <Ref> we only assume (<ref>) but not (<ref>) (i.e., we don't assume bounded fourth derivative of U), we get a similar result, except that the term δ^4 is replaced by δ^2. Let us now provide practical conditions on U to establish <Ref> and <Ref>. To ensure clarity, we assume that d= q for some ,q⩾ 1 and, decomposing x=(x_1,…,x_) ∈^d = (^q)^ that U is of the following form: U(x) = ∑_i=1^ (x_i) + ϵ/∑_i,j=1^(x_i-x_j) , where and are potentials defined on ^q, and ϵ>0. Such mean field potentials are commonly encountered in various applications (see e.g., <cit.>). Furthermore, in line with the approach taken in <cit.>, we consider this formulation as a representative example to demonstrate the dimension dependency in weakly correlated cases. It is important to note that (<ref>) always holds with the choice =1, =U, and =0. However, as we will demonstrate, this mean-field formulation allows us to precisely determine the dependence of C_1 and C_2 in <Ref> on the dimension d, particularly by distinguishing the roles of and q. For conditions under which π satisfies a log-Sobolev inequality with a constant independent of for U given by (<ref>) (provided ϵ is sufficiently small), see <cit.>. 
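For readers who want to experiment with the scheme, one transition of the unadjusted gHMC kernel of Section <ref> (a partial velocity refreshment followed by K Verlet steps) can be sketched as follows. This is a plain NumPy illustration under the conventions stated above; grad_U is a user-supplied gradient of the potential, and no Metropolis correction is applied since the chain is unadjusted.

import numpy as np

def ghmc_step(x, v, grad_U, delta, K, eta, rng):
    """One gHMC transition: partial refreshment, then K velocity-Verlet (leapfrog) steps."""
    # partial refreshment: v <- eta * v + sqrt(1 - eta^2) * g,  g ~ N(0, I)
    v = eta * v + np.sqrt(1.0 - eta**2) * rng.standard_normal(x.shape)
    # K Verlet steps Phi_delta = (half kick, drift, half kick), with interior kicks merged
    v = v - 0.5 * delta * grad_U(x)
    for k in range(K):
        x = x + delta * v
        if k < K - 1:
            v = v - delta * grad_U(x)
        else:
            v = v - 0.5 * delta * grad_U(x)
    return x, v

def sample_ghmc(x0, grad_U, delta, K, eta, n_iter, seed=0):
    """Run n_iter gHMC transitions from x0, drawing the initial velocity from N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = rng.standard_normal(x.shape)
    traj = []
    for _ in range(n_iter):
        x, v = ghmc_step(x, v, grad_U, delta, K, eta, rng)
        traj.append(x.copy())
    return np.array(traj)

Taking K=1 with η = e^{-δγ} recovers a second-order splitting of the kinetic Langevin diffusion, while η = 0 with Kδ equal to the integration time corresponds to unadjusted HMC.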
First, as elaborated in Appendix <ref>, we can readily verify the following: Assume that d= q for some ,q⩾ 1 and that U is of the form (<ref>) where U^(q),W^(q)∈𝒞^4(^q) have uniformly bounded third and fourth derivatives. Then <Ref> holds with N_3(x) = L_3 ( ∑_i=1^ |x_i|^4)^1/4 and N_4(x) = L_4 ( ∑_i=1^ |x_i|^6)^1/6, where L_3^4 = (1+ϵ) ^3 U^(q)_∞ ^2 + 16 (ϵ+ϵ^2) ^3 W^(q)_∞^2 , L_4^6 = 1+ϵ/144^4 U^(q)_∞^2 + 4(ϵ +ϵ^2) /9 ^4 W^(q)_∞^2 q . As a consequence, assuming furthermore that U^(q)(0) = 0 = W^(q)(0), the function 𝐌 (x,v) defined in (<ref>) is bounded as 𝐌 (x,v) ⩽∑_i=1^∑_ℓ∈{2,4,6} L_ℓ^ℓ ( r_ℓ |x_i|^ℓ + |v_i|^ℓ), with r_ℓ = 2^ℓ-1^2 U^(q)_∞^ℓ + 2^3ℓϵ^ℓ^2 W^(q)_∞^ℓ for ℓ∈{2,4,6} and L_2 = L. To establish moment bounds, we consider the following assumption: The dimension is d= q for some ,q⩾ 1 and U is of the form (<ref>) where U^(q),W^(q)∈𝒞^2(^q) have uniformly bounded second derivatives and are such that ∇ U^(q)(0) = ∇ W^(q)(0) = 0. In addition, there exist >0 and ≥ 0 such that for any x_1 ∈^q, x_1 ∇ U^(q)(x_1)≥x_1^2 -. Under <Ref>, for any _0 >0 there exist ,δ,C_1,C_2,ϵ,ρ >0 which depends on ^2 U^(q)_∞, ^2 W^(q)_∞,,,_0 (and not on q,,ϵ), such that the following holds. For all =(K,δ,η) with δ∈ (0,δ], η∈ [0,1), ∈(0, ] and = _0 (with and given by (<ref>), L=^2 U_∞), assuming furthermore that ϵ⩽ϵ and setting (x,v) =∑_ℓ∈{2,4,6} |x_i|^ℓ + |v_i|^ℓ , then, for all (x,v) ∈^2d, k∈{0,…,K} and n∈, _^n _η_δ^k (x,v) ≤C_1 ^-ρ n(x,v) + C_2 q^3 . Explicit expressions for C_1,C_2,ϵ,ρ are given in Appendix <ref>, where a more detailed result, Theorem <ref>, is stated and proven. As explained in Appendix <ref>, Theorem <ref> is a straightforward corollary of Theorem <ref>. Combining Proposition <ref> with Theorem <ref>, we get that <Ref> holds with given by (<ref>) and, writing R=max{L_ℓ^ℓmax(r_ℓ,1) : ℓ∈{2,4,6}}/q, C_1 = R C_1 q , C_2 = R C_2 q^4 . Treating ^k U^(q)_∞ and ^k W^(q)_∞ for k∈{2,3,4} as constants independent from q, we get that R is independent from and q. Rescaling. As mentioned above, following <cit.>, assuming L=1 in <Ref> is not a restriction. Indeed, if Z_n=(X_n,V_n) is an unadjusted gHMC chain with potential U and parameters K,δ,η, then (√(L) X_n,V_n) is an unadjusted gHMC chain associated with the potential Ũ(x)=U(x/√(L)) (so that Ũ is 1-Lipschitz under <Ref>) and parameters K,δ√(L),η. Moreover, if π satisfies a log-Sobolev inequality with constant , then the change of variable x ↦ x/√(L) implies that the rescaled target measure π̃∝^-Ũ satisfies a log-Sobolev inequality with constant L. Similarly, using the same change of variable, the relative entropy is invariant by scaling, namely, if (X,V) has distribution ν∈𝒫() and ν̃ stands for the law of (√(L)X,V), ℋν | μ = ℋν̃| μ̃, where μ̃= π̃⊗N(0,), while the Fisher information satisfies ℐν̃| μ̃ = ∫_^2dL^-1 |_x h|^2 + |_v h|^2/hμ , where h = ν/μ. §.§ Discussion and related works Regarding the dependency in the dimension of the constants appearing in <Ref>, in view of Remark <ref>, we distinguish two cases. In the first one, which we call the weakly interacting case, we consider that C_2 and ν_0( ) in (<ref>) are 𝒪(d). This corresponds in Remark <ref> to the case where q is fixed so that d is proportional to d_0. In the second case, which we call the general case, we consider that C_2 and ν_0( ) are 𝒪(d^4). This corresponds in Remark <ref> to the case d_0=1, d=q in Remark <ref>. In both settings, we consider that L, C_1 and are constants independent from the dimension, and that ℋ(ν_0|μ) and ℐ(ν_0|μ) are 𝒪(d) (see e.g. <cit.>). 
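In the mean-field setting (<ref>), the gradient required by the sampler sketched above takes an explicit pairwise form. The following NumPy sketch, with vectorized helpers grad_Uq and grad_Wq supplied by the user (an assumption on the interface, not part of the analysis above), can be passed as grad_U and makes the quadratic-in-N cost of each gradient evaluation apparent.

import numpy as np

def grad_U_mean_field(x, grad_Uq, grad_Wq, eps):
    """Gradient of U(x) = sum_i Uq(x_i) + (eps/N) * sum_{i,j} Wq(x_i - x_j),
    with x of shape (N, q) and grad_Uq, grad_Wq mapping (..., q) -> (..., q)."""
    N = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]             # (N, N, q), diff[i, j] = x_i - x_j
    gW = grad_Wq(diff)                               # (N, N, q)
    interaction = gW.sum(axis=1) - gW.sum(axis=0)    # d/dx_k of the double sum
    return grad_Uq(x) + (eps / N) * interaction

If W^(q) is even, its gradient is odd and the two sums combine, so the interaction term reduces to 2 ∑_j ∇ W^(q)(x_k - x_j).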
In this context, Theorem <ref> gives the following complexity bounds. Let ε>0 be a given error tolerance (in relative entropy). By choosing δ = 𝒪 ( ε^1/4 d^-1) in the general case we get that ℋ(ν_0 ^n|μ) ⩽ε after a number n K = 𝒪( d ^-1/4ln (d^-1) ) of computations of the gradient U. By choosing δ = 𝒪 ( ε^1/4d^-1/4) in the weakly interacting case we get that ℋ(ν_0 ^n|μ) ⩽ε after a number n K = 𝒪( (d/ )^1/4ln (d^-1) ) of computations of the gradient U. Note that in both cases, Pinsker and Talagrand inequalities imply μ-ν_0 ^n_TV + 𝐖_2μ,ν_0 ^n = 𝒪(ε^1/2), where ·_TV and 𝐖_2 denotes the total variation distance and Wasserstein distance of order 2. In view of this, the dependence of the number of gradient evaluation on ε^1/2, nK = 𝒪 (ε^-1/4) (omitting the logarithmic term), highlights that gMHC is a second-order splitting scheme and is consistent with existing weak error estimates; see <cit.> and references therein. Finally, for a fixed set of parameters , if ν_0 ^n converges weakly to a stationary distribution μ_, <Ref> yields to μ-μ__TV + 𝐖_2μ,μ_ = 𝒪(δ^2). Comparison with previous works. There is an important literature on the convergence of Markov chain Monte Carlo algorithms based on discretization of continuous-time dynamics <cit.>. Making a complete survey and a detailed description of these works is beyond the scope of the paper. Here, we focus on works establishing quantitative bounds for gHMC or some of its particular instances, and for discretization of the kinetic Langevin diffusion in entropy. * Entropy methods for discretization of the kinetic Langevin diffusion. An adaptation of the modified entropy strategy from <cit.> for continuous-time processes have been conducted by <cit.> for a slight modification of the Euler scheme of the kinetic Langevin process. More precisely, the scheme considered in <cit.> (introduced in <cit.> in the machine learning community, and referred to as the Stochastic exponential Euler scheme in <cit.> where earlier apparitions in the physics literature are mentioned <cit.>) is given by the solution of _t^δ = _t^δ t , _t^δ = - U(_δ⌈ t/δ⌉^δ) t - γ_t^δ t + √(2γ) B_t , which is similar to (<ref>) except that the force - U is constant within each time interval [kδ,(k+1)δ), k∈. Hence, in <cit.>, the approximate entropy dissipation is established through a continuous-time derivation. By contrast, we follow the discrete time computations of <cit.>, which is concerned with idealized gHMC where the exact Hamiltonian dynamics is performed instead of the Verlet integration (in particular we are not using the underdamped Langevin diffusion process as a continuous-time reference, since we also cover unadjusted HMC). Moreover, in terms of dependency with respect to the stepsize δ, the schemes of <cit.> are only first order (see the discussion in Section 3.3 of <cit.>, in particular Equation (14)), and we get an improvement from √(ε) to ε^1/4 in the complexity. Concerning the dimension d, the stepsize in <cit.> scales as d^-1/2L_H^-1, where L_H is the Lipschitz constant of ^2 U in the Frobenius norm, i.e., U is supposed to satisfy ^2 U(x)-^2 U(y)_F ⩽ L_H|x-y| for all x,y∈^d. Assuming that L_H is independent from d thus corresponds to the weakly interacting case, while the assumption that the derivatives of U are bounded independently in d (in terms of the Euclidean norm) leads to L_H of order √(d). We get an improvement from d^-1/2 to d^-1/4 in the first case and the same dependence d^-1 in the second. 
Notice that, in order to take advantage of an higher order numerical scheme, we have to assume that U has bounded first four derivatives, in contrast to <cit.> which only assumes that U has bounded first three derivatives. As discussed in Remark <ref>, if we consider this second conditions on U, a result similar to <cit.> and with the same dependence on ε and d, can be derived. The main takeaway of this observation is that gHMC can achieve better accuracy order if U is sufficiently smooth but do not yield worst complexity than the discretization (<ref>) obtained in <cit.> under the same conditions. Finally, up to our knowledge, our result is the first one based on entropy methods for unadjusted HMC, and for splitting discretization schemes of the underdamped Langevin diffusion. * HMC. Analyses of HMC for strongly convex potentials U have been conducted in <cit.>. For non-convex potentials, <cit.> have established several non-asymptotic convergence bounds for position HMC (i.e., η=0), both in Wasserstein 1 distance and total variation norm. In the mean-field case in particular, <cit.> shows that after 𝒪(√(/ε')) gradient evaluations, then the law of the chain is at distance at most ε' to π in term of the 𝐖_ℓ_1 Wasserstein distance associated to the ℓ_1 norm x-y_ℓ_1 = ∑_i=1^|x_i-y_i|. To compare this result with ours, we can bound this distance by √() times the standard 𝐖_2 Wasserstein distance (associated with the Euclidean distance), whose square is bounded by the relative entropy times the log-Sobolev constant of π (which, from the results of <cit.>, is independent from in the weakly interacting case where ϵ is small enough, which is the regime where this complexity is obtained in <cit.>). In other words, we should take ε' = √(ε) in the results of <cit.> for a fair comparison. In that case, the number of gradient computations required in <cit.> is 𝒪((/ε)^1/4) in the weakly interacting case, which is the same as our results. The approach in <cit.> is based on reflection couplings and concave modifications of the distance (building upon the method developed for continuous-time diffusion processes, particularly <cit.> for the overdamped Langevin process and <cit.> for the underdamped one). In particular, it doesn't provide a result for the relative entropy or the 𝐖_2 Wasserstein distance, and the bound on the convergence rate is expressed in terms of global bounds on x-y U(x)- U(y) /|x-y|^2. * Splitting schemes for kinetic Langevin diffusion for strongly convex potential U. Splitting schemes for Langevin diffusion with strongly convex potential have been investigated in <cit.>. Their conclusion on the complexity for the corresponding gHMC method is similar to ours. Finally, the analysis of the general gHMC methodology have been conducted in <cit.> under the same strong convexity condition. Once again, the same conclusions can be drawn As a conclusion, up to our knowledge, none of existing works cover our results, which give simple higher order explicit estimates in relative entropy uniformly over the class of unadjusted gHMC samplers and for non-convex potentials U. § PROOF OF THEOREM <REF> §.§ A more detailed result: : modified entropy approximate dissipation The proof of Theorem <ref> will be an easy consequence of a more detailed result that we now state. For ease of presentation, we first introduce some notations. Relative density and modified entropy. 
In the rest of the present Section <ref>, for an initial distribution ν_0 ∈𝒫() which satisfies the conditions in <Ref>, parameters = (K,δ,η) and n ∈, we denote by h_n = ν_0 _^n / μ, and write (h_n) = (ν_0 _^n | μ ) , (h_n) = (ν_0 _^n | μ) , where and denotes the relative entropy and Fisher information with respect to μ respectively. More generally, we denote by (h) = (ν | μ) and (h) = (ν |μ) for any positive density h : → with respect to μ and ν stands for the distribution with density h. In addition, for ∈^2d× 2d, or possibly a matrix field z ↦(z) on ^2d, define ℐ_(ν | μ) = ∫_^2d| ∇lnν/μ|^2 ν if ν≪μ + ∞ otherwise . Note that if ν≪μ and h= ν / μ, ℐ_(ν | μ) = ∫_^2d|∇ h|^2/hμ. Following the convention above, we write ℐ_(h) = ℐ_( ν |μ), for any positive density h : → with respect to μ with ν the distribution with density h. The main result of this section, from which we deduce <Ref>, establishes the dissipation in time of a suitable modified entropy up to some numerical error. More precisely, we consider for ν∈𝒫() and ã>0, ℒ_ã(ν |μ) = (h) + ãℐ_(h) , with = 1/√(2)[ 1 1; 1 1 ] . In the case ν≪μ with h = ν/μ, we have ℒ_ã(ν |μ) = ∫_^2d h ln (h) μ+ ã∫_^2d|_x h + _v h |^2/hμ , and following the same convention as previously, we write ℒ(h) = ℒ(ν | μ), for any positive density h : → with respect to μ with ν the distribution with density h. With this choice, ℒ_a involves a mixed derivative term _x h_v h in the Fisher part, which has been crucially used in hypocoercive studies of the continuous-time kinetic Langevin process <cit.>. More specifically, (<ref>) is exactly the modified entropy introduced in continuous-time settings in <cit.>. We are now ready to state the main result of this section. Assume <Ref>-<Ref> with L=1 and let =(K,δ,η), K ∈, δ >0 and η∈0,1 such that <Ref> holds and =Kδ⩽ 1/10. Furthermore, assume that π satisfies the log-Sobolev inequality (<ref>). Consider an initial condition ν_0 such that ℐ( ν_0|μ) < +∞ and ∫_^2dz^p ν_0(z) < for any p >0. Then, for any n∈, 1+ κℒ_a(h_n+1) ⩽ℒ_a(h_n) + δ^4 M C_1 ^-ρ ntν_0 + C_2 , where a,κ,M are given in (<ref>). Alternatively, define m_1 = 2 - 3 , m_2 = /η + 2 + 4 m_3 = (1+η)/η^21/2 +1 - 2 - 4 λ = m_1+m_3/2 - √(m_1+m_3/2 ^2 - m_1m_3 + m_2^2) if η >0 2 - 3 - 2/(1+2) if η =0. Then, for any ã >0 such that λ >0, for any ε∈(0,1) and n ∈, 1+ ℒ_(h_n+1) ⩽ℒ_(h_n) + δ^4 C_1 ^-ρ ntν_0 + C_2 , where = λ (1-ε)(1-3)/max(L ,1)+2 , = 1/ελ_-4/15 + 24 + ã (33 λ +4) +λ/240 . This result is proven below. First, let us deduce Theorem <ref> from Theorem <ref>: An easy induction and <Ref> imply that for any n⩾ 1, ℒ_a(h_n) ⩽ 1+ κ^-nℒ_a(h_0) + δ^4 M ∑_k=1^n 1+ κ^-kC_1^-ρ (n-k) ν_0 + C_2. Then, the proof is completed since ℋ(h_n) ⩽ℒ_a(h_n) , ℒ_a(h_0) ⩽ℋ(h_0) + 2a ℐ(h_0) , ∑_k=1^n (1+κ)^-k⩽1+κ/κ , and ∑_k=1^n (1+κ)^-k^-ρ(n-k) ⩽ n max (1+κ)^-1/,^-ρ^n . Before providing the proof of <Ref>, let us recall the key steps of the proof of the similar result established in <cit.> for the continuous-time kinetic Langevin process, as the structure of the proof is similar. Denoting by (ĥ_t)_t⩾ 0 the analogous of (h_n)_n∈ in this continuous-time context, the first observation is that d/dtℋ(ĥ_t) = - ∫_^2d|_v ĥ_t|^2/ĥ_tμ . This entropy dissipation is not sufficient to conclude to an exponential decay, as it can be zero for non-constant densities ĥ_t (and thus in particular it cannot be upper bounded by -κℋ(ĥ_t) for some κ>0). 
A key observation is then that, when ^2 U is bounded, d/dt( ∫_^2d|_x ĥ_t + _v ĥ_t |^2/ĥ_tμ) ⩽ - c ∫_^2d|_x ĥ_t|^2/ĥ_tμ + C ∫_^2d|_v ĥ_t|^2/ĥ_tμ for some constants c,C>0. The last term can be controlled thanks to the entropy dissipation (<ref>). As a consequence, for a small enough, considering ℒ_a given by (<ref>), d/dtℒ_a(ĥ_t) ⩽ - a c ℐ(ĥ_t) , and the conclusion follows from ℒ_a(ĥ_t) ⩽ (max(C_ LS,1)+2a) ℐ(ĥ_t) , where we used that μ satisfies a log-Sobolev inequality with constant max(C_LS,1) as the tensor product of π and N(0,). The proof of Theorem <ref> is fundamentally based on the same ingredients. Instead of the time derivatives of (<ref>) and (<ref>), we have to estimate the discrete-time evolution of the two parts of the modified entropy along the two steps _δ and _η of the chain. The technical details of this analysis are postponed to <Ref>, where this evolution is shown to involve, on the one hand, the Jacobian matrix of the Verlet map and, on the other hand, some numerical error terms (due to the fact μ is not invariant by _δ). The study of these terms is performed respectively in <Ref>. Provided these postponed technical results, we can now prove <Ref>. Let ã >0 such that λ >0 and n ∈. In this proof, we consider the matrix given in (<ref>). We only give the proof for η>0: the case η=0 is obtained by seeing that all the computations are still valid if η^-1 is replaced by η̃^-1 for an arbitrary η̃∈(0,1), using that ∇_v _η = 0 in that case (as in the proof of <cit.> in the idealized case, to which we refer for details). The evolution of h_n is given for any bounded g by ∫_^2d h_n+1 g μ = ∫_^2d h_n g μ , in other words h_n+1= ^* h_n where, given a bounded operator in ^2(μ), ^* stands for its adjoint operator. We now identify the adjoint of the operators constituting . A simple calculation shows that the damping step is reversible in the probabilistic sense, namely _η^* = _η. Since the Verlet map Φ_δ^K satisfies |det∇Φ_δ^K | ≡ 1 and is and reversible in the physicist sense[This is also called R-reversibility in backward analysis] (see <cit.> and (<ref>) respectively), we get ∫_^2d h _δ^K g μ = ∫_^2d hΦ^-K_δ(z) g (z) μΦ^-K_δ(z) z = ∫_^2d g _δ^K ^*h μ with _δ^K ^* h = h ∘Φ^-K_δ^□ H , □ H = H- H∘Φ^-K_δ . In addition using again (<ref>), we get that Φ^-K_δ = (Φ_δ^-1)^∘ K with Φ_δ^-1(x,v) = x- δ v - δ^2/2 U(x) , v + δ/2 U(x) + U x- δ v - δ^2/2 U(x) . From these observations and since ^* = (_δ^K)^*_η^*, we consider the decomposition ℒ_(h_n+1) - ℒ_(h_n) = ℒ_ (^* h_n)-ℒ_(h_n) = _1+_2+_3+_4 , where _1 = ℋ (_δ^K)^*_η h_n -ℋ (_η h_n) , _2= ℋ (_η h_n) - ℋ ( h_n) _3 = ℐ_ (_δ^K)^*_η h_n, _4=- ℐ_(h_n) . We bound these four terms as follows. By Corollary <ref>, for any ε_1>0, _1 ⩽ 4 ε_1 K δℐ_η h_n + E_ε_1 W_n where for conciseness we write W_n = C_1 ^-ρ nν_0() + C_2 (recall the notations from <Ref>), and by Corollaries <ref> and <ref> E_ε_1 = ∑_j=0^K-16δ^7 j^2 ε_1 + ε_1 ∫_0^δ2 s^6 s + 1/30 ε_1δ^5 ⩽2 _1δ^7 K^3 + 2/7ε_1 K δ^7 + 1/30 ε_1 K δ^5 ⩽1/30 ( ε_1 + ε_1^-1) K δ^5 , where we used that Kδ⩽ 1/10. By Lemma <ref>, as η>0, _2≤ -1/2 (η^-2-1) ℐ__v(_η h_n) , _4≤-ℐ__η^-1(_η h_n) , where _v = [ 0 0; 0 1 ] , _η^-1 = [ 1 0; 0 1/η ] . Finally, using Lemma <ref> and Corollary <ref> and setting Ψ =(Φ_δ^-K)∘Φ_δ^K, for any ε_2>0, _3 ≤ (1+ε_2)ℐ_Ψ(_η h_n) + 2 ( 1+ε_2^-1) ||^2 K^2 δ^6 W_n ⩽ℐ_Ψ(_η h_n) + ε_2||^2 Ψ_∞ ^2 ℐ(_η h_n) +2 ( 1+ε_2^-1) ||^2 K^2 δ^6 W_n ⩽ℐ_Ψ(_η h_n) + 3 ε_2 ℐ(_η h_n) + 4 ( 1+ε_2^-1) K^2 δ^6 W_n , where we used <Ref>-Equation (<ref>) (with = Kδ⩽ 1/10) and that ||=√(2). 
Therefore, by combining the inequalities above, we get ℒ_(h_n+1) - ℒ_(h_n) ⩽ -2^-1(η^-2-1) ℐ__v(_η h_n) + ℐ_Ψ(_η h_n) - ℐ__η^-1(_η h_n) + 4 ε_1 K δ + 3 ε_2 ℐ_η h_n + ( ε_1 + ε_1^-1) K δ^5 /30 + 4 ( 1+ε_2^-1) K^2 δ^6 W_n . The sum of the terms in the first line of the right hand side is equal to - ℐ_^1/2(_η h_n)=- ∫_^2d_η h_n_η h_n/_η h_nμ, with, for z∈^2d, (z) = (2)^-1 (η^-2-1) _v^⊤_v - (Ψ(z))^⊤Ψ(z) + ( _η^-1)^⊤_η^-1 where Ψ is defined in (<ref>). The key of the proof is now to establish a positive lower bound on uniformly in z, for a small enough, which is the analogous in our context of (<ref>). We introduce the d× d block decompositions Ψ(z)=[ ψ_11 ψ_12; ψ_21 ψ_22 ] and (z)=[ s_11 s_12; s_21 s_22 ] , so that s_11 = -(ψ_11+ψ_21)^⊤(ψ_11+ψ_21), s_12= s_21 = η^-1 - (ψ_11+ψ_21)^⊤(ψ_12+ψ_22), s_22 = η^-2-1/2 + η^-2 - (ψ_22+ψ_12)^⊤(ψ_22+ψ_12) , Using Lemma <ref>, for any x∈^d, | (ψ_11+ψ_21)x| ⩽| (1-) - (^2/2) α x | + (2 ^3/3) |x|≤ 1 - + ^2 |x| | (ψ_22+ψ_12)x| ⩽| + β - (^2/2) ζ x | + (2 ^3/3) |x|≤ 1 + + ^2 |x| , and similarly | (ψ_11+ψ_21)x - x | ⩽ + ^2 |x| , | (ψ_22+ψ_12) x - x | ⩽ + ^2 |x| , from which we get for any x,y∈^d x s_11 x ⩾ 1 - 1 - + ^2 ^2 |x|^2 ⩾ 2 - 3 ^2 |x|^2 , y s_22 y ⩾η^-2-1/2 + η^-2 - 1 + + ^2 ^2 |y|^2 ⩾η^-2-1/2 + η^-2 - 1 -2 - 4 ^2 |y|^2 , x s_12 y = η^-1-1 x · y + x - (ψ_11+ψ_21)^⊤(ψ_12+ψ_22) y = η^-1-1 x · y + - (ψ_11+ψ_21) x (ψ_12+ψ_22) y + x- (ψ_12+ψ_22) y ⩾ - η^-1-1 + 2+ + ^2 + ^2 |x||y| ⩾ - η^-1-1 + 2 + 4^2 |x||y| . We have thus obtained that for any x,y∈^d and z∈^2d, [ x; y ]^⊤(z) [ x; y ]⩾λ_- |x|^2 +|y|^2, where λ_- is the smallest eigenvalue of the symmetric 2× 2 matrix [ m_1 m_2; m_2 m_3 ] := 1/[ 2 - 3 ^2 - η^-1-1 + 2 + 4^2; - η^-1-1 + 2 + 4^2 η^-2-1/2 + η^-2 - 1 -2 - 4 ^2 ] = _η^-1[ 2 - 3 - + η(2 + 4); - + η( 2 + 4 ) (1+η) 1/2+1 - η^2 (2 + 4) ]_η^-1 , recalling the notation = (1-η)/. Now, the sharp expressions stated in <Ref> follows from λ_- = m_1+m_3/2 - √(m_1+m_3/2 ^2 - m_1m_3 + m_2^2), which is positive under the condition stated in the second part of <Ref>. Alternatively, to get the simpler expressions (<ref>), we proceed as follows: using that η⩽ 1 and ⩽ 1/10, we bound m_1 ⩾17/10 , |m_2| ⩽ 1/η + 12/5 , m_3 ⩾ 1/η^2/2- 12/5, to get that [ x; y ]^⊤[ m_1 m_2; m_2 m_3 ][ x; y ] = m_1 |x|^2 + 2 m_2 |x||y| + m_3 |y|^2 ⩾ 1/2 m_1 |x|^2 + m_3 - 2m_2^2/m_1 |y|^2 ⩾ 17/20 |x|^2 + |y|^2 , i.e., λ_- ⩾ 17 /20, if we choose a such that /2- 12/5⩾17/20 + 20/17 + 12/5^2 , which holds in particular with the choice of a in (<ref>). In both cases, at this point, plugging this inequality in (<ref>), we have determined λ_->0 such that ℐ_^1/2(_η h_n) ⩽ - λ_- ℐ(_η h_n) . Combining this result with (<ref>) yields ℒ_(h_n+1) - ℒ_(h_n) ⩽ -ρ ' ℐ_η h_n + ( ε_1 +ε_1^-1) K δ^5/30+ 4 ( 1+ε_2^-1) K^2 δ^6 W_n , for ρ' = λ_- - 4 ε_1 - 3 ε_2 . Now, applying Lemma <ref> with Ψ^-1∘Φ_δ^-K (see Remark <ref>) we get for any ε_3>0 (1+ε_3) ℐ (_η h_n ) ≥ℐ_Ψ^-1∘Φ_δ^-K^* h_n - (1+ε_3^-1)Ψ^-1∘Φ_δ^-K_∞^2 Er_1^δ,K(_η h_n) . Thanks to Lemma <ref>, and more specifically to (<ref>), for any z,u∈^2d, |Ψ^-1(z) u|^2 ⩾ 1+ + ^2/2 + ^3/3^-2 |u|^2 ⩾ (1+5/2)^-1|u|^2 ⩾ 1 - 5 /2 |u|^2 , so that ℐ_Ψ^-1∘Φ_δ^-K^* h_n ⩾ 1 - 5 /2ℐ^* h_n . Using Lemma <ref> again, we have for any z,u∈^2d, |u| ⩽ |Ψ(z) u| + |Ψ(z) u - u| ⩽ |Ψ(z) u| + + ^22+ ^33 |u| , and thus Ψ^-1∘Φ_δ^-K_∞⩽ 2. Plugging these bounds in (<ref>) yields ℒ_(h_n+1) - ℒ_(h_n) ⩽ -ρ '/1+ε_3 1 - 5 /2ℐ h_n+1 - 4(1+ε_3^-1) Er_1^δ,K(_η h_n) + (ε_1 + ε_1^-1 K δ^5/30 + 4 ( 1+ε_2^-1) K^2 δ^6 W_n . 
Choosing ε_3 = /4, using Corollary <ref> to bound the remaining numerical error term and setting M = 8ρ' 1+ 4/ + (ε_1 + ε_1^-1)/30 + 4 ( 1+ε_2^-1) , we finally get that ℒ_(h_n+1) - ℒ_(h_n) ⩽ -ρ'(1-3) ℐ h_n+1 + δ^4 M W_n ⩽ -ρ'(1-3)/max(1,)+2ℒ_ h_n+1 + δ^4 M W_n , where we used the log-Sobolev inequality of μ (obtained by tensorization from the log-Sobolev inequalities satisfied by π and N(0,)), that ||^2= 2 and that ρ'(1-3)>0. Finally, to obtain (<ref>) with the simpler expressions of κ and M in (<ref>), using that λ_- ⩾ 17 /20 with the choice of ã = a in (<ref>), we use a ≤ 1/10 and choose ε_1 = a/10, ε_2 = /10 which implies that ρ' = a/2 since ρ' is given in (<ref>) and λ_- ≥ 17/20 by (<ref>) (to see that a ≤ 1/10, we can for instance distinguish the case γ≤ 3, for which a ≤γ/34, and the case γ≥ 3, for which a ≤ (γ+3)/[3(γ+3)^2]). In the expression of M we simply bound ρ' ⩽ 1/200, a⩽ 1/10, ⩽ 1/10. To get the sharper inequality (<ref>), we keep a free parameter ε∈(0,1) and choose ε_1 = ãελ_- 8, ε_2 = ελ_-6 which implies that ρ' = ãλ_- (1-ε) and M = 8ãλ_- + 4 + aλ_-/240 + 4/15aλ_- + 4 ( +6/ελ_-) , and using that ⩽ 1/10 allows to conclude. In the next sections, we derive the technical results that we used in the proof of <Ref>. In Section <ref>, we bound ℒ_(h_n+1) in terms of h_n, the Jacobian matrix of the (reverse) Verlet map Φ_δ^-K and some numerical error terms. The latters are studied respectively in Sections <ref> and <ref>. §.§ Intermediary lemmas In this section, we compute the evolution of the relative entropy and Fisher-like terms (<ref>) along the damping and Verlet steps. The velocity randomization step is the same as in <cit.>, where the following is established: Recalling from (<ref>) the notations _v = [ 0 0; 0 1 ] , _η = [ 1 0; 0 η ] , then, for all η∈(0,1) and all smooth positive h, ℋ (_η h )-ℋ (h )≤ -η^-2-1/2ℐ__v (_η h). and, for any matrix , ℐ_ _η^-1 ( h )≤ℐ_ (h). The first part is <cit.>. The second point follows from Jensen inequality and convexity of (t,x)↦x^2/t on ^*_+ ×^d by <cit.>, which imply ∫_^2d| h|^2 /hμ =∫_^2d_η| h|^2 /hμ⩾∫_^2d|_η h|^2 /_η hμ = ∫_^2d| _η^-1_η h|^2 /_η hμ . For all matrix , all smooth positive h and all ε>0, ℐ_ (_δ^K)^* h ≤ (1+ε) ℐ_Ψ (h )+(1+ε^-1)||^2 Er_1^δ,K(h) with Ψ =(Φ_δ^-K)∘Φ_δ^K and the error term Er_1^δ,K(h ) = ∫_^2d |□ H∘Φ_δ^K |^2 h μ . For all ε>0, ∫_^2d| (_δ^K)^* h|^2/(_δ^K)^* hμ = ∫_^2d| h∘Φ_δ^-K^□ H |^2/h∘Φ_δ^-K^□ Hμ = ∫_^2d| (Φ_δ^-K)( h)∘Φ_δ^-K- h ∘Φ_δ^-K□ H^□ H |^2/h∘Φ_δ^-K^□ Hμ = ∫_^2d| (Φ_δ^-K)( h)∘Φ_δ^-K - h ∘Φ_δ^-K□ H |^2/h∘Φ_δ^-K^□ Hμ = ∫_^2d| (Φ_δ^-K )∘Φ_δ^K h - h (□ H)∘Φ_δ^K |^2/h μ ⩽ (1+ε) ∫_^2d|Ψ h |^2/h μ + 1 + 1/ε||^2 ∫_^2d |□ H∘Φ_δ^K |^2 h μ , with Ψ = (Φ_δ^-K )∘Φ_δ^K, where we used the change of variable Φ_δ^-K in the penultimate line. If the constant matrix is replaced by a matrix field z↦(z), the proof also works and yields ℐ_ (_δ^K)^* h ≤ (1+ε) ℐ_(∘Φ_δ^K) Ψ (h )+(1+ε^-1)_∞^2 Er_1^δ,K(h) with _∞ = sup_z∈^2d|(z)|. For all smooth positive h and all ε>0, ℋ (_δ^K)^* h -ℋ (h )≤∑_j=0^K-1ε/2∫_0^δℐ (_δ^j _s^1)^* h ds + 1/2εEr_2 (_δ^j)^* h with the error term Er_2( h) = ∫_0^δ∫_^2d|∂_sΦ_s -F_H∘Φ_s|^2 h dμ ds , where F_H(x,v)=[ v; - U(x) ]. Due to the decomposition ℋ (_δ^K)^* h -ℋ (h ) = ∑_j=0^K-1ℋ (_δ^j+1)^* h -ℋ (_δ^j)^* h , we simply have to establish the result for K=1. For z∈^2d and t∈[0,δ], let z_t = Φ_t (z). For f_0 an initial (smooth positive) distribution, denote by f_t the law of z_t when z∼ f_0 and h_t=f_t/μ. 
By a change a variable we see that for any smooth function g on ^2d, ∫_^2d g(z) f_t(z) z = ∫_^2d g z_t f_0(z) z = ∫_^2d g(z) f_0 Φ_t^-1(z) z , i.e. f_t(z) = f_0 Φ_t^-1(z), and ∫_^2d g(z) ∂_t f_t(z) = ∂_t ∫_^2d g z_t f_0(z) z = ∫_^2d j_t(z)·∇ g(z) f_t(z) z with j_t = ∂_t Φ_t∘Φ_t^-1. Then, ∫_^2d f_δln h_δ - ∫_^2d f_0ln h_0 = ∫_0^δ∂_s ∫_^2d f_sln h_s s , and ∂_s ∫_^2d f_sln h_s = ∫_^2d∂_s h_s μ + ∫_^2dln h_s ∂_s f_s . The first term of the right hand side is equal to ∂_s ∫ h_s μ = ∂_s(1)=0. From the previous computations, ∂_s ∫_^2d f_sln h_s = ∫_^2d j_s·ln h_s f_s . Let F_H(x,v) = [ v; - U(x) ] , so that the Hamiltonian flow reads ż = F_H(z). The latter leaves the Hamiltonian H (hence μ) invariant, in other words ∫_^2d F_H ·ln h_s f_s = ∫_^2d F_H · h_s μ = - ∫_^2d h_s F_H ·μ = 0 . Thus, we can add this term to the previous equality to get, for any ε>0, ∂_s ∫_^2d f_sln h_s = ∫_^2d j_s(z) - F_H(z)·ln h_s(z) f_s(z) z ⩽ε/2∫_^2d| j_s(z) - F_H(z) |^2 f_s(z) z + 1/2ε∫_^2d|ln h_s(z)|^2 f_s(z) , which gives the announced result once integrated over s∈[0,δ]. At this stage, gathering Lemmas <ref>, <ref> and <ref> yields a bound on ℒ_a(h_n+1) involving h_n, the matrix field Ψ of Lemma <ref> and the numerical error terms of Lemmas <ref> and <ref>. It remains to understand these parts. We give some estimates on Ψ in Section <ref> and analyse the numerical errors in Section <ref>. §.§ Jacobian matrix of the Verlet integrator As mentioned above, this section focuses on the matrix field Ψ = (Φ_δ^-K)∘Φ^K_δ appearing in Lemma <ref>. The following result is the analogous for the Verlet scheme of the first part of <cit.> for the Hamiltonian dynamics. Under <Ref> with L=1, for all z∈^2d, there exist d× d matrices α,β,ζ with norms less than 1 such that, recalling the notation =Kδ, | Ψ (z) - [ - ^2/2α β; - - ^2/2ζ ]|⩽1/3^3 . In particular, |Ψ(z)| ⩽ 1+ + 1/2^2 + 1/3^3 . Denote by ℳ_1 the set of d× d matrices with operator norm bounded by 1. Fix z∈^2d. For k⩾ 1, set z_k = Φ_δ^-1(z_k-1), with z_0=Φ^K_δ(z). For k⩾ 0, set Q_k = ^2 U (x_k ) (where z_k = (x_k,v_k)). By assumption, Q_k∈ℳ_1 for all k⩾ 0. Recall that for F:^d→^d we use the convention F = (∂_i F_j)_1⩽ i,j ⩽ d where i stands for the row and j for the column. Hence, from (<ref>), for all k⩾ 0, (Φ_δ^-1)(z_k) = [ - δ^2/2Q_k δ/2 Q_k + - δ^2/2 Q_k) Q_k+1; - δ - δ^2/2Q_k+1 ] . In particular, | (Φ_δ^-1)(z_k) - [ - δ^2/2Q_k δ/2 Q_k + Q_k+1; - δ - δ^2/2Q_k+1 ]| ⩽ δ^3/4 . Let us determine for all k∈ 0,K, by induction, a constant C_k⩾ 0 and matrices α_k,β_k,ζ_k ∈ℳ_1 such that |Φ_δ^-k(z_0) - [ - (δ k)^2/2α_k δ k β_k; -δ k -(δ k)^2/2ζ_k ]| ⩽ C_k . For k=1, this is given by (<ref>) (applied at k=0) with C_1=δ^3 /4 (also the case k=0 is trivial with C_0=0). Suppose that the result is true for some k⩾ 1. In particular, using that δ k ⩽ 1/10, |Φ^-k_δ(z_0) | ⩽ | [ - (δ k)^2/2α_k 0; 0 -(δ k)^2/2ζ_k ]| + | [ 0 δ kβ_k; -δ k 0 ]|+C_k ⩽ 1 + 21/20δ k +C_k . Then, Φ^-k-1_δ (z_0) = Φ_δ^-1(z_k) Φ^-k_δ(z_0). We decompose [ - δ^2/2Q_k δ Q_k + Q_k+1; - δ - δ^2/2Q_k+1 ][ - (δ k)^2/2α_k δ k β_k; -δ k -(δ k)^2/2ζ_k ] = [ - δ^2 (k+1)^2/2α_k+1 δ (k+1) β_k+1; -δ (k+1) -δ^2 (k+1)^2/2ζ_k+1 ] + ℜ with α_k+1= 1/(k+1)^2 Q_k + k^2 α_k + k(Q_k+Q_k+1) β_k+1= 1/2(k+1) Q_k+Q_k+1 + 2kβ_k ζ_k+1= 1/(k+1)^2 2kβ_k + Q_k+1 + k^2 ζ_k which are all three in ℳ_1, and a remainder ℜ=[ δ^4 k^2/4 Q_kα_k -δ^3 k/4 2 Q_kβ_k+k(Q_k+Q_k+1) ζ_k; δ^3k/2 Q_k+1+ k α_k δ^4 k^2/4 Q_k+1ζ_k ] . 
Using that δ⩽ 1/10, |ℜ| = δ^4 k^2/4|[ Q_kα_k 0; 0 Q_k+1ζ_k ]| + δ^3 k/2|[ 0 - Q_kβ_k-k/2(Q_k+Q_k+1) ζ_k; Q_k+1- k α_k 0 ]| ⩽ δ^4 k^2/4 + δ^3 k/2(k+1) ⩽ 21/40δ^3 k(k+1) . We get that (⋆) :=|Φ_δ^-k-1(z_0) - [ - δ^2 (k+1)^2/2α_k+1 δ (k+1) β_k+1; -δ (k+1) -δ^2 (k+1)^2/2ζ_k+1 ]| ⩽ |ℜ| + | (Φ_δ^-1)(z_k) - [ - δ^2/2Q_k δ/2 Q_k + Q_k+1; - δ - δ^2/2Q_k+1 ]Φ_δ^-k(z_0)| + | [ - δ^2/2Q_k δ/2 Q_k + Q_k+1; - δ - δ^2/2Q_k+1 ]Φ_δ^-k(z_0)- [ - (δ k)^2/2α_k δ k β_k; -δ k -(δ k)^2/2ζ_k ]| ⩽ 21/40δ^3 k(k+1) + 1/4δ^3 |Φ_δ^-k(z)| + 1+ 21/20δ C_k where we used (<ref>) to bound |ℜ|, (<ref>) for the second term and the induction hypothesis (<ref>) for the third one (with the same computation as in (<ref>) to get the 1+21/20δ). Finally, using (<ref>), we have finally concluded the proof by induction of (<ref>) with (⋆) ⩽ C_k+1 := 21/40δ^3 k(k+1) + 1/4δ^3 1 + 21/20δ k +C_k + 1+ 21/20δ C_k ⩽13/16δ^3 (k+1)^2 + 1+ 11/10δ C_k . From this, C_K ⩽ 13/16δ^3 ∑_k=0^K-1 (k+1)^2 1+ 11/10δ^K-1-k ⩽ 13/16× 3 (δ K)^3 ^11 δ K/10 ⩽ 1/3 (δ K)^3 . In particular, thanks to the bound on |Ψ| of Lemma <ref> and to Lemma <ref> we can bound the Fisher term appearing in Lemma <ref> to get the following somewhat more explicit estimate: For any ε >0, ℋ (_δ^K)^* h -ℋ (h) ⩽ 4 ε K δℐ h + ∑_j=0^K-1 3δεEr_1^δ,j h + ε∫_0^δEr_1^s,1 (_δ^j)^* h s + 1/2εEr_2 (_δ^j)^* h . For j∈ 0,K-1 and s∈[0,δ], writing Ψ_s = Φ_s^-1∘Φ_s and Ψ_j =Φ_δ^-j∘Φ_δ^j, applying Lemma <ref> (with A=I_2d and ε=1) and using the bound (<ref>) (with s⩽ Kδ⩽ 1/10) on |Ψ_s| and |Ψ_j|, ℐ (_δ^j _s^1)^* h ⩽ 2 ℐ_Ψ_s (_δ^j)^* h +2 Er_1^s,1 (_δ^j )^* h ⩽ 2 |Ψ_s|^2ℐ (_δ^j )^* h +2 Er_1^s,1 (_δ^j)^* h ⩽ 3 ℐ (_δ^j )^* h +2 Er_1^s,1 (_δ^j)^* h ⩽ 6 ℐ_Ψ_j h +6 Er_1^δ,j h +2 Er_1^s,1 (_δ^j)^* h ⩽ 8 ℐ h +6 Er_1^δ,j h +2 Er_1^s,1 (_δ^j)^* h . Plugging this bound in the result of Lemma <ref> concludes. §.§ Numerical error The goal of this section is to bound the numerical error appearing in Lemmas <ref> and <ref>. §.§.§ Error 1 Under <Ref> with L=1 and <Ref>, for all z∈^d, |□ H(z)|^ 2 ⩽2δ^6 K ∑_k=0^K-1[ N_4^6(v_k) + N_4^6 ( U(x_k)) + N_3^4(v_k)+ N_3^4( U(x_k)) + |v_k|^2 + | U(x_k)|^2 ] , where (x_k,v_k) = Φ_δ^- k(z) for k∈ 0,K. Decomposing □ H = H - H∘Φ_δ^-K= ∑_k=1^K H ∘Φ_δ^-k+1 -H ∘Φ_δ^-k , so that |□ H|^2 ⩽ K ∑_k=1^K | H ∘Φ_δ^-k+1-H ∘Φ_δ^-k|^2 , we focus from now on on the case K=1. Recall H(x,v)=U(x)+|v|^2/2 and write (x_1,v_1) = Φ_δ^-1(x,v) = x- δ v - δ^2/2 U(x) , v + δ/2 U(x) + U x- δ v - δ^2/2 U(x) . Hence, starting with the velocity contribution, _v H - H∘Φ_δ^-1(x,v) = v - -δ U(x_1) + I- δ^2/2^2 U(x_1) v_1 = δ/2 U(x_1) - U(x) + δ^2/2^2 U(x_1)v + δ^3/4^2 U(x_1) U(x)+ U(x_1) = δ/2 U(x_1) - U(x) +^2 U(x_1)(x-x_1) + δ^3/4^2 U(x_1) U(x_1) . Notice that (<ref>) implies that | U(x+y) - U(x) - ^2 U(x) y| = |∫_0^1 ^2 U (x+sy) - ^2 U (x) y s | ⩽1/2_3^2(y) . Using this and that ^2 U_∞⩽ L = 1, we have thus obtained that |_v H - H∘Φ_δ^-1(x,v)| ⩽ δ/4_3^2(x-x_1) + δ^3/4| U(x_1) | ⩽ δ^3/2_3^2(v) + δ^5/8_3^2( U(x)) + δ^3/4 | U(x) | + |x_1-x| ⩽ δ^3/2_3^2(v) + δ^5/8_3^2( U(x)) + δ^4/4|v| + δ^3/4 + δ^5/8 | U(x) | . Using that δ⩽ 1/10 and Cauchy-Schwarz inequality, |_v H - H∘Φ_δ^-1(x,v)|^2 ⩽ δ^6 1/2_3^2(v) + 1/800_3^2( U(x)) + 1/40|v| + 201/800 | U(x) | ^2 ⩽ δ^6/3_3^4(v) + _3^4( U(x)) + |v|^2 + | U(x) |^2 . 
Turning to the position contribution, _x H - H∘Φ_δ^-1(x,v) = U(x) - I - δ^2/2^2 U(x) U(x_1) +δ/2^2 U(x) + I - δ^2/2^2 U(x)^2 U(x_1) v_1 = (∗) + (∗∗) + (∗∗∗) with, organizing terms depending on their orders in δ, (∗) = U(x) - U(x_1) - 1/2^2 U(x)+^2 U(x_1) (x-x_1) (∗∗) = δ^2/4^2 U(x)-^2 U(x_1) U(x_1) (∗∗∗) = δ^3/4^2 U(x) ^2 U(x_1) v_1 . Using the bound (<ref>) on the fourth derivative of U and that δ⩽ 1/10, |(∗)| ⩽N_4^3(x-x_1) ⩽δ^3N_4(v) + 10^-3N_4 ( U(x))^3 ⩽δ^3(1+10^-3)^2 N_4^3(v) + 10^-3N_4^3 ( U(x)) . From the bound (<ref>) on the third derivative of U and that ^2 U_∞⩽ L = 1 |(∗∗)| ⩽ δ^2/4|(^2 U(x) - ^2 U(x_1)) U(x) | + δ^2/2 | U(x_1)- U(x)| ⩽ δ^2/4N_3(x-x_1)N_3( U(x)) + δ^2/2|x_1-x| ⩽ δ^3/8N_3^2(v)+ δ^3+δ^4/8N_3^2( U(x)) + δ^3/2|v| + δ^4/4| U(x)| . Finally, | (∗∗∗)| ⩽δ^3/4|v_1| ⩽δ^3/4 |v| + δ^4/4| U(x)| + δ^4/4|x_1-x| ⩽δ^3 + δ^5/4 |v| + δ^4 + δ^6/2/4| U(x)| . As a conclusion, using that δ⩽ 1/10 and Cauchy-Schwarz inequality, |_x H - H∘Φ_δ^-1(x,v)|^2 ⩽ δ^6 [ (1+10^-3)^2 N_4^3(v) + 10^-3N_4^3 ( U(x)) + 1/8N_3^2(v) + 11/80N_3^2( U(x)) + 301/400|v| + 2+5× 10^-3/40| U(x)|]^2 ⩽ 5/3δ^6 [ N_4^6(v) + N_4^6 ( U(x)) + N_3^4(v) + N_3^4( U(x)) + |v|^2 + | U(x)|^2] , which, combined with (<ref>), concludes. Integrating this bound with respect to hμ immediately yields the following (recall the definitions of the Error 1 in Lemma <ref> and of the function 𝐌 in (<ref>)): Under <Ref> with L=1 and <Ref>, for any smooth positive relative density h, Er_1^δ,K(h) ⩽2δ^6 K ∑_k=0^K-1∫_^2d𝐌∘Φ_δ^k h μ . In particular, assuming moreover <Ref>, and writing h_n = (^n)^*h_0 for n∈, Er_1^δ,K(_η h_n) ⩽2δ^6 K^2 C_1 (1-ρ Kδ )^n ν_0() + C_2, Er_1^δ,1 ((_δ^j)^*_η h_n) ⩽2δ^6 C_1 (1-ρ Kδ )^n ν_0() + C_2, for all j∈ 0,K. If in <Ref> we only assume (<ref>) but not (<ref>), from (<ref>), using that U(x+y) - U(x) = ∫_0^1 ^2 U(x+sy) y s, we deduce that for all x,y∈^d, U(x+y) - U(x) - 1/2^2 U(x)+^2 U(x+y) y ⩽_3^2(y) . We can thus use this inequality instead of (<ref>) in Lemma <ref> (which is the only place where (<ref>) intervenes) and replace _4^6 by _3^4 in the definition (<ref>) of 𝐌. Following then the rest of the proof, we see that Theorem <ref> still holds in this case, but with δ^4 replaced by δ^2. §.§.§ Error 2 Here, we only have to consider the case K=1. Under <Ref> with L=1 and <Ref>, for all z∈^d and s∈[0,δ], recalling F_H(x,v)=[ v; -∇ U(x) ], |∂_s Φ_s(z) - F_H Φ_s(z) |^2 ⩽ s^4 1/3|v|^2 + 1/7| U(x)|^2 + 1/7N_3^4(v) + 1/140N_3^4( U(x)) . Decomposing z_s= Φ_s(x,v) as z_s = [ x_s; v_s ] = [ x+sv - s^2/2 U(x); v - s/2 U(x) + U x+sv - s^2/2 U(x) ] , we compute (∂_s Φ_s)(z) - F_H(z_s) = [ v - s U(x); -1/2 U(x) + U(x_s) -s/2^2 U(x_s) x_s' ] - [ v_s; - U(x_s) ] = 1/2[ s U(x_s)- U(x); U(x_s)- U(x) - s ^2 U(x_s) v - s U(x) ] . As a consequence, using (<ref>) and that ^2U_∞⩽ 1, |(∂_s Φ_s)(z) - F_H(z_s)|^2 ⩽ s^2/4|x_s-x|^2 + 1/2 | U(x_s)- U(x) - ^2 U(x_s) (x_s-x)|^2 + s^2/2|x_s-x - s(v-s U(x))|^2 ⩽ s^2/4|x_s-x|^2 + 1/8N_3^4(x_s-x) + s^6/8| U(x)|^2 . Using that s⩽δ⩽ 1/10, we bound N_3^4(x_s-x) ⩽ s N_3(v) + s/20N_3( U(x))^4 ⩽21^3/20^3 s^4 N_3^4(v) + 1/20N_3^4( U(x)) and similarly |x_s-x|^2 ⩽ 21/20 s^2 |v|^2 + | U(x)|^2/20, which concludes. Again, integrating the previous bound immediately yields (recall the definitions of the Error 2 in Lemma <ref> and of the function 𝐌 in (<ref>)): Under <Ref> with L=1 and <Ref>, for any smooth positive relative density h, Er_2(h) ⩽δ^5/15∫_^2d𝐌 h dμ . 
In particular, assuming moreover <Ref>, and writing h_n = (^n)^* for n∈ then, for any j∈ 0,K-1, Er_2((_δ^j)^* _η h_n) ⩽δ^5/15 C_1 (1-ρ Kδ )^n ν_0() + C_2 . Indeed: Er_2(δ,h) = ∫_0^δ∫_^2d|∂_s Φ_s - F_H ∘Φ_s |^2 h dμ s ⩽ 1/3∫_0^δ∫_^2ds^4 𝐌 h dμ s . §.§ Acknowledgments The works of P.M. and G.S. are supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (project EMC2, grant agreement No 810367); and by the Agence Nationale de la Recherche, under grant SWIDIMS (ANR-20-CE40-0022) for P.M., and ANR-19-CE40-0010-01 (QuAMProcs) and ANR-21-CE40-0006 (SINEQ) for G.S. A.O.D. would like to thank the Isaac Newton Institute for Mathematical Sci- ences for support and hospitality during the programme The mathematical and statistical foundation of future data- driven engineering when work on this paper was undertaken. This work was supported by: EPSRC grant number EP/R014604/1. plain § REGULARITY OF U First, | ^2 U(x+y) - ^2 U(x) z |^2 = ∑_i=1^d_0| ∑_j=1^d_0^2 _i,j U(x+y) - ^2_i,j U(x) z_j|^2 ⩽ (1+ϵ) ∑_i=1^^2 U^(q)(x_i+y_i) - ^2 U^(q)(x_i) z_i^2 + ϵ +ϵ^2/^2 ∑_i=1^d_0| ∑_j=1^d_0^2 W^(q)(x_i+y_i-x_j-y_j) - ^2 W^(q)(x_i-x_j) (z_i-z_j)|^2 ⩽ (1+ϵ) ^3 U^(q)_∞^2 ∑_i=1^ |y_i|^2|z_i|^2 + ϵ + ϵ^2/^3 W^(q)_∞^2 ∑_i=1^d_0∑_j=1^d_0 |y_i-y_j|^2 |z_i-z_j|^2 ⩽ (1+ϵ) (^3 U^(q)_∞ ^2 + 16 (ϵ+ϵ^2) ^3 W^(q)_∞^2) √(∑_i=1^ |y_i|^4 ∑_j=1^|z_j|^4) , where we used the Cauchy-Schwarz inequality to conclude the proof of (<ref>). Following similar computations, U(x+y) - U(x) - 1/2^2 U(x)+^2 U(x+y) y^2 = | ∫_0^1 ^2 U(x+sy) s - 1/2^2 U(x)+^2 U(x+y) y|^2 = ∑_i=1^| ∑_j=1^∫_0^1 _i,j^2 U(x+sy) s - 1/2_i,j^2 U(x)+^2 _i,j U(x+y) y_j|^2 ⩽ (1+ϵ) ∑_i=1^| ∫_0^1 ^2 U^(q)(x_i+sy_i) s - 1/2^2 U^(q)(x_i)+^2 U^(q)(x_i+y_i) y_i|^2 + ϵ +ϵ^2/∑_i,j=1^d_0| ( ∫_0^1 ^2 W^(q)(x_i-x_j+s(y_i-y_j)) s - 1/2^2 W^(q)(x_i-x_j)+^2 W^(q)(x_i+y_i-x_j-y_j)) (y_i-y_j)|^2 . Using that, for all f∈𝒞^2([0,1],), integrating twice by parts, ∫_0^1 f(s) s - f(0)+f(1)/2 = -1/2∫_0^1 s(1-s) f”(s) s , fixing i ∈ 1,, x_i,y_i∈^q and writing _ℓ the ℓ-th canonical vector of ^q, | _ℓ·∫_0^1 ^2 U^(q)(x_i+sy_i) s - 1/2^2 U^(q)(x_i)+^2 U^(q)(x_i+y_i) y_i| = 1/2| ∫_0^1 s(1-s) ^4 U^(q)(x_i+s y_i) {_ℓ,y_i,y_i,y_i} s| ⩽ 1/12^4 U^(q)_∞ |y_i|^3 . The terms involving W^(q) is treated similarly and we get U(x+y) - U(x) - 1/2^2 U(x)+^2 U(x+y) y^2 ⩽ (1+ϵ) ∑_i=1^q/144^4 U^(q)_∞^2 |y_i|^6 + ϵ +ϵ^2/∑_i,j=1^d_0q/144^4 W^(q)_∞^2 |y_i-y_j|^6 ⩽ 1+ϵ/144^4 U^(q)_∞^2 + 4(ϵ +ϵ^2) /9 ^4 W^(q)_∞^2 q ∑_i=1^ |y_i|^6 , which concludes the proof of (<ref>). Finally, to bound 𝐌, for ℓ∈{2,4,6}, using that U^(q)(0)=0= W^(q)(0), we bound |_x_i U(x) |^ℓ = | U^(q)(x_i) + 2ϵ/∑_j=1^ W^(q)(x_i-x_j) |^ℓ ⩽ 2^ℓ-1 | U^(q)(x_i)|^ℓ + 2^2ℓ-1ϵ^ℓ/∑_j=1^| W^(q)(x_i-x_j) |^ℓ ⩽ 2^ℓ-1^2 U^(q)_∞^ℓ |x_i|^ℓ + 2^2ℓ-1ϵ^ℓ/^2 W^(q)_∞^ℓ∑_j=1^| x_i-x_j |^ℓ . Bounding |x_i-x_j|^ℓ⩽ 2^ℓ-1(|x_i|^ℓ + |x_j|^ℓ) in the last term, we end up with ∑_i=1^ |_x_i U(x) |^ℓ⩽ 2^ℓ-1^2 U^(q)_∞^ℓ + 2^3ℓϵ^ℓ^2 W^(q)_∞^ℓ∑_i=1^ |x_i|^ℓ . § DRIFT In all this section, which is devoted to the proof of Theorem <ref>, <Ref> is enforced, we write L_1 = max(^2 U^(q)_∞,^2 W^(q)_∞) and, in fact, using a rescaling as in Section <ref>, we assume without loss of generality that L_1 ⩽ 1. Let us state a few consequence of <Ref> which will be useful in the rest of this section. First, the drift condition (<ref>) implies that for any x_1 ∈^q, ∇(x_1)^2 ≥ (/2) (x_1^2/2 - ). In addition, using that U^(q) is Lipschitz and the Cauchy-Schwarz inequality easily implies that ≤ L_1 ⩽ 1. 
Finally, since U^(q)(0)= W^(q)(0)=0, we get that for any x_1∈^q ∇(x_1)≤x_1, ∇(x_1)≤x_1, U^(q)(x_1) ⩽ |x_1|^2/2 . Our analysis is based on the following Lyapunov functions. Under <Ref>, for = (K,δ,η) ∈×_+^* ×0,1 and ℓ∈, define : ^2d→ for any x =(x_1,…,x_),v =(v_1,…,v_) ∈^d, (x,v) = ∑_i=1^(x,v)^ℓ, (x,v) = (γ_0^2/2)x_i^2 +v_i^2 + ηγ_0x_iv_i + (x_i) , where γ_0 = (1-η)/ (δ K) . Note that since η∈0,1, using Young's inequality 𝐚𝐛≤ (ε/2)𝐚^2 + (1/(2ε)) 𝐛^2 with ε = 2/3, we get (1/6)[x^2γ_0^2 + v^2] + ∑_i=1^(x_i) ≤(x,v) ≤ (3/2)[x^2γ_0^2 + v^2] + ∑_i=1^(x_i) . The main result in this section is the following. Assume <Ref> holds with L_1=1 and let ,, >0. Then, for any ℓ∈, ϵ >0, = (K,δ,η), δ >0, K∈, η∈0, satisfying ϵ≤_1 ∧ϵ_2 , δ≤δ_1 ∧δ_2 ∧ 1/[7× 8] ∧, K δ≤T_1 ∧ T_2 ∧ T_3 ∧, for any x,v∈^d and k ∈{0,…,K}, _^n _η_δ^k (x,v) ≤ C_1,ℓ^ℓ (1-ρ_ Kδ)^ℓ n(x,v) + (C_2,ℓ q^ℓ + C_3,ℓ) sup_s ∈ [0,]s/1-(1-ρ_s/8)^ℓ. where ϵ_1,ϵ_2,T_1,T_2,T_3,δ_1,δ_2 are defined in (<ref>)-(<ref>)-(<ref>)-(<ref>)-(<ref>), ρ_ = 2^5[γ_0∧{ηγ_0 +γ_0^2K δ/2}(2^6 3)^-1] , and C_1,ℓ,C_2,ℓ,C_3,ℓ are given in (<ref>) and do not depend on ,q and only depend on δ,K,η through γ_0 and ρ_. The rest of Section <ref> is devoted to the proof of this result. Before proceeding with this proof, let us notice that Theorem <ref> implies Theorem <ref>. Indeed, the rescaling needed to get that L_1=1 does not affect Theorem <ref> and, as can be checked on their expressions, the constants ϵ_1,ϵ_2,T_1,T_2,T_3,δ_1,δ_2,C_1,ℓ,C_2,ℓ,C_3,ℓ appearing in Theorem <ref> depends only on ,,T,δ,γ_0 and not on q,,ϵ (notice here that we can simply choose η=1). Moreover, thanks to (<ref>) and the fact 0 ⩽ U^(q)(x_1) ⩽ |x_1|^2/2, the function defined in (<ref>) is equivalent to _^(3), more precisely there exists A>0 which depends only on γ_0 such that, for all (x,v)∈^2d, A^-1_^(3)(x,v) ⩽(x,v) ⩽ A _^(3)(x,v) . As a consequence, the drift inequality (<ref>) immediately implies a similar inequality but with in place of _^(3), which concludes the proof of Theorem <ref>. We start the proof of <Ref> with the following technical lemma. Consider for any = (,,̱,̧_̱0,,) ∈_+^5, i ∈{1,…,}, x = (x_1,…,x_q), v = (v_1,…,v_q) ∈ (x,v) = [2]x_i + [2]v_i + x_iv_i + 2_̱0 (x_i) + ϵ/∑_j=1^x_i-x_j^2+v_i-v_j^2 +. Assume that <Ref> holds. Suppose that ϵ≤/4. Then, for any x,v ∈, δ∈0,/40, i∈∬1 and = (,,̱,̧b_0,,)∈_+^5, b_0 ≤ b, _δ(x,v) ≤[][i](x,v) , where = (,,,_̱0,,) = a(1-δ^2/2) + 10 b δ^2 - δ c/8 + 48 ϵ e δ + δb-b_0 = 2 δ^2 + (1+ϵδ + 17 δ^2)+̱δ(1+7δ) +̧ϵδ + δb-b_0 = 2 δ + , = + 17 δ(δ ++̱+̧) = +δ^2 + _̱0 δ^2 /4 + δ c . Note first that <Ref> implies that ϵ≤/4 ≤ 1/4. Let x,v ∈, i ∈∬1, δ≤/40 and = (a,b,c,b_0,e,f). Define for j ∈∬1, ψ_j: ^d →^d for any x̃ = x̃∈ by ψ_j(x̃) = ∇(x̃_j) + ϵ/∑_j'=1^∇(x̃_j-x̃_j') . With this notation, by definition of _δ, we have _δ(x,v) = (y,p), where y = y, p=p and y_j = x_j-δ^2/2ψ_j(x) + δ v_j , p_j = v_j -δ/2ψ_j(x) + ψ_j(y), j ∈∬1. First note that by the Cauchy-Schwarz inequality and (<ref>), we have for any j ∈∬1, y_j-x_j^2 ≤ 2 δ^4 ∇(x_j)^2 + δ^4 ϵ^2/∑_j'=1^∇(x_j-x_j')^2 + δ^2 v^2 ≤ 2 δ^4 x_j^2 + δ^4 ϵ^2/∑_j'=1^x_j-x_j'^2+ δ^2 v_j^2 , and then ^-1∑_j=1^y_j-x_j^2 ≤ 4 δ^2 ( δ^2 x_i^2 + v_i^2) + 4 δ^2 ∑_j=1^v_i-v_j^2 + 8 δ^4 (1+ϵ^2 )^-1∑_j=1^x_i-x_j^2 . By <Ref> and Jensen inequality, for any x̃ = x̃, ỹ = ỹ∈ and j ∈∬1, ψ_j(x̃) - ψ_j(ỹ)^2 ⩽ 2 x̃_j-ỹ_j^2 + 2 ϵ^2/∑_j'=1^x̃_j-x̃_j'-(ỹ_j-ỹ_j')^2 ⩽(2 +4 ϵ^2)x̃_j-ỹ_j^2 +4 ϵ^2 1/∑_j'=1^x̃_j'-ỹ_j'^2 , ψ_j(x̃)^2 ⩽ 2 x̃_j^2 +2 ϵ^2/∑_j'=1^x̃_j-x̃_j'^2 . 
As a result, using (<ref>) and ϵ≤ 1, we obtain ψ_i(x) - ψ_i(y)^2 ≤ 4 (1 +2 ϵ^2) δ^4 x_i ^2 + δ^4 ϵ^2/∑_j=1^x_i-x_j^2+ δ^2 v_i^2 +8 ϵ^2 ^-1∑_j=1^δ^4 x_j ^2 + δ^4 ϵ^2/∑_j'=1^x_j-x_j'^2+ δ^2 v_j^2 ≤ 4 (1 +26 ϵ^2) δ^4 x_i ^2 + δ^4 ϵ^2/∑_j=1^x_i-x_j^2+ δ^2 v_i^2 +16 ϵ^2 δ^2 ^-1∑_j=1^v_i-v_j^2 . In addition, using the Cauchy-Schwarz inequality, 2δ^2 ϵ≤ 1 and δ∨ϵ≤/4 ≤ 1/4, we get y_i^2 = x_i^2 + 2 x_i-δ^2/2ψ_i(x) + δ v_i + y_i-x_i^2 ≤x_i^2- δ^2x_i∇(x_i) + δ^2ϵx_i^2 + δ^2 ϵ/∑_j=1^x_i-x_j^2 +2 δx_iv_i + 2 δ^4 x_i ^2 + δ^4 ϵ^2/∑_j=1^x_i-x_j^2 + δ^2 v^2 ≤ (1+δ^2 ϵ + 2 δ^4 )x_i^2+2 δx_iv_i +2δ^2 v_i^2 - δ^2x_i∇(x_i) + δ^2 ϵ^-1(1 + 2 δ^2 ϵ)∑_j=1^x_i-x_j^2 ≤ (1+δ^2 ϵ + 2 δ^4 )x_i^2+2 δx_iv_i +2δ^2 v_i^2 - δ^2x_i∇(x_i) + 2 δ^2 ϵ^-1∑_j=1^x_i-x_j^2 ≤ (1-δ^2 +δ^2 ϵ + 2 δ^4 )x_i^2+2 δx_iv_i +2δ^2 v_i^2 + δ^2 + 2 δ^2 ϵ^-1∑_j=1^x_i-x_j^2 ≤ (1-δ^2 /2 )x_i^2+2 δx_iv_i +2δ^2 v_i^2 + δ^2 + 2 δ^2 ϵ^-1∑_j=1^x_i-x_j^2 . Similarly, by (<ref>)-(<ref>) and δ≤ 1, we have p_i^2 = v_i^2 - δv_iψ_i(x) + ψ_i(y) + δ^2/4ψ_i(x) + ψ_i(y) ^2 ⩽v_i^2-2 δ⟨ v_i, ψ_i(x)⟩ +δ^2 /2v_i^2 +1/2 ψ_i(x)-ψ_i(y)^2 +δ^2/2{4ψ_i(x)^2+ψ_i(x)-ψ_i(y)^2} ⩽v_i^2-2 δ⟨ v_i, ∇(x_i)⟩ +(δϵ+ δ^2 /2)v_i^2 +δ^2/2 + 1/2 ψ_i(x)-ψ_i(y)^2 + 4δ^2 x_i^2 +ϵ^2/∑_j=1^x_i-x_j^2 + δϵ^-1∑_j=1^x_i-x_j^2 ≤ (1+δ(ϵ+5(1+26ϵ^2)δ))v_i^2 -2δv_i∇(x_i) +4δ^2(1+δ^2(1+26ϵ^2))x_i^2 + δϵ^-1 (1+4δϵ +4(1+26ϵ^2)ϵδ^3) ∑_j=1^x_i-x_j^2+ 8 (1+δ^2) ϵ^2 δ^2 ^-1∑_j=1^v_i-v_j^2 ≤ (1+δ(ϵ+5(1+26ϵ^2)δ))v_i^2 -2δv_i∇(x_i) +8δ^2(1+13ϵ^2δ^2 )x_i^2 + δϵ^-1 (1+8δϵ + 4 · 26 δ^3 ϵ^3) ∑_j=1^x_i-x_j^2+ 16 ϵ^2 δ^2 ^-1∑_j=1^v_i-v_j^2 , and ⟨ y_i, p_i⟩ =⟨ x_i-δ^2/2ψ_i(x)+δ v_i, v_i-δ/2{ψ_i(x)+ψ_i(y)}⟩ =⟨ x_i, v_i⟩-δ/2⟨ x_i, ψ_i(x)+ψ_i(y)⟩ -δ^2/2⟨ v_i, ψ_i(x)⟩+δv_i^2 +δ^3/4⟨ψ_i(x), ψ_i(x)+ψ_i(y)⟩ -δ^2/2⟨ v_i, ψ_i(x)+ψ_i(y)⟩ ⩽⟨ x_i, v_i⟩-δ⟨ x_i, ψ_i(x)⟩ +δ/2x_iψ_i(x)-ψ_i(y) -δ^2/2⟨ v_i, ψ_i(x)⟩+δv_i^2 +δ^3/2ψ_i(x)^2+δ^3/4ψ_i(x)ψ_i(x)-ψ_i(y) -δ^2 ⟨ v_i, ψ_i(x)⟩+δ^2/2v_iψ_i(x)-ψ_i(y) ≤⟨ x_i, v_i⟩-δ⟨ x_i, ψ_i(x)⟩ +1/2(δ^2 /2x_i^2+1/2 ψ_i(x)-ψ_i(y)^2) -3/2δ^2⟨ v_i, ψ_i(x)⟩+(δ+δ^2/4) v_i^2 +4^-1δ^3ψ_i(x)^2 +4^-1δ^2(1+δ)ψ_i(x)-ψ_i(y)^2 ≤⟨ x_i, v_i⟩-δ⟨ x_i, ∇(x_i)⟩ - (3δ^2/2) v_i∇(x_i) + (3ϵδ^2/4 + δ(1+δ/4) + (1+26ϵ^2) δ^2(1+δ^2(1+δ))) v_i^2 + (δϵ/2 +δ^2/4+ δ^3/2+ (1+26ϵ^2) δ^4(1+δ^2(1+δ))) x_i^2 + ( δ(1+3 δ+δ^2ϵ)/2 + (1+26ϵ^2)δ^4(1+δ^2(1+δ))) ϵ^-1∑_j=1^x_i-x_j^2 + 8 (1+δ^2(1+δ)) δ^2ϵ^2 ^-1∑_j=1^v_i-v_j^2 ≤⟨ x_i, v_i⟩-δx_i^2 + δ + δ(1+(3+53ϵ^2)δ) v_i^2 + δ(ϵ/2+ 2δ(2+26ϵ^2 δ^2)) x_i^2 + 2 (1+δ^3(1+26ϵ^2)) δϵ^-1∑_j=1^x_i-x_j^2 + 16 δ^2ϵ^2 ^-1∑_j=1^v_i-v_j^2 ≤⟨ x_i, v_i⟩-δx_i^2/2 + δ + δ(1+7δ) v_i^2 + 4 δϵ^-1∑_j=1^x_i-x_j^2 + 16 δ^2ϵ^2 ^-1∑_j=1^v_i-v_j^2 , where we used in the last inequality that ϵ≤/4 and δ≤ / 40 ≤ 1/40. By <cit.>, (<ref>) and (<ref>), it follows, (y_i) ≤(x_i) + ∇(x_i)-δ^2/2ψ_i(x) + δ v_i + 2^-1y_i-x_i^2 ≤(x_i) -δ^2/2∇(x_i)^2 + δ∇(x_i)v_i + δ^2ϵ/22 x_i^2 + 2 ^-1∑_j=1^x_i-x_j^2 + y_i-x_i^2 ≤(x_i) -δ^2/2∇(x_i)^2 + δ∇(x_i)v_i +δ^2 (ϵ + 2δ^2 ) x_i^2 + 2δ^2 v_i^2 + ϵδ^2 ^-1 (1+2δ^2 ϵ) ∑_j=1^x_i-x_j^2 ≤(x_i) - ^2 δ^2 x_i^2/8 + δ^2 /4 + δ∇(x_i)v_i +δ^2 (ϵ + 2δ^2 ) x_i^2 + 2δ^2 v_i^2 + ϵδ^2 ^-1 (1+2δ^2 ϵ) ∑_j=1^x_i-x_j^2 . Using 𝐚 + 𝐛^2 ≤ (1+ε) 𝐚^2 + (1+ε^-1)𝐛^2 for ε=δ^-1, δ≤/40≤ 1/40 and ϵ≤/4≤ 1/4, and (<ref>), we obtain y_i-y_j^2 = x_i-x_j-δ^2/2{ψ_i(x)-ψ_j(x)}+δ{v_i-v_j}^2 =(1+δ)x_j-x_j^2 +δ^-1(1+δ) δ^2v_i-v_j-δ/2(ψ_i(x)-ψ_j(x))^2 ≤ (1+δ)x_i-x_j^2+2 δ(1+δ) v_i-v_j^2 +δ^3/2(1+δ)[2 x_i-x_j^2 +2 ϵ x_i-x_j^2] ≤ (1+δ(1+δ^2(1+δ)(1+ϵ)))x_i-x_j^2 + δ(1+δ) v_i-v_j ^2 ≤ (1+2δ)x_i-x_j^2 + 2 δv_i-v_j^2 . 
and (<ref>), p_i-p_j ^2 ⩽v_i-v_j^2(1+δ) +δ^-1(1+δ) δ/2(ψ_i(x)+ψ_j(x)+ψ_i(y)+ψ_j(y))^2 ⩽v_i-v_j^2(1+δ) +δ/4(1+δ){2 2 (ψ_i(x)+ψ_i(y))^2+2ψ_j(x)+ψ_j(y)-ψ_i (x)-ψ_i(y)^2} ≤v_i-v_j^2(1+δ) + 4^-1δ(1+δ)[4^3[2x_i^2 + 2 ϵ^2 ^-1∑_j=1^x_i-x_j^2 ] ] + 4^-1δ(1+δ)[4^3(1+26ϵ^2){δ^4x_i^2 + δ^4 ϵ^2 ^-1∑_j'=1^x_i-x_j'^2 + δ^2 v_i^2 }] + 4^-1δ(1+δ)[4× 2 (1+ϵ){x_i-x_j^2 + y_i-y_j^2 }] +2^6 ϵ^2 δ^3(1+δ) ^-1∑_j'=1^v_i-v_j'^2 ≤v_i-v_j^2(1+δ) + 2^5δ(1+δ)[x_i^2 + ϵ^2/∑_j=1^x_i-x_j^2 ] +2^6 ϵ^2 δ^3(1+δ)/∑_j'=1^v_i-v_j'^2 + 4^2δ(1+δ)(1+26ϵ^2){δ^4x_i^2 + δ^4 ϵ^2 ^-1∑_j'=1^x_i-x_j'^2 + δ^2 v_i^2 } + 2δ(1+δ) (1+ϵ){x_i-x_j^2(1+2δ) + 2δv_i-v_j^2 } ≤v_i-v_j^2(1+δ + 4δ^2(1+δ)(1+ϵ)) +2^6 ϵ^2 δ^3(1+δ)/∑_j'=1^v_i-v_j'^2 + x_i^2 δ(1+δ)2^4[2+(1+26ϵ^2)δ^4] + v_i^2 16 (1+26ϵ^2)δ^3(1+δ) + x_i-x_j^22(1+ϵ)δ(1+δ)(1+2δ) + 4^2 δ(1+δ) ϵ^2 ^-1[2+(1+26ϵ^2)δ^4]∑_j'=1^x_i-x_j'^2 ≤v_i-v_j^2(1+δ + 4δ^2(1+δ)(1+ϵ))+ 48 δx_i^2 + δv_i^2 +2^6 ϵ^2 δ^3(1+δ)/∑_j'=1^v_i-v_j'^2 + 3(1+ϵ) δx_i-x_j^2 + 48 δϵ^2 ^-1∑_j'=1^x_i-x_j'^2 . Combining this last inequality with (<ref>)-(<ref>)-(<ref>)-(<ref>)-(<ref>) in (<ref>), and using that v_i∇(x_i)≤v_i^2/2 + x_i/2, b_0 ≤ b, ϵ≤ 1/4 and δ≤ 1/40 complete the proof. Consider the four real sequences (a_k)_k∈,(b_k)_k∈,(c_k)_k∈,(e_k)_k∈ such as a_0,b_0,c_0,e_0 >0 and define by recursion a_k+1 = (1-δ^2 C_a,1) a_k - δ c_k C_a,2 + δ^2 b_k C_a,3+ δ e_k C_a,4 + C_a,5δ(b_k-b_0), b_k+1 = (1+C_b,1δ + C_b,2δ^2) b_k + δ^2 C_b,3 a_k + δ c_k C_b,4 + δ e_k C_b,5 + δ(b_k-b_0) , c_k+1 = c_k + C_c,1δ a_k , e_k+1 = e_k + C_e,1δ (a_kδ+b_k+c_k+e_k) , for some non-negative real constants {C_□, : □∈{a,b,c,e} ,∈{1,2,3,4,5}} and δ >0. Then, for any δ >0 and k ∈ satisfying δ≤δ̃_1 ∧, kδ≤T̃_1 ∧T̃_2∧T̃, for T̃, >0, with δ̃_1 = [ C_b,2]^-1∧ [a_0 C_a,2 C_c,1/{8(_e,1 C_a,4 + _b,3 ( C_a,3 + C_a,5 ))}] T̃_1 = 4^-1{C̃_b,1^-1∧C̃_b,2^-1/2∧ C_e,1^-1∧ 1}, T̃_2 = 4^-1{[C_a,2 /(4C_b,4 C_a,5)] ∧ [a_0 C_a,2 C_c,1(2 C_a,5_b,2 b_0)^-1]∧ [a_0(C_a,2c_0)^-1]∧ [C_c,1C_a,2]^-1/2}, and 2 e_0 C_a,4≤ c_0 C_a,2/4 , C_a,5(C_b,1 b_0+ C_b,5 e_0) +_e,1 C_a,4≤ C_a,2 C_c,1 a_0 / 8 , it holds a_0/2 ≤ a_k ≤ a_0 + C̃_a,3kδ^2 , b_0 ≤ b_k ≤ 2 b_0 + C̃_b,3 k δ^2 , e_k ≤ 2 e_0 + C̃_e,1 k δ + C̃_e,2 k δ^2 , c_0 + kδ a_0 C_c,1/2 ≤ c_k ≤ c_0 + kδ a_0 C_c,1 + (δ k)^2 δ C_c,1C̃_a,3 b_k ≤ b_0(1+kδC̃_b,1 + (kδ)^2 C̃_b,2) + δ^2 k C̃_b,3 e_k ≤ e_0(1+kδC_e,1) + C̃_e,1 k δ + C̃_e,2 k δ^2 a_k≤ a_0 (1-kδC̃_a,1 -(kδ)^2 C̃_a,2) + δ^2 k C̃_a,3 a_k≥ a_0 (1-kδ C_a,2c_0/a_0 - C_a,2 C_c,1 (kδ)^2 ) c_k ≥ c_0 + C_c,1 kδ a_0 -(kδ)^2 C_c,1 C_a,2c_0 -(kδ)^3 C_c,1^2 C_a,2a_0 , where C̃_a,1 = c_0 C_a,2/(2a_0) , C̃_a,2 = a_0 [ C_a,1∧ C_a,2 C_c,1 / 16] , C̃_a,3 = 2 b_0 C_a,3 + a_0 (_a,1 + _a,2) , C̃_b,1 = C_b,1 + c_0 C_b,4/b_0 + e_0 C_b,5/b_0 , _b,2 = 4[ C_b,1_b,1 + _b,1 + C_b,4(C_c,1 a_0 + _e,1 C_b,5))/b_0] , C̃_b,3 = b_0 C_b,2 + a_0 C_b,3 + (C_b,3_a,3 + b_0 C_b,2_b,1 + C_b,5_e,2 + _a,3 C_c,1 C_b,4) _e,1 =2 C_e,1 (2b_0 + c_0 + (C_e,1 e_0 + C_c,1 a_0 +δ_b,3)) _e,2 = 2 C_e,1 (a_0 + δ_a,3 + ^2 C_c,1_a,3)). The proof is by induction. Equation (<ref>) is trivially true for k=0. Suppose that it is true for k ∈ and suppose δ,k+1 satisfy (<ref>). Then by definition, the induction hypothesis implies that c_0 + (k+1)δ a_0 C_c,1/2 ≤ c_k+1≤ c_0 + (k+1)δ a_0 C_c,1 + (k+1)δ^2 C_c,1C̃_a,3. 
Similarly, it follows that b_k+1≤ [(1+kδC̃_b,1 + (kδ)^2 C̃_b,2)b_0+C̃_b,3δ^2 k](1+C_b,1δ + C_b,2δ^2) + δ^2 C_b,3(a_0 + _a,3δ^2 k) + δ C_b,4(c_0+δ k C_c,1 a_0 + (δ k )^2 δ C_c,1_a,3) +δ C_b,5(2e_0 + δ k _e,1 + δ^2 k _e,2) ≤ b_0 + δ b_0(k C̃_b,1 + C_b,1 + c_0 C_b,4/b_0+ 2 C_b,5e_0/b_0) + δ^2 k b_0 _b,2 + k δ_b,2C_b,1 + C_b,1_b,1 + _b,1 + C_b,4C_c,1a_0 + _e,1C_b,5/b_0+ k δ (δ_b,2 C_b,2 + _b,2) + δ^2 (k _b,3 + b_0 C_b,2 + a_0 C_b,3 + ^2 _a,3 C_c,1 C_b,4+ C_b,3_a,3 + b_0 C_b,2_b,1 + C_b,5_e,2) ≤ b_0(1+(k+1)δC̃_b,1+k(k+2)δ^2 C̃_b,2) + _b,3 kδ^2 , where we have used that kδ≤ 4^-1 [C_b,1^-1∧ 1], δ≤ C_b,2^-1 and C̃_b,1≥ C_b,1 by definition. In addition, b_0 ≤ b_k+1≤ 2 b_0 + δ^2 k _b,3 since δ,k+1 satisfy (<ref>). Similarly, using that kδ≤ (4 C_e,1)^-1, we have e_k+1≤ [(1+kδ C_e,1)e_0 + δ k _e,1 + δ^2 k _e,2](1+C_e,1δ) + C_e,1δ^2(a_0 + _a,3 k δ^2) + C_e,1δ(2b_0 + δ^2 k _b,3) + C_e,1δ(c_0 + k δ C_c,1a_0 + (kδ)^2 δ C_c,1_a,3) ≤ (1+(k+1)δ C_e,1) e_0 + δ(k _e,1 + C_e,1(2b_0 + c_0 + C_e,1 e_0 + _e,1 + C_c,1 a_0 + δ_b,3)) + δ^2 (k _e,2 + C_e,1(k δ_e,2 + a_0 + k δ^2 _a,3 + (kδ)^2 C_c,1_a,3)) , which concludes the proof of the upper bound for e_k+1. Using the induction hypothesis, in particular the lower bounds on c_k and the definition of _b,1, we get a_k+1 ≤ a_0- δ(a_0k C̃_a,1 + C_a,2 c_0 - 2 C_a,4 e_0 ) - δ^2 a_0 C_a,1 δ^2 (k _a,3 +2b_0 C_a,3 + a_0 (_a,1 + _a,2)) - δ^2 k [k C̃_a,2 a_0 + a_0C_c,1C_a,2/2 -C_a,5_b,1 b_0 - _e,1 C_a,4 -δ k C_a,5_b,2 b_0 - δ^2 C_a,3_b,3 -δ C_a,4_e,1 -δ C_a,5_b,3] ≤ a_0- δ(a_0k C̃_a,1 + C_a,2 c_0 - 2 C_a,4 e_0 - c_0 C_b,4 C_a,5δ k ) - δ^2 a_0 C_a,1 δ^2 (k _a,3 +2b_0 C_a,3 + a_0 (_a,1 + _a,2)) - δ^2 k [k C̃_a,2 a_0 + a_0C_c,1C_a,2/2 -C_a,5 (C_b,1 b_0+ C_b,5 e_0) - _e,1 C_a,4 -δ k C_a,5_b,2 b_0 - δ^2 C_a,3_b,3 -δ C_a,4_e,1 -δ C_a,5_b,3] . Using the condition (<ref>), δ, k+1 satisfy (<ref>), we get that a_k+1≤ a_0 - a_0 (k+1)δC̃_a,1 - a_0 (k+2)k δ^2 C̃_a,2 - a_0 C_a,1δ^2 + δ^2 k C̃_a,3≤ a_0 - a_0 (k+1)δC̃_a,1 - a_0 (k+1)^2δ^2 C̃_a,2 + δ^2 C̃_a,3δ^2 (k+1). The lower bound for a_k+1 proceeds similarly using the upper bound on c_k given by the induction hypothesis. Finally, the conditions (k+1)δ≤ 4^-1{[a_0(C_a,2c_0)^-1]∧ [C_c,1C_a,2]^-1/2} and (<ref>) imply that a_0/2 ≤ a_k+1≤ a_0 + δ^2 k _a,3. The last inequality for c_k+1 in (<ref>) uses the induction hypothesis a_k≥ a_0 (1-kδ C_a,2c_0/a_0 - C_a,2 C_c,1 (kδ)^2 ) and c_k+1=c_k + C_c,1δ a_k. Assume that <Ref> holds. Let = (_0,_̱0,_̧0,2b_0,_0,_0)∈_+^5, , >0 and denote C̅_a,1 = / (3× 2^5) , C̅_a,2 = / (3 × 2^6) , _a,3 = 20 b_0 + a_0(_a,1 + _a,2) , C̅_b,1 = ϵ + [c_0(1+7) + e_0]/b_0 , _b,2,0 = 4[(1+ϵ) _b,1 +(1+7)(2a_0 + _e,1ϵ)/b_0 ] , _b,3,0 = 17b_0 +2a_0 +(2_a,3 + 17 b_0 _b,1 + ϵ_e,2 + 20(1+7)) , _b,2 = _b,2,0 + c_0 /8 + a_0 /4 , _b,3 = _b,3,0 + 20 b_0 + a_0(_a,1 + _a,2) , _e,1 = 37[2b_0 + c_0 + (17 c_0 + 2 a_0 + _b,3)] , _e,2 = 37(a_0+_a,3(_a,3 + 2)) , C̅_f,1 = a_0 + _a,3 + b_0 /2 + (c_0 + T̅ a_0) . Let ϵ≤/4 and suppose that 2 × 48 e_0 ≤ c_0 /2^5 , ϵ[b_0 +e_0 + 48 _e,1] ≤ a_0 /2^5 . Then, for any δ>0, η∈0,1, and K ∈ satisfying δ≤_1 ∧ and Kδ≤_1 ∧_2 ∧T̅, and δη [ 20 b_0 + a_0 (_a,1 + _a,2)] ≤c_0 / 3× 2^4, 2δ(1-η)/δ K[ 20 b_0 +a_0 (_a,1 + _a,2) ]≤a_0 /3 × 2^6, where _1 = 17^-1∧ [/40] ∧ [ a_0 2^-5/(48 ϵ_e,1 + _b,3(1+10))] , _1 = 4^-117^-1∧C̅_b,1^-1∧C̅_b,2,0^-1/2, _2 =4^-1[2^-5/(1+7)]∧[ 8^-1 a_0 /(_b,2 b_0)]∧ a_0(8c_0)^-1∧ 2. 
it holds * For any i ∈∬1, x,v ∈, _δ^K (x,v) ≤[_K][i](x,v), where _K = (_K,_K,_K,2b_0,_K,_K) _K = a_0 -KδC̅_a,1c_0 - (Kδ)^2 C̅_a,2 a_0 , _K = b_0 (1+KδC̅_b,1 + (Kδ)^2 C̅_b,2) + δ^2 K _b,3, _K = c_0 + 2 K δ a_0 , _K = _0(1 +17 K δ) + _e,1 Kδ + _e,2 Kδ^2 , _K = f_0 + K δC̅_f,1. * In addition for any k ∈{0,…,K}, i ∈∬1, x,v ∈, _δ^k (x,v) ≤[_k][i](x,v), where _k = (_k,_k,_k,2b_0,_k,_k) _k = a_0 + (kδ)^2c_0 /8 + a_0 (k δ)^3/4 +2_a,3δ^2 k , _k = 2b_0 + δ^2 k _b,3, _k = c_0 + 2 k δ a_0 , _k = 2 _0 + _e,1 kδ + _e,2 kδ^2 , _k = f_0 + k δC̅_f,1. * Let i∈{1,…,}, δ≤_1, K ∈, Kδ≤∧_1∧_2 and x,v ∈^d. Let (a_0,b_0,c_0,e_0,f_0) ∈_+^5. By <Ref>, we have starting from _0 = _0 = (a_0,b_0,c_0,2b_0,e_0,f_0), for any k∈, _δ^k+1[_0][i](x,v) ≤_δ[_k][i](x,v) ≤[_k+1][i](x,v) , where _k+1 = (_k+1,_k+1,_k+1,b_0,_k+1,_k+1) are defined by the recursion _k+1 = ã_k(1-δ^2/2) + 10 b̃_k δ^2 - δc̃_k/8 + 48 ϵẽ_k δ + δb̃_k-b_0 _k+1 = 2 δ^2 _k + (1+ϵδ + 17 δ^2)_k + δ(1+7 δ) _k + ϵ_k δ + δb̃_k-b_0 _k+1 = 2 δ_k + _k , _k+1 = _k + 17 δ(_kδ +_k+_k+_k) _k+1 = _k +_k δ^2 + _kδ^2 /4 + δc̃_k . Applying <Ref>, we get that for any k ∈∬0K, _k ≤ a_0 (1- kδ c_0/(16 a_0) - (kδ)^2/2^6) + (20 b_0 + a_0(_a,1 + _a,2)) δ^2 k _k ≤ b_0 (1+kδC̅_b,1 + (kδ)^2 C̅_b,2,0) +_b,3,0 kδ^2 _k - c_0 - 2kδ a_0 ≤ (kδ)^2c_0 /4 + a_0 (k δ)^3/2 + 2 (20 b_0 + a_0(_a,1 + _a,2)) δ^2 k _k ≤_0(1 + 17k δ) + kδ_e,1 + kδ^2 _e,2, _k≤ f_0 + δ k C̅_f,1. Using that [c_k-c_0 - 2 δ k a_0]vx≤ 2^-1c_k-c_0 - 2 δ k a_0[x^2 + v^2], (<ref>) and K δ≤ 3 × 2^-5 completes the proof. * The proof of the second statement follows the same lines and is omitted. Assume that <Ref> holds and that ϵ≤/4 and consider the notations introduced in <Ref>. Then, for any i ∈∬1, = (δ,K,η), δ >0, K ∈, η∈0,1, = (_0,_̱0,_̧0,2b_0,_0,_0)∈_+^5, satisfying δ≤_1, Kδ≤∧T̅_1 ∧_2 with >0, (<ref>)-(<ref>), c_0(η-1)+2 η a_0 δ K = 0 , c_0 ≤ 2 √(a_0b_0), c_0/b_0 ≤η(1-η)/(δ K) , _b,1 -c_0/b_0 ≤ (1-η)/[4δ K ] , δ K [ _b,1 + _b,2] ≤ (1-η)/[4δ K ] , it holds * For any x=(x_1,…,x_),v=(v_1,…,v_),g=(g_1,…,g_) ∈, _δ^K (x,η v + g ) ≤ (1- ρ̅_Kδ)[a_0 x_i^2 + b_0 v_i^2 +c_0x_iv_i + 2 b_0 (x_i)] +f_K +2 η b_Kv_ig_i + c_Kxg + ^2 _̱K g_i^2 + e_K ϵ/∑_j=1^[x_i-x_j^2 + v_i-v_j+g_i-g_j^2] , where b_K ≤ 2b_0 + δ^2K _b,3, e_K ≤ 2 e_0 + (Kδ) _e,1 + Kδ^2 _e,2, ^2 = 1- η^2 ρ̅_ = [(c_0 + Kδ a_0) C̅_a,2 /(4a_0)] ∧ [(1-η)/(4Kδ)] ∧ [ (c_0 + Kδ a_0) C̅_a,2/(4b_0a_0)] . * In addition for any k ∈{1,…,K}, x=(x_1,…,x_),v=(v_1,…,v_),g=(g_1,…,g_) ∈, _δ^k (x,η v + g ) ≤_1 [a_0 x_i^2 + b_0 v_i^2 +c_0x_iv_i + 2 b_0 (x_i)] +f_K +2 η b_Kv_ig_i + c_Kxg + ^2 _̱K g_i^2 + e_K ϵ/∑_j=1^[x_i-x_j^2 + v_i-v_j+g_i-g_j^2] , where _1 = [{a_0 + 2 c_0 + a_0 + ^2c_0 /8 + a_0 ^3/4 +2_a,3}/a_0] ∨ [{2b_0 + 2c_0 + a_0 + _b,3}/b_0] . Note that the condition (<ref>) is in particular satisfied choosing b_0=1, a_0 = (1-η)^2/[2(δ K)^2] and c_0= η (1-η)/(δ K). * Let δ >0, K ∈, η∈0,1, = (_0,_̱0,_̧0,2b_0,_0,_0)∈_+^5, satisfying δ≤_1 ∧, Kδ≤∧T̅_1 ∧_2, (<ref>)-(<ref>)-(<ref>). For any x=(x_1,…,x_),v=(v_1,…,v_),g=(g_1,…,g_) ∈, using <Ref>-<ref>, we get _δ^K (x,η v + g ) ≤ a_K x_i^2 + b_K η^2 v_i^2 + c_Kηx_iv_i + 2 b_0 (x_i) +f_K +2 η b_Kv_ig_i + ^2 b_kg_i^2 + c_Kx_ig_i + ϵ e_K/∑_j=1^ [x_i-x_j^2+ v_i-v_j+g_i-g_j^2]. Note that by (<ref>) and using c_K = c_0 +2 Kδ a_0, η c_K xv = c_0xv. 
Then under <Ref>, since U(0)=0, U(x) ≤x^2, and using (<ref>), b_Kη^2 = (1+C̅_b,1 Kδ + C̅_b,2(Kδ)^2) b_0 (1-(1+η)(1-η))≤ b_0 (1- δ K(1-η)/[2δ K]) , we get setting _a,4 = Kδ_a,2(c_0 + δ K a_0)/a_0, _δ^K (x,η v + g ) ≤ (1-C̅_a,4/2)a_0 x_i^2 + δ K(1-η)v_i^2/[2 K δ] + c_0 x_iv_i + (1-C̅_a,4/(4 b_0))2 b_0 (x_i) +f_K + 2 η b_Kv_ig_i + ^2 b_Kg_i^2 + c_Kx_ig_i + ϵ e_K/∑_j=1^ [x_i-x_j^2+v_i-v_j+g_i-g_j^2] ≤ (1-C̅_a,4/2)a_0 x_i^2 + δ K(1-η)v_i^2/[2 K δ] + c_0(1- ρ̅_Kδ) x_iv_i + ρ̅_Kδ c_0xv + (1-C̅_a,4/(4 b_0))2 b_0 (x_i) +f_K + 2 η b_Kv_ig_i + ^2 b_Kg_i^2 + c_Kx_ig_i + ϵ e_K/∑_j=1^ [x_i-x_j^2+ v_i-v_j+g_i-g_j^2], which completes the proof using the Cauchy-Schwarz inequality and the condition c_0 ≤ 2 √(a_0b_0). * The proof of the second statement follows the same lines using <Ref>-<ref> instead of <Ref>-<ref>. Let a∈_+, ℓ∈ and G be a d-dimensional zero-mean Gaussian random variable with covariance matrix identity. Then, for any x ∈^d, [(a+Gx)^ℓ] ≤ a^ℓ-2ℓ/2a^2+_ℓx^2^ℓ/2, where _ℓ = ^2(ℓ/2)^2 ℓ/2^-1_2ℓ/2^1/ℓ/2, and _ℓ' is the ℓ'-th moment of the zero-mean one-dimensional Gaussian distribution with variance 1. Expanding (a+Gx)^ℓ and using that Gx is a zero-mean Gaussian random variable with variance x^2, we get [(a+Gx)^ℓ] = ∑_k=0^ℓℓk a^ℓ-k[Gx^k] = ∑_k=0^ℓ/2ℓ2kx^2k a^ℓ-2k_2k^2k/2k ≤ a^ℓ-2ℓ/2∑_k=0^ℓ/2ℓ2kx^2k a^2(ℓ/2-k)_2ℓ/2^2k/2ℓ/2, where we apply Jensen inequality for the last step. Using that ℓ2k≤ (ℓ/(2k))^2k≤ (^2 ℓ/2)^k(ℓ/(2k))^k ≤ (^2 (ℓ/2)^2 ℓ/2^-1)^kℓ/2k, we get [(a+Gx)^ℓ] ≤ a^ℓ-2ℓ/2∑_k=0^ℓ/2 (^2 (ℓ/2)^2 ℓ/2^-1)^kℓ/2kx^2k a^2(ℓ/2-k)_2ℓ/2^2k/2ℓ/2 ≤ a^ℓ-2ℓ/2a^2+^2(ℓ/2)^2 ℓ/2^-1x^2 _2ℓ/2^1/ℓ/2^ℓ/2, which completes the proof. Define for any x=(x_1,…,x_),v=(v_1,…,v_)∈ and ℓ∈, (x,v) = ∑_i=1^^ℓ(x,v). Assume that <Ref> holds and that ϵ≤/4. Then, for any = (K,δ,η), K ∈, δ >0, η∈0,1, = (_0,_̱0,_̧0,2b_0,_0,_0)∈_+^5, satisfying δ≤_1∧, Kδ≤∧T̅_1 ∧_2 with , >0, (<ref>)-(<ref>)-(<ref>), it holds: * For any x=(x_1,…,x_),v=(v_1,…,v_)∈ and ℓ∈, _^ℓ(x,v) ≤ (1-ρ̅_Kδ/4)^ℓ[a_0 x_i^2 + b_0v_i^2 + c_0 x_iv_i + 2b_0 (x_i)]^ℓ + 4^ℓℓ/2_ℓ[(a_0 _1 + b_0 (_2+2_1) + c_0 _1^1/2_2^1/2)^ℓ + _ℓ^ℓ] + 12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1[^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ + f_K^ℓ] +12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1ϵ^ℓ e_K^ℓ^-1∑_j=1^x_i-x_j^2 + v_i-v_j^2^ℓ, where _1 = 2 c_K^2 ^2/(a_0^2ρ̅_Kδ), _2 = 32 (η )^2/(ρ̅_Kδ), _ℓ = [4 (η b_K)^2 _2 + 2 (c_K )^2_1]_ℓ. * In addition, if for ℓ∈ ϵ≤ℓ^1/(ℓ-1)( 1-ρ̅_Kδ/4) , 24 e_K ϵ (1+4/(ρ̅_Kδ))^(ℓ-1)/ℓ≤ρ̅_Kδ/8 , for any x=(x_1,…,x_),v=(v_1,…,v_)∈, _(x,v) ≤ (1-ρ̅_Kδ/8)^ℓ∑_i=1^[a_0 x_i^2 + b_0v_i^2 + c_0 x_iv_i + 2b_0 (x_i)]^ℓ + 4^ℓℓ/2_ℓ[(a_0 _1 + b_0 (_2+2_1) + c_0 _1^1/2_2^1/2)^ℓ + _ℓ^ℓ] + 12^ℓ-1 (1+4/(ρ̅_Kδ))^ℓ-1[^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ + f_K^ℓ] . * Finally, for any k ∈{0,…,K}, x=(x_1,…,x_),v=(v_1,…,v_)∈ and ℓ∈, _η_δ^k (x,v) ≤_2^ℓ∑_i=1^ [a_0 x_i^2 + b_0v_i^2 + c_0 x_iv_i + 2b_0 (x_i)]^ℓ + 12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1[^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ + f_K^ℓ] , where _2 = (1+ρ̅_Kδ/4) [_1+ ( [4 (η b_K)^2 b_0^-1 ] ∨ [(c_K )^2 a_0^-1])] + 24(1+4/(ρ̅_Kδ))^(ℓ-1)/ℓϵ e_K . Let δ≤_1∧, Kδ≤∧T̅_1 ∧_2 with , >0, satisfying (<ref>)-(<ref>)-(<ref>). * Consider x,v ∈ and ℓ∈. Using <Ref>-<ref> we have, for any g=(g_1,…,g_) ∈^d, _δ^K ^ℓ(x,η v + g ) ≤ (1+ρ̅_Kδ/4)^ℓ-1{A +2η b_Kv_ig_i + c_K x_ig_i]^ℓ + 12^ℓ-1(1+4/ρ̅_Kδ)^ℓ-1[^2ℓ b_K^ℓg^2ℓ + ϵ^ℓ e_K^ℓ^-1∑_j=1^g_i-g_j^2ℓ +f_K^ℓ] +12^ℓ-1(1+4/ρ̅_Kδ)^ℓ-1ϵ^ℓ e_K^ℓ^-1∑_j=1^[x_i-x_j^2 + v_i-v_j^2^ℓ, where we have set A=(1-ρ̅_Kδ)[a_0 x_i^2 + b_0 v_i^2 + c_0x_iv_i + 2b_0 (x_i)]. 
Then, taking expectation and <Ref> imply that _^ℓ(x,v) ≤ (1+ρ̅_Kδ/4)^ℓ-1A^ℓ-2ℓ/2B^ℓ/2 + 12^ℓ-1(1+4/ρ̅_Kδ)^ℓ-1[^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ+f_K^ℓ] +12^ℓ-1(1+4/ρ̅_Kδ)^ℓ-1ϵ^ℓ e_K^ℓ^-1∑_j=1^[x_i-x_j^2 + v_i-v_j^2]^ℓ. with B= A^2 +[4(η b_K)^2 v_i^2 + (c_K)^2 x_i^2]_n. For x^2 ≥_1 and v^2 ≥_2, we get (1+ρ̅_Kδ/4)^ℓ-1A^ℓ-2n/2B^n/2 ≤ (1+ρ̅_Kδ/4)^ℓA^ℓ-2n/2((1-ρ̅_Kδ/2)[a_0 x_i^2 + b_0 v_i^2 + c_0x_iv_i + 2b_0 (x_i)]^2)^n/2 ≤ (1-ρ̅_Kδ/4)^ℓ[a_0 x_i^2 + b_0 v_i^2 + c_0x_iv_i + 2b_0 (x_i)]^ℓ. Using that (t+s)^ℓ' -t^ℓ'≤ℓ' s(t+s)^ℓ'-1, for s,t ≥ 0 and ℓ' ≥ 1, completes the proof. * The proof of the last statement is direct consequence of the first one since by Young's inequality, we have _(x,v) ≤ (1-ρ̅_Kδ/4)^ℓ∑_i=1^[a_0 x_i^2 + b_0v_i^2 + c_0 x_iv_i + 2b_0 (x_i)]^ℓ + 4^ℓℓ/2_ℓ[(a_0 _1 + b_0 (_2+2_1) + c_0 _1^1/2_2^1/2)^ℓ + _ℓ^ℓ] + 12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1 [^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ + f_K^ℓ] +2 × 24^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1ϵ^ℓ e_K^ℓ∑_i=1^[x_i^2 + v_i^2 ]^ℓ, and using (1-ρ̅_Kδ/4)^ℓ + 2 × 24^ℓ-1[a_0^-1∨ b_0^-1]^ℓ(1+4/(ρ̅_Kδ))^ℓ-1ϵ^ℓ e_K^ℓ≤ (1-ρ̅_Kδ/4)^ℓ + ℓ(1-ρ̅_Kδ/4)^ℓ-1[24e_K ϵ [a_0^-1∨ b_0^-1] (1+4/(ρ̅_Kδ))^(ℓ-1)/ℓ] ≤ (1-ρ̅_Kδ/4+24e_K ϵ [a_0^-1∨ b_0^-1] (1+4/(ρ̅_Kδ))^(ℓ-1)/ℓ)^ℓ by (<ref>). * Following the same lines as in the proof <ref> using <Ref>-<ref> instead of <Ref>-<ref> we have _η_δ^k ^ℓ(x,v) ≤_2,0^ℓ [a_0 x_i^2 + b_0v_i^2 + c_0 x_iv_i + 2b_0 (x_i)]^ℓ + 4^ℓℓ/2_ℓ[(a_0 _1 + b_0 (_2+2_1) + c_0 _1^1/2_2^1/2)^ℓ + _ℓ^ℓ] + 12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1[^2ℓ b_K^ℓ q^ℓ_2ℓ + 8^ℓ-1ϵ^ℓ e_K^ℓq^ℓ_2ℓ + f_K^ℓ] + 12^ℓ-1(1+4/(ρ̅_Kδ))^ℓ-1ϵ^ℓ e_K^ℓ^-1∑_j=1^x_i-x_j^2 + v_i-v_j^2^ℓ, with _2,0 = (1+ρ̅_Kδ/4) [_1+ ( [4 (η b_K)^2 b_0^-1 ] ∨ [(c_K )^2 a_0^-1]) ] . The proof is then easily completed. Taking a_0 = γ_0^2/2 , b_0 = 1 , c_0 = ηγ_0, γ_0 = (1-η)/(δ K) , f_0 = 0 , e_0 = 0 . As mentioned in <Ref>, the first condition in (<ref>) is satisfied for this case. In addition, for this choice of parameter, the constants appearing in <Ref> become: C_a,1 = / (3× 2^5) , C_a,2 = / (3 × 2^6) , _a,3 = 20 + γ^2_0 ( / (3× 2^5) + / (3× 2^6))/2 , C_b,1 = ϵ + ηγ_0(1+7) , _b,2,0 = 4[(1+ϵ) _b,1 +(1+7)(γ_0^2 + _e,1ϵ) ] , _b,3,0 = 17 +γ^2_0 +(2_a,3 + 17 _b,1 + ϵ_e,2 + 20(1+7)) , _b,2 = _b,2,0 + ηγ_0 /8 + a_0 /4 , _b,3 = _b,3,0 + 20 + 2 _a,3, _e,1 = 37[2 + ηγ_0 + (17 ηγ_0 + γ_0^2 + _b,3)] , _e,2 = 37(γ_0^2+_a,3(_a,3 + 2)) , C_f,1 = γ_0^2 /2 +_a,3+ /2 + (ηγ_0 + T̅γ_0^2/2) . Then, with these notations, δ_1,_1,_2 in (<ref>) can be written as δ_1 = 17^-1∧ [/40] ∧ [ γ_0^2 2^-6/(48 ϵ_e,1 + _b,3(1+10))] , T_1 = 4^-117^-1∧_b,1^-1∧_b,2,0^-1/2, T_2 =4^-1[2^-5/(1+7)]∧[ 2^4 γ^2_0 /_b,2]∧γ_0/2^4 ∧ 2. and the condition (<ref>)-(<ref>) are equivalent to ϵ≤ϵ_1 , ϵ_1 = γ^2_0 2^-6/ [1 + 48 _e,1] , and δ≤δ_2 , δ_2 = γ_0 (3× 2^8)^-1/[ 20 +γ_0^2 (_a,1 + _a,2)/2 ] . In addition, b_K ≤b̅, b̅ = 2 + C_b,3, e_K ≤ Kδe̅, e̅ = ( _e,1 + _e,2) . Furthermore, the second line of (<ref>) is equivalent to δ≤ 1/(7× 8) and Kδ≤ T_3 , T_3= 4^-1γ_0 /[(1-η^2)^1/2_b,1 + _b,2] . Then, the proof is an easy consequence of <Ref> with ρ_ given by (<ref>). Condition (<ref>) is equivalent using (<ref>) to ϵ≤ϵ_2 , ϵ_2 = [ℓ^1/(ℓ-1)( 1-ρ̅_Kδ/4)] ∧ [ (8 × 24)^-1ρ̅_/{e̅(1+4/(ρ_))^(ℓ-1)/ℓ}]. 
and (<ref>)-(<ref>) can be written using (<ref>) and 1+η≤ 2, as D_1 = 16 γ_0 ρ_^-1{^2 + ^2 γ_0^2 }, D_2 = 32 × 2 ^2 γ_0/ρ_, E_ℓ = δ K F_ℓ, F_ℓ = 8 γ_0 [^2 (2+ C_b,3) D_2 + (γ_0^2 ^2 + γ_0^4 ^2)D_1] _ℓ, C_2 = (1+ρ_/4) [C_1+ ( [8 γ_0 ^2 (2+ C_b,3) ] ∨ [16 γ_0 ( ^2 + γ_0^2 ^2) ])] + 24 (C_e,1 + C_e,2)(4+ ρ_)^(ℓ-1)/ℓ, C_1 = [2 {(1+ + ^3/4)γ_0^2/2 + 2(1+^2/8) γ_0+2C_a,3}/γ_0^2] ∨ [2+2γ_0 + γ_0^2/2+ C_b,3] , where _ℓ is defined in (<ref>). It completes the proof of (<ref>) with C_1,ℓ = C_2 , C_2,ℓ = 4^ℓℓ/2 F_ℓ[(a_0 D_1 + b_0 (D_2+2D_1) + c_0 D_1^1/2D_2^1/2)^ℓ + ^ℓF_ℓ^ℓ] , C_3,ℓ = 2× 12^ℓ-1ρ_^1-ℓ(4+ρ_)^ℓ-1[4^ℓγ_0^ℓ(2+ C_b,3)_2ℓ + 8^ℓ-1ϵ^ℓ(C_e,1+ C_e,2)^ℓ_2ℓ+ C_f,1] , where _ℓ' is the ℓ'-th moment of the zero-mean one-dimensional Gaussian distribution with variance 1.
http://arxiv.org/abs/2306.11245v1
20230620025651
Quantum simulation of Hofstadter butterfly with synthetic gauge fields on two-dimensional superconducting-qubit lattices
[ "Wei Feng", "Dexi Shao", "Guo-Qiang Zhang", "Qi-Ping Su", "Jun-Xiang Zhang", "Chui-Ping Yang" ]
quant-ph
[ "quant-ph" ]
School of Physics, Hangzhou Normal University, Hangzhou 311121, China School of Physics, Hangzhou Normal University, Hangzhou 311121, China School of Physics, Hangzhou Normal University, Hangzhou 311121, China School of Physics, Hangzhou Normal University, Hangzhou 311121, China Corresponding author: [email protected] School of Physics, Zhejiang University, Hangzhou 310027, China Corresponding author: [email protected] School of Physics, Hangzhou Normal University, Hangzhou 311121, China Motivated by recent realizations of two-dimensional (2D) superconducting-qubit lattices, we propose a protocol to simulate Hofstadter butterfly with synthetic gauge fields in superconducting circuits. Based on the existing 2D superconducting-qubit lattices, we construct a generalized Hofstadter model on zigzag lattices, which has a fractal energy spectrum similar to the original Hofstadter butterfly. By periodically modulating the resonant frequencies of qubits, we engineer a synthetic gauge field to mimic the generalized Hofstadter Hamiltonian. A spectroscopic method is used to demonstrate the Hofstadter butterfly from the time evolutions of experimental observables. We numerically simulate the dynamics of the system with realistic parameters, and the results show a butterfly spectrum clearly. Our proposal provides a promising way to realize the Hofstadter butterfly on the latest 2D superconducting-qubit lattices and will stimulate the quantum simulation of novel properties induced by magnetic fields in superconducting circuits. Quantum simulation of Hofstadter butterfly with synthetic gauge fields on two-dimensional superconducting-qubit lattices Chui-Ping Yang July 31, 2023 ======================================================================================================================== § INTRODUCTION Quantum simulation is promising to efficiently solve many-body problems of complex quantum systems <cit.>. In general, there are two strategies in quantum simulation: one is digital quantum simulation based on quantum logic gates, and the other is analog quantum simulation which directly mimics the target Hamiltonian with controllable quantum systems <cit.>. Compared with digital quantum simulation, analog quantum simulation is not universal, but it has higher tolerance level of errors <cit.>. For many problems that require only qualitative but not quantitative results, analog quantum simulations can give meaningful results even if they are affected by quantum decoherence. Experimental demonstrations of analog quantum simulation have already been performed on a number of platforms, such as optical lattices, trapped ions, and so on <cit.>. It is expected that special-purpose analog quantum simulators for some practical tasks will be realized in the near future. Hofstadter butterfly is a famous fractal energy spectrum that was first predicted in theory by Hofstadter in 1976 <cit.>. The original Hofstadter butterfly was found in a system of electrons on a two-dimensional (2D) square lattice exposed to a perpendicular magnetic field. In general, the experimental observation requires that the characteristic length of the magnetic field is comparable to the lattice constant. The estimated strength of the magnetic field required to observe the Hofstadter butterfly for conventional solid-state crystals is on the order of 10^4T, which is too high to be experimentally realized. 
One way to overcome this problem is to use graphene superlattices with larger lattice constant, which can reduce the required strength of magnetic fields <cit.>. On the other hand, the Hofstadter butterfly can be demonstrated by analog quantum simulation with synthetic gauge fields in artificial systems. When a charged particle moves on a closed loop under a magnetic field, its wave function will obtain a geometric phase related to the magnetic flux through the loop. Synthesizing a gauge field can be equated to trying to make a neutral particle acquire a non-trivial phase after one revolution around a closed loop. The synthesis of gauge fields in artificial quantum systems is of great importance to simulate magnetic effects including the Hofstadter butterfly. In recent years, the implementation of synthetic gauge fields in artificial controllable systems (e.g., cold atoms <cit.>, cavity QEDs <cit.>, superconducting circuits <cit.>) and related quantum simulations have become a hot research area. Simultaneously, many protocols have been proposed to mimic the Hofstadter model with synthetic gauge fields in platforms such as optical lattices with ultracold atoms <cit.>, trapped ions <cit.>, and polaritonic systems <cit.>. In experiments <cit.>, researchers provided evidence that they have engineered the Hofstadter Hamiltonian, but the fractal energy spectrum could not be measured directly. Superconducting circuits provide an excellent platform to perform quantum simulations due to their flexibility and scalability <cit.>. Recently, significant technological breakthroughs have been made in superconducting circuits. In particular, the experimental realization of 2D architectures, such as square <cit.>, zigzag <cit.>, ladder <cit.>, and hyperbolic <cit.> lattices, makes it possible to simulate 2D lattice models using superconducting circuits. For the simulation of Hofstadter butterfly on 2D superconducting-qubit lattices, a qubit flip interaction is analog to the hopping of an electron. The key problem is how to create the synthetic gauge fields. One method for engineering the synthetic gauge field in superconducting circuits is based on the periodic modulation of interaction strengths <cit.>, also named as parametric process, which however can not be applied on the recent 2D superconducting-qubit lattices <cit.>. By contrast, another method is based on the periodic modulation of on-site energies, which is easy to be realized for superconducting transmon qubits and has been adopted in the protocols of generating synthetic gauge fields on ladder lattices <cit.>. Although a reduced 1D Harper Hamiltonian <cit.> has been simulated using a chain of superconducting qubits <cit.>, the simulation of 2D Hofstadter Hamiltonian in superconducting circuits is still lacking. In this paper, motivated by experimental realizations of 2D lattices in superconducting qubit circuits <cit.>, we theoretical study the Hofstadter butterfly with synthetic gauge fields on 2D superconducting-qubit lattices. It is difficult to realize a uniform gauge field on the existing square lattices of superconducting qubits. Therefore, we generalize the original Hofstadter model from square lattices to zigzag lattices, and we find a fractal energy spectrum similar to the original Hofstadter butterfly. To mimic the generalized Hofstadter Hamiltonian, we design a modulation scheme based on the zigzag superconducting-qubit lattices previously reported. 
By modulating the qubit frequencies with appropriate initial phases and amplitudes, we engineer an artificial magnetic field and realize the Hofstadter Hamiltonian on zigzag superconducting-qubit lattices. Further, we resolve the fractal energy spectrum from time evolutions of experimental observables. We also numerically simulate the evolution process based on the original time-dependent Hamiltonian of the superconducting-qubit lattice. With a group of realistic parameters, a simulated spectrum resembles a butterfly obviously, which confirms our scheme is feasible with existing superconducting-qubit lattices. This paper is organized as follows: In Sec. <ref>, we introduce the Hofstadter model on zigzag lattices and show its fractal energy spectrum. In Sec. <ref>, we present the modulation scheme for generating artificial gauge fields and mimicking the Hofstadter Hamiltonian on zigzag superconducting-qubit lattices. In Sec. <ref>, we give a method to detect the fractal energy spectrum from time evolutions of experimental observables and discuss the experimental feasibility with numerical simulations. Finally, we make a conclusion in Sec. <ref>. § MODEL We consider a zigzag spin lattice, which is experimentally achievable in superconducting circuits <cit.>. As shown in Fig. <ref>, each spin is coupled to four nearest neighbors and the coupling coefficients are complex numbers with spatially varying phases. The system can be described by a Hofstadter-like Hamiltonian (assuming ħ=1) H=J∑_n[e^inϕσ_n^+σ_n+2^-+e^-i(n+1)ϕσ_n^+σ_n+1^-]+H.c., where σ_n^+(σ_n^-) is the raising (lowering) operator of the nth spin, and J represents the coupling strength between spins. This Hamiltonian commutes with the total excitation ∑_nσ_n^+σ_n^- and the system is energy-conserving. In the single-excitation manifold, i.e., only one spin is up and all others are down, this spin-flip model maps onto a free hopping model of a charged particle in a lattice under magnetic fields. For a triangle formed by three spins n, n+1, and n+2, the associated magnetic flux (3n+3)ϕ is spatially dependent. However, the flux through rhombic unit cells, as shown in Fig. <ref>, is a constant: Φ=3ϕ. When Φ/2π is rational, i.e., Φ/2π=p/q with p and q two coprime integers, the Hamiltonian is of period q. In the momentum space, the dispersion E(k) has q bands with the wave vector k restricted to a magnetic Brillouin zone [-π/q,π/q]. By Fourier transform σ_n^+=∑_k,ve^-in(k+vϕ)σ_k,v^+, with v the band index, the Hamiltonian is transformed to H=∑_kH_k, where H_k=J∑_v[e^i(k+vϕ)+e^-i2(k+vϕ)]σ_k,v^+σ_k,v+1^-+H.c.. In contrast to the cosine modulation of the on-site energies in the original Harper Hamiltonian <cit.> H_Harper=J∑_nσ_n^+σ_n+1^-+σ_n+1^+σ_n^-+2cos(nΦ)σ_n^+σ_n^-, the coupling strengths between the neighbouring bands in H_k periodically vary with the magnetic flux Φ as |e^i(k+vϕ)+e^-i2(k+vϕ)|=√(2+2cos(3k+vΦ)). We solve the model by diagonalizing a N× N matrix with N the number of the spins. The eigenenergy spectrum versus the magnetic flux is plotted in Fig. <ref>(a). For comparison, an original Hofstadter butterfly obtained by diagonalizing H_Harper in Eq. (<ref>) is plotted in Fig. <ref>(b). One can see that the fractal energy spectrum of the zigzag lattice is very similar to the original Hofstadter butterfly, despite small difference at, e.g., Φ=0. The small difference of the energy spectra can be clearly explained through the Hamiltonian matrices of H_k and H_Harper. 
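A minimal NumPy sketch of this diagonalization is given below (not the authors' code; N = 14, J = 1, and the flux grid are illustrative choices). It builds the single-excitation hopping matrix of the zigzag model, sweeps the flux to produce the butterfly, and also prints the Φ = 0 matrix traces discussed next.

```python
import numpy as np

def zigzag_hofstadter(N, phi, J=1.0):
    # single-excitation block of the zigzag Hofstadter Hamiltonian above:
    # <n|H|n+1> = J e^{-i(n+1)phi}, <n|H|n+2> = J e^{i n phi}, sites n = 1..N at 0-based index n-1
    H = np.zeros((N, N), dtype=complex)
    for m in range(N):                       # m = n - 1
        if m + 1 < N:
            H[m, m + 1] = J * np.exp(-1j * (m + 2) * phi)
        if m + 2 < N:
            H[m, m + 2] = J * np.exp(1j * (m + 1) * phi)
    return H + H.conj().T

def harper(N, Phi, J=1.0):
    # original Harper Hamiltonian above: hopping J and on-site energies 2 J cos(n Phi)
    H = np.diag(2.0 * J * np.cos(Phi * np.arange(1, N + 1)))
    return H + np.diag(np.full(N - 1, J), 1) + np.diag(np.full(N - 1, J), -1)

N = 14
fluxes = np.linspace(0.0, 1.0, 121)          # Phi / (2 pi)
butterfly = np.array([np.linalg.eigvalsh(zigzag_hofstadter(N, 2 * np.pi * f / 3))
                      for f in fluxes])      # phi = Phi / 3, the flux per rhombic cell being 3 phi

# matrix traces at Phi = 0 (cf. the discussion below): about 2NJ for Harper, 0 for the zigzag model
print(np.trace(harper(N, 0.0)), np.trace(zigzag_hofstadter(N, 0.0)).real)
```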
When Φ=0, the matrix trace of H_Harper is 2N (in units of J) while the matrix trace of H_k is zero, which is confirmed by eigenenergy data shown in Fig. <ref>. § SYNTHETIC GAUGE FIELD The coupling strength between superconducting qubits is usually a real number due to time-reversal symmetry. We can realize the effective complex-valued coupling in Eq. (<ref>) by modulating resonant frequencies of qubits. The original Hamiltonian of N frequency-tunable qubits arranged in a zigzag lattice is <cit.> H_o=∑_nω_n(t)σ_n^+σ_n^-+g∑_n(σ_n+1^+σ_n^-+σ_n+2^+σ_n^-)+H.c., where σ_n^+(σ_n^-) denotes the raising (lowering) operator of the nth qubit Q_n, and g is the original real-valued coupling strength between qubits. The resonant frequency of Q_n is periodically modulated according to ω_n(t)=ω̅_n+ε_ncos(ν_nt+θ_n), where ω̅_n is the central frequency, and ε_n, ν_n, and θ_n are the modulation amplitude, frequency, and initial phase, respectively. Under the modulation, each qubit generate a bunch of equally spaced sidebands with a spacing equalling to the modulation frequency ν_n <cit.>. We set ν_1=Δ_13, ν_2=Δ_12, ν_3=Δ_23 and repeat the settings ω̅_n=ω̅_j and ν_n=ν_j for n≡ j (mod 3) with j=1,2,3 (see Fig. <ref>), where Δ_mn=ω̅_m-ω̅_n. In these settings, sideband transitions with tunable qubit-qubit interactions can be implemented. Under the condition of large detuning |Δ_mn|≫ g, high-frequency oscillating terms can be neglected and the effective Hamiltonian in the interaction picture can be written as <cit.> H_eff=∑_nJ_n,n+1σ_n+1^+σ_n^-+J_n+2,nσ_n^+σ_n+2^-+H.c., where the effective coupling strengths J_n,n+1 and J_n+2,n are given by J_n,n+1={[ g𝒥_0(α_n)𝒥_1(α_n+1)e^iθ_n+1,n1,2(3),; g𝒥_0(α_n)𝒥_-1(α_n+1)e^-iθ_n+1, n≡3(mod3), ]. J_n+2,n={[ g𝒥_0(α_n+2)𝒥_-1(α_n)e^-iθ_n, n≡1(mod3),; g𝒥_0(α_n+2)𝒥_1(α_n)e^iθ_n, n≡2,3(mod3), ]. with 𝒥_m(α_n) being the mth-order Bessel function of the first kind. Both magnitudes and phases of the above effective coupling strengths can be conveniently tuned by changing α_n=ε_n/ν_n and θ_n. If we set θ_n={[ π-nϕ, n≡1(mod3),; nϕ, n≡2,3(mod3), ]. and all α_n to be the same value, the effective Hamiltonian H_eff becomes the Hofstadter-like Hamiltonian in Eq. (<ref>). In summary, based on the original Hamiltonian of the superconducting-qubit system in Eq. (<ref>), we periodically modulate the resonant frequency of each qubit according to Eq. (<ref>). By finely setting the modulation parameters ω̅_n, ε_n, ν_n, and θ_n, we approximatively obtain the Hofstadter-like Hamiltonian in Eq. (<ref>) under the large-detuning condition |Δ_mn|≫ g. The parameter setting shown above is specially designed for synthesizing the uniform gauge field in the Hofstadter-like Hamiltonian. Other kinds of synthetic gauge fields can be engineered by changing the parameter setting. § OBSERVING FRACTAL ENERGY SPECTRUM The energy spectrum of an interacting many-body system can be revealed by probing the system's dynamical responses to perturbations <cit.>. If the eigenenergies { E_j} and the corresponding eigenstates {|ψ_j⟩} of a fixed Hamiltonian are given, the state time evolution of the system can be expressed as |Ψ(t)⟩ =∑_jc_je^-iE_jt|ψ_j⟩ , where c_j=⟨ψ_j|.Ψ(0)⟩, and |Ψ(0)⟩ is the initial state of the system. In turn, if the time evolution is known, its Fourier transform can in principle reveal the eigenenergies. To realize this scheme in reality, we need to choose a group of appropriate initial states and observables. 
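As a consistency check of the parameter setting in the previous section, the following sketch (illustrative values of g, α, and ϕ; the Bessel functions 𝒥_m are taken from scipy.special.jv) verifies that the θ_n pattern reproduces the complex couplings of the zigzag Hofstadter Hamiltonian with J = g𝒥_0(α)𝒥_1(α).

```python
import numpy as np
from scipy.special import jv                     # Bessel functions of the first kind

g, alpha, phi = 1.0, 1.0, 2 * np.pi * 0.1 / 3    # illustrative values; alpha_n = eps_n / nu_n
J = g * jv(0, alpha) * jv(1, alpha)              # common magnitude of the effective couplings

def theta(n):                                    # initial modulation phases theta_n, n = 1, 2, 3, ...
    return np.pi - n * phi if n % 3 == 1 else n * phi

def J_nn1(n):                                    # effective coupling J_{n,n+1}
    if n % 3 in (1, 2):
        return g * jv(0, alpha) * jv(1, alpha) * np.exp(1j * theta(n + 1))
    return g * jv(0, alpha) * jv(-1, alpha) * np.exp(-1j * theta(n + 1))

def J_n2n(n):                                    # effective coupling J_{n+2,n}
    if n % 3 == 1:
        return g * jv(0, alpha) * jv(-1, alpha) * np.exp(-1j * theta(n))
    return g * jv(0, alpha) * jv(1, alpha) * np.exp(1j * theta(n))

for n in range(1, 13):
    # J_{n,n+1} is the conjugate of the sigma_n^+ sigma_{n+1}^- coefficient J e^{-i(n+1)phi}
    assert np.isclose(J_nn1(n), J * np.exp(1j * (n + 1) * phi))
    # J_{n+2,n} matches the sigma_n^+ sigma_{n+2}^- coefficient J e^{i n phi}
    assert np.isclose(J_n2n(n), J * np.exp(1j * n * phi))
```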
For an operator q̂=∑_j,j^'q_j,j^'|ψ_j⟩⟨ψ_j^'|, its expectation value is ⟨q̂⟩ =⟨Ψ(t)|q̂|Ψ(t)⟩ =∑_j,j^'q_j,j^'c_jc_j^'^∗e^-i(E_j-E_j^')t, which indicates that the Fourier transforms of observable results can only reveal eigenenergy differences E_j-E_j^'. In our qubit model, the Hofstadter Butterfly is presented in the single-excitation subspace {|ψ_n⟩≡|000…1_n…00⟩}, and the associated single-excitation eigenenergies E_j for j=1,2,…,N can be calibrated by selecting the ground state (|ψ_0⟩≡|0⟩ ^⊗ N) energy E_0≡0 as a reference. To do that, the state of the qubit system should have overlaps (c_j, c_j^'^∗) both with the ground state |ψ_0⟩ and single-excitation states |ψ_j⟩, and the selected operator should have corresponding matrix element q_j,0. Therefore, a suitable initial state should be in superposition of |ψ_0⟩ and |ψ_j⟩, and a suitable operator should relate to |ψ_j⟩⟨ψ_0|. Next, we verify the above scheme by numerical simulations based on the original time-dependent Hamiltonian H_o given in Eq. (<ref>). Initially, we prepare the nth qubit in the state (|0⟩ +|1⟩)/√(2) and all other qubits in the state |0⟩, i.e., the system is in the state (|ψ_0⟩ +|ψ_n⟩)/√(2). We simulate the time evolution of the state of the system under the periodical modulation described in Sec. <ref> by considering the qubit dissipation. We also calculate the expectation value of the operator σ_n^-. Although σ_n^- is non-Hermitian and cannot be measured directly, it can be inferred from observables σ_n^x and σ_n^y as ⟨σ_n^x⟩ +i⟨σ_n^y⟩ =2⟨σ_n^-⟩. Typical time evolutions of ⟨σ_n^x⟩ and ⟨σ_n^y⟩ in the simulation are shown in Fig. <ref>(a), and the result of Fourier transform of the corresponding expectation value ⟨σ_n^-⟩ is shown in Fig. <ref>(b). For the simulated system containing N qubits, there should be N single-excitation eigenenergies. From the blue curve in Fig. <ref>(b), we can see part of eigenenergies in the time-domain spectrum. To resolve all the eigenenergies, we vary n from 1 to N, which makes the initial states form a complete basis. Then every eigenstate certainly has some overlap with one of the initial states and hence its corresponding eigenenergy can be detected. To obtain the fractal spectrum versus the magnetic flux, we vary Φ/2π from 0 to 1 with 120 different values. For each Φ/2π value, we place the nth qubit in the state (|0⟩ +|1⟩)/√(2), simulate the evolution of ⟨σ_n^-⟩, and vary n from 1 to N. Using the spectroscopy shown above, we obtain a fractal energy spectrum of the Hofstadter butterfly shown in Fig. <ref>(a), where the qubit number is N=14. Despite there are only 14 sites in the simulated lattice, the overall energy spectrum still has a butterfly-like appearance. To verify the accuracy of the energy spectrum based on Fourier transforms, we diagonalize the 14×14 matrix of the Hamiltonian in Eq. (<ref>), and show the theoretical results in Fig. <ref>(b). The similarity between these two energy spectra indicates that our scheme for simulating Hofstadter butterfly with synthetic gauge fields is effective. Note that there is small difference between Figs. <ref>(a) and <ref>(b), especially in the smallest eigenenergy when Φ=0. The tiny deviations of the Fourier transform spectrum from the theoretical results arise mainly from the large-detuning approximation in obtaining the effective Hamiltonian in Eq. (<ref>). The above numerical simulations, which use the original time-dependent Hamiltonian and consider the qubit dissipation, are performed based on QuTiP <cit.>. 
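A simplified, dissipation-free sketch of this spectroscopy is given below, in plain NumPy rather than the full time-dependent QuTiP simulation; N, J, Φ, and the time grid are assumed values. Since the effective dynamics conserves the excitation number and the ground-state energy is taken as E_0 = 0, the initial state (|ψ_0⟩+|ψ_n⟩)/√2 gives ⟨σ_n^-⟩(t) = (1/2)Σ_j |⟨ψ_n|ψ_j⟩|² e^{-iE_j t}, so the Fourier transform of the site-summed signal shows a peak at every single-excitation eigenenergy.

```python
import numpy as np

N, J = 14, 2 * np.pi * 1.0e6                  # J/(2 pi) = 1 MHz, an assumed coupling scale
Phi = 2 * np.pi * 0.25                        # example flux per rhombic cell
phi = Phi / 3.0
H = np.zeros((N, N), dtype=complex)           # single-excitation block of the effective Hamiltonian
for m in range(N):
    if m + 1 < N:
        H[m, m + 1] = J * np.exp(-1j * (m + 2) * phi)
    if m + 2 < N:
        H[m, m + 2] = J * np.exp(1j * (m + 1) * phi)
H = H + H.conj().T
E, V = np.linalg.eigh(H)                      # single-excitation eigenenergies and eigenstates

dt = 2e-9
times = np.arange(0.0, 10e-6, dt)
signal = np.zeros(len(times), dtype=complex)
for n in range(N):                            # one run per initial state (|psi_0> + |psi_n>)/sqrt(2)
    weights = np.abs(V[n, :]) ** 2            # |<psi_n|psi_j>|^2
    signal += 0.5 * (weights[None, :] * np.exp(-1j * np.outer(times, E))).sum(axis=1)

freqs = 2 * np.pi * np.fft.fftfreq(len(times), d=dt)
power = np.abs(np.fft.fft(signal))
# with NumPy's FFT sign convention, a component e^{-i E_j t} appears as a peak at angular frequency -E_j
```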
Because a Lindblad-master-equation simulation of the 14-qubit system exceeds our computer's memory capacity, we use a Monte Carlo approach based on the state vector instead of the density matrix. In the simulations, we adopt experimentally achievable parameters of zigzag superconducting-qubit lattices <cit.>. In addition, similar frequency modulations have already been used in experiments <cit.>. Therefore, our scheme is experimentally realizable with existing superconducting-qubit lattices.

§ CONCLUSION

In summary, we propose an experimentally feasible scheme to demonstrate fractal energy spectra on 2D superconducting-qubit lattices with synthetic gauge fields. A generalized Hofstadter model on zigzag lattices is studied by employing superconducting-qubit lattices previously realized in experiments. The model exhibits a fractal energy spectrum similar to the original Hofstadter butterfly. We design a qubit-frequency modulation scheme to generate synthetic gauge fields and mimic the generalized Hofstadter Hamiltonian on the zigzag superconducting-qubit lattices. We present a method to detect the fractal energy spectrum from time evolutions of experimental observables. With experimentally feasible parameters, our simulation results clearly demonstrate a Hofstadter butterfly. The proposal provides a promising way to study Hofstadter problems on the latest 2D superconducting-qubit lattices. Our work should stimulate the quantum simulation of novel properties induced by magnetic fields in superconducting circuits. This work is supported by the National Natural Science Foundation of China (Grants No. 12204139, No. U20A2076, No. 12204138, No. 12205069, No. 11774076, and No. U21A20436) and the Key-Area Research and Development Program of GuangDong province (Grant No. 2018B030326001). FeynmanR. P. Feynman, Simulating physics with computers, Int. J. Theor. Phys. 21, 467 (1982). RMP2014I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Rev. Mod. Phys. 86, 153 (2014). Buluta2009I. Buluta and F. Nori, Quantum simulators, Science 326, 108 (2009). Somaroo1999S. Somaroo, C. H. Tseng, T. F. Havel, R. Laflamme, and D. G. Cory, Quantum simulations on a quantum computer, Phys. Rev. Lett. 82, 5381 (1999). Simon2011J. Simon, W. S. Bakr, R. Ma, et al., Quantum simulation of antiferromagnetic spin chains in an optical lattice, Nature 472, 307 (2011). Bloch2012I. Bloch, J. Dalibard, S. Nascimbene, Quantum simulations with ultracold quantum gases, Nat. Phys. 8, 267 (2012). kim2010K. Kim, M. S. Chang, S. Korenblit, et al., Quantum simulation of frustrated Ising spins with trapped ions, Nature 465, 590 (2010). Hofstadter1976D. R. Hofstadter, Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields, Phys. Rev. B 14, 2239 (1976). Dean2013C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, T. Taniguchi, K. Watanabe, K. L. Shepard, J. Hone, and P. Kim, Hofstadter’s butterfly and the fractal quantum Hall effect in moire superlattices, Nature (London) 497, 598 (2013). Ponomarenko2013L. A. Ponomarenko, R. V. Gorbachev, G. L. Yu, D. C. Elias, R. Jalil, A. A. Patel, A. Mishchenko, A. S. Mayorov, C. R. Woods, J. R. Wallbank, M. Mucha-Kruczynski, B. A. Piot, M. Potemski, I. V. Grigorieva, K. S. Novoselov, F. Guinea, V. I. Fal’ko, and A. K. Geim, Cloning of Dirac fermions in graphene superlattices, Nature (London) 497, 594 (2013). Hunt2013B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M.
Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, and R. C. Ashoori, Massive Dirac Fermions and Hofstadter Butterfly in a van der Waals Heterostructure, Science 340, 1427 (2013). Wu2021Q. S. Wu, J. Liu, Y. Guan, and O. V. Yazyev, Landau Levels as a Probe for Band Topology in Graphene Moiré Superlattices, Phys. Rev. Lett. 126, 056401 (2021). Rozhkov2016A. V. Rozhkov, A. O. Sboychakov, A. L. Rakhmanov, and F. Nori, Electronic properties of graphene-based bilayer systems, Phys. Rep. 648, 1 (2016). Dalibard2011J. Dalibard, F. Gerbier, G. Juzeliunas, and P. Öhberg, Colloquium: Artificial gauge potentials for neutral atoms, Rev. Mod. Phys. 83, 1523 (2011). Galitaki2013V. Galitaki, I. B. Spielman, Spin-orbit coupling in quantum gases, Nature 494, 49 (2013). Gerbier2010F. Gerbier and J. Dalibard, Gauge fields for ultracold atoms in optical superlattices, New J. Phys. 12, 033007 (2010). Cho2008J. Cho, D. G. Angelakis, and S. Bose, Fractional quantum hall state in coupled cavities, Phys. Rev. Lett. 101, 246809 (2008). Carusotto2011R. O. Umucalılar and I. Carusotto, Artificial gauge field for photons in coupled cavity arrays, Phys. Rev. A 84, 043804 (2011). Roushan2017P. Roushan, C. Neill, A. Megrant, Y. Chen, R. Babbush, R. Barends, B. Campbell, Z. Chen, B. Chiaro, A. Dunsworth, A. Fowler, E. Jeffrey, J. Kelly, E. Lucero, J. Mutus, P. J. J. O'Malley, M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. White, E. Kapit, H. Neven, and J. Martinis, Chiral ground-state currents of interacting photons in a synthetic magnetic field, Nat. Phys. 13, 146 (2017). Girvin2010J. Koch, A. A. Houck, K. L. Hur, and S. M. Girvin, Time-reversal-symmetry breaking in circuit-QED-based photon lattices, Phys. Rev. A 82, 043811 (2010). Girvin2011A. Nunnenkamp, J. Koch, and S. M. Girvin, Synthetic gauge fields and homodyne transmission in Jaynes-Cummings lattices, New J. Phys. 13, 095008 (2011). Zoller2013D. Marcos, P. Rabl, E. Rico, and P. Zoller, Superconducting Circuits for Quantum Simulation of Dynamical Gauge Fields, Phys. Rev. Lett. 111, 110504 (2013). HuY2016Y.-P. Wang, W.-L. Yang, Y. Hu, Z.-Y. Xue, and Y. Wu, Detecting topological phases of microwave photons in a circuit quantum electrodynamics lattice, Npj Quantum Information 2, 16015 (2016). Hu2016Z.-H. Yang, Y.-P. Wang, Z.-Y. Xue, W.-L. Yang, Y. Hu, J.-H. Gao, and Y. Wu, Circuit quantum electrodynamics simulator of flat band physics in a Lieb lattice, Phys. Rev. A 93, 062319 (2016). Alaeian2019H. Alaeian, Chung Wai Sandbo Chang, M. V. Moghaddam, C. M. Wilson, E. Solano, and E. Rico, Creating lattice gauge potentials in circuit QED: The bosonic Creutz ladder, Phys. Rev. A 99, 053834 (2019). Zhao2020Y.-J. Zhao, X.-W. Xu, H. Wang, Y.-X. Liu, and W.-M. Liu, Vortex-Meissner phase transition induced by a two-tone-drive-engineered artificial gauge potential in the fermionic ladder constructed by superconducting qubit circuits, Phys. Rev. A 102, 053722 (2020). Guan2020X. Guan, Y.-L. Feng, Z.-Y. Xue, G. Chen, and S.T. Jia, Synthetic gauge field and chiral physics on two-leg superconducting circuits, Phys. Rev. A 102, 032610 (2020). Zoller2003D. Jaksch and P. Zoller, Creation of effective magnetic fields in optical lattices: the Hofstadter butterfly for cold neutral atoms, New J. Phys. 5, 56 (2003). Grab2015T. Graß, C. Muschik, A. Celi, R. W. Chhajlany, and M. Lewenstein, Synthetic magnetic fluxes and topological order in one-dimensional spin systems, Phys. Rev. A 91, 063612 (2015). Banerjee2018R. Banerjee, T. C. H. 
Liew, and O. Kyriienko, Realization of Hofstadter’s butterfly and a one-way edge mode in a polaritonic system, Phys. Rev. B 98, 075412 (2018). Aidelsburger2013M. Aidelsburger, M. Atala, M. Lohse, J. T. Barreiro, B. Paredes, and I. Bloch, Realization of the Hofstadter Hamiltonian with Ultracold Atoms in Optical Lattices, Phys. Rev. Lett. 111, 185301 (2013). Miyake2013H. Miyake, G. A. Siviloglou, C. J. Kennedy, W. C. Burton, and W. Ketterle, Realizing the Harper Hamiltonian with Laser-Assisted Tunneling in Optical Lattices, Phys. Rev. Lett. 111, 185302 (2013). CQED-ReviewJ. Q. You and F. Nori, Atomic physics and quantum optics using superconducting circuits, Nature (London) 474, 589 (2011). Gu2017X. Gu, A. F. Kockumb, A. Miranowicz, Y.-X. Liu, and F. Nori, Microwave photonics with superconducting quantum circuits, Phys. Rep. 718-719, 1 (2017). CQED-review2J. Clarke and F. K.Wilhelm, Superconducting quantum bits, Nature (London) 453, 1031 (2008). Malley2016P. J. J. O'Malley, R. Babbush, I. D. Kivlichan, J. Romero, J. R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter, N. Ding, et al., Scalable Quantum Simulation of Molecular Energies, Phys. Rev. X 6, 031007 (2016). Xu2018K. Xu, J.-J. Chen, Y. Zeng, Y.-R. Zhang, C. Song, W. Liu, Q. Guo, P. Zhang, D. Xu, H. Deng, K. Huang, H. Wang, X. Zhu, D. Zheng, and H. Fan, Emulating Many-Body Localization with a Superconducting Quantum Processor, Phys. Rev. Lett. 120, 050507 (2018). Feng2022W. Feng, G.-Q. Zhang, Q.-P. Su, J.-X. Zhang, and C.-P. Yang, Generation of Greenberger-Horne-Zeilinger States on Two-Dimensional Superconducting-Qubit Lattices via Parallel Multiqubit-Gate Operations, Phys. Rev. Applied 18, 064036 (2022). Su2022Q.-P. Su, Y. Zhang, L. Bin, and C.-P. Yang, Efficient scheme for realizing a multiplex-controlled phase gate with photonic qubits in circuit quantum electrodynamics, Front. Phys. 17, 53505 (2022). LiuTong2022T. Liu, B.-Q. Guo, Y.-H. Zhou, J.-L. Zhao, Y.-L. Fang, Q.-C. Wu, and C.-P. Yang, Transfer of quantum entangled states between superconducting qubits and microwave field qubits, Front. Phys. 17(6), 61502 (2022) Su2021Q.-P. Su, H. Zhang, and C.-P. Yang, Transferring quantum entangled states between multiple single-photon-state qubits and coherent-state qubits in circuit QED, Front. Phys. 16(6), 61501 (2021) LiuTong2020T. Liu, Z. F. Zheng, Y. Zhang, Y. L. Fang, and C. P. Yang, Transferring entangled states of photonic cat-state qubits in circuit QED, Front. Phys. 15(2), 21603 (2020) lattice1F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, et al., Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019). lattice2M. Gong, S. Wang, C. Zha, M.-C. Chen, H.-L. Huang, Y. Wu, Q. Zhu, Y. Zhao, S. Li, S. Guo, et al., Quantum walks on a programmable two-dimensional 62-qubit superconducting processor, Science 372, 948 (2021). lattice3Y. Wu, et al., Strong Quantum Computational Advantage Using a Superconducting Quantum Processor, Phys. Rev. Lett. 127, 180501 (2021). lattice4X. Zhang, W. Jiang, J. Deng, K. Wang, J. Chen, P. Zhang, W. Ren, H. Dong, S. Xu, Y. Gao, F. Jin, X. Zhu, Q. Guo, H. Li, C. Song, A. V. Gorshkov, T. Iadecola, F. Liu, Z.-X. Gong, Z. Wang, D.-L. Deng, and H. Wang, Digital quantum simulation of Floquet symmetry-protected topological phases, Nature 607, 468 (2022). lattice5W. Ren, W. Li, S. Xu, K. Wang, W. Jiang, F. Jin, X. Zhu, J. Chen, Z. Song, P. Zhang, H. Dong, X. Zhang, J. Deng, Y. Gao, C. Zhang, Y. Wu, B. 
Zhang, Q. Guo, H. Li, Z. Wang, J. Biamonte, C. Song, D.-L. Deng, and H. Wang, Experimental quantum adversarial learning with programmable superconducting qubits, Nat. Comput. Sci. 2, 711 (2022). zigzag1W. Liu, W. Feng, W. Ren, D.-W. Wang, and H. Wang, Synthesizing three-body interaction of spin chirality with superconducting qubits, Appl. Phys. Lett. 116, 114001 (2020). zigzag2W. Ren, W. Liu, C. Song, H. Li, Q. Guo, Z. Wang, D. Zheng, G. S. Agarwal, M. O. Scully, S.-Y. Zhu, et al., Simultaneous Excitation of Two Noninteracting Atoms with Time-Frequency Correlated Photon Pairs in a Superconducting Circuit, Phys. Rev. Lett. 125, 133601 (2020). zigzag3Q. Guo, C. Cheng, H. Li, S. Xu, P. Zhang, Z. Wang, C. Song, W. Liu, W. Ren, H. Dong, R. Mondaini, and H. Wang, Stark Many-Body Localization on a Superconducting Quantum Processor, Phys. Rev. Lett. 127, 240502 (2021). zhu2019Y. Ye, Z. Y. Ge, Y. Wu, S. Wang, M. Gong, Y. R. Zhang, Q. Zhu, R. Yang, S. Li, F. Liang, et al., Propagation and Localization of Collective Excitations on a 24-Qubit Superconducting Processor, Phys. Rev. Lett. 123, 050502 (2019). Kollar2019A. J. Kollár, M. Fitzpatrick, and A. A. Houck, Hyperbolic lattices in circuit quantum electrodynamics, Nature (London) 571, 45 (2019). Harper1955P. G. Harper, Single Band Motion of Conduction Electrons in a Uniform Magnetic Field, Proc. Phys. Soc. 68, 874 (1955). Das2019K. K. Das and J. Christ, Realizing the Harper model with ultracold atoms in a ring lattice, Phys. Rev. A 99, 013604 (2019). Martinis2017scienceP. Roushan, C. Neill, J. Tangpanitanon, V. M. Bastidas, A. Megrant, R. Barends, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. Fowler, B. Foxen, M. Giustina, E. Jeffrey, J. Kelly, E. Lucero, J.Mutus,M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. White, H. Neven, D. G. Angelakis, and J. Martinis, Spectroscopic signatures of localization with interacting photons in superconducting qubits, Science 358, 1175 (2017). Blais2012F. Beaudoin, M. P. da Silva, Z. Dutton, and A. Blais, First-order sidebands in circuit QED using qubit frequency modulation, Phys. Rev. A 86, 022305 (2012). Blais2013J. D. Strand, M. Ware, F. Beaudoin, T. A. Ohki, B. R. Johnson, A. Blais, and B. L. T. Plourde, First-order sideband transitions with flux-driven asymmetric transmon qubits, Phys. Rev. B 87, 220505(R) (2013). sunluyan2018statetransferX. Li, Y. Ma, J. Han, T. Chen, Y. Xu, W. Cai, H. Wang, Y. P. Song, Z.-Y. Xue, Z.-Q. Yin, and L. Sun, Perfect Quantum State Transfer in a Superconducting Qubit Chain with Parametrically Tunable Couplings, Phys. Rev. Appl. 10, 054009 (2018). Spectroscopy2014C. Senko, J. Smith, P. Richerme, A. Lee, W. C. Campbell, and C. Monroe, Coherent imaging spectroscopy of a quantum many-body spin system, Science 345, 430 (2014). Spectroscopy2015P. Jurcevic, P. Hauke, C. Maier, C. Hempel, B. P. Lanyon, R. Blatt, and C. F. Roos, Spectroscopy of Interacting Quasiparticles in Trapped Ions, Phys. Rev. Lett. 115, 100501 (2015). qutipJ. R. Johansson, P. D. Nation, and F. Nori, QuTiP: An open-source Python framework for the dynamics of open quantum systems, Comput. Phys. Commun. 183, 1760 (2012). qutip2J. R. Johansson, P. D. Nation, and F. Nori, QuTiP 2: A Python framework for the dynamics of open quantum systems, Comput. Phys. Commun. 184, 1234 (2013). Feng2019D.-W. Wang, C. Song, W. Feng, H. Cai, D. Xu, H. Deng, H. Li, D. Zheng, X. Zhu, H. Wang, S.-Y. Zhu, and M. O. 
Scully, Synthesis of antisymmetric spin exchange interaction and chiral spin clusters in superconducting circuits, Nat. Phys. 15, 382 (2019).
http://arxiv.org/abs/2306.02527v1
20230605012349
Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking
[ "Frederik Kunstner", "Victor S. Portella", "Mark Schmidt", "Nick Harvey" ]
math.OC
[ "math.OC", "cs.LG" ]
Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking
Frederik Kunstner, Victor S. Portella, Mark Schmidt, Nick Harvey
==============================================================================================
The backtracking line-search is an effective technique to automatically tune the step-size in smooth optimization. It guarantees similar performance to using the theoretically optimal step-size. Many approaches have been developed to instead tune per-coordinate step-sizes, also known as diagonal preconditioners, but none of the existing methods are provably competitive with the optimal per-coordinate step-sizes. We propose multidimensional backtracking, an extension of the backtracking line-search to find good diagonal preconditioners for smooth convex problems. Our key insight is that the gradient with respect to the step-sizes, also known as hypergradients, yields separating hyperplanes that let us search for good preconditioners using cutting-plane methods. As black-box cutting-plane approaches like the ellipsoid method are computationally prohibitive, we develop an efficient algorithm tailored to our setting. Multidimensional backtracking is provably competitive with the best diagonal preconditioner and requires no manual tuning.

When training machine learning models, tuning the hyperparameters of the optimizer is often a major pain point. For example, finding a reasonable step-size hyperparameter for gradient descent typically involves trial-and-error or a costly grid search. However, only so much improvement can be achieved by tuning the step-size. For ill-conditioned problems, per-coordinate step-sizes, or diagonal preconditioning, can drastically improve convergence. This has motivated a wide variety of approaches specific to machine learning problems to adaptively find per-coordinate step-sizes. However, the community lacks clear definitions of what adaptive step-sizes are. The most well-known definition of adaptivity comes from online learning, where methods such as AdaGrad adapt to problem-specific constants without user input while maintaining strong guarantees, even in the adversarial setting. However, this resilience to adversaries is a double-edged sword. To satisfy this definition of adaptivity, AdaGrad monotonically decreases its step-sizes. As a result, it performs poorly on non-adversarial problems, and many follow-up methods have focused on working around this decreasing property. Methods commonly used in deep learning, such as RMSProp and Adam, are often motivated by analogy to AdaGrad, but without decreasing step-sizes. This change is crucial for their practical performance but nullifies their adaptivity guarantees, indicating that the online-learning definition of adaptivity might not capture what we want it to. Alternative approaches to tune per-coordinate step-sizes during the course of optimization, such as adaptive gain and hypergradient methods, do not have a formal definition of what they are aiming to achieve, and are instead motivated from intuition alone. While showing promising practical performance in some settings, hypergradient methods are often unstable and can require as much babysitting as the original optimizer they are tuning. The lack of a well-defined objective makes comparison of those methods purely empirical, and the field lacks direction on how to improve on the state-of-the-art.
However, there is an alternative definition of adaptivity in smooth optimization, where the standard approach to tuning the step-size is to do a backtracking line-search. Applied to gradient descent, this line-search guarantees that the step-size is within a constant factor of the best theoretical step-size. But this method is often overlooked as it only capture adaptivity of a single step-size. Contribution. We propose multidimensional backtracking, a method analogous to a backtracking line-search that automatically finds per-coordinate step-sizes while running gradient descent. The main difficulty in extending the line-search to higher dimensions is that the signal used to search for a good scalar step-size, that the step-size is “too big”, is insufficient to efficiently search over per-coordinate step-sizes. Our key insight is that the gradient with respect to the step-sizes can be used in conjunction with a cutting-plane method to make this search feasible in high dimensions, and we develop a cutting-plane method tailored to the problem with minimal overhead. We show that our method has a similar guarantee as line-search, in that its convergence rate is within a O(1/√(d)) factor of preconditioned gradient descent with the optimal, but unknown, diagonal preconditioner. § INTRODUCTION When training machine learning models, tuning the hyperparameters of the optimizer is often a major challenge. For example, finding a reasonable step-size hyperparameter for gradient descent typically involves trial-and-error or a costly grid search. In smooth optimization, a common approach to set the step-size without user input is a backtracking line-search: start with a large step-size, and decrease it when it is too big to make sufficient progress. For ill-conditioned problems, however, there are limits to the improvement achievable by tuning the step-size. Per-coordinate step-sizes —also known as diagonal preconditioners— can drastically improve performance. Many approaches have been developed to automatically tune per-coordinate step-sizes. Those are often described as “adaptive” methods, but the meaning of this term varies widely, from describing heuristics that set per-coordinate step-sizes, to ensuring performance guarantees as if a particular property of the problem were known in advance. Yet, even on the simplest case of a smooth and strongly convex deterministic problem where a good fixed diagonal preconditioner exists (i.e., one that reduces the condition number), none of the existing adaptive methods are guaranteed to find per-coordinate step-sizes that improve the convergence rate. We discuss approaches to adaptive methods in the next section. Contribution. We propose multidimensional backtracking, an extension of the standard backtracking line-search to higher dimension, to automatically find good per-coordinate step-sizes. Our method recovers the convergence rate of gradient descent with the optimal preconditioner for the problem, up to a √(2d) factor where d is the number of coordinates. This is a direct generalization of the line-search guarantee, with a penalty depending on dimension due to the extra degrees of freedom, as expected. The main difficulty in extending the line-search to higher dimensions is that when searching for a scalar step-size, all we can check is whether the step-size is “too big”. This is insufficient to efficiently search over per-coordinate step-sizes. 
Our key insight is that the gradient with respect to the step-sizes can be used in conjunction with a cutting-plane method to make this search feasible in high dimensions. §.§ Adaptive step-sizes and preconditioning methods Adaptive and parameter-free methods in online learning are an example where adaptive methods have a well-defined meaning. AdaGrad <cit.> and Coin Betting <cit.> can adapt to problem-specific constants without user input and have strong guarantees, even in the adversarial setting. However, this resilience to adversaries is a double-edged sword; to satisfy this definition of adaptivity, AdaGrad uses monotonically decreasing step-sizes. While AdaGrad still converges at the desired asymptotic rate on smooth, Lipschitz functions <cit.>, its performance can be worse than plain gradient descent. This motivated investigations of workarounds to avoid the monotonically decreasing updates, including augmenting the update with an increasing step-size schedule <cit.>, a line-search <cit.>, or modifying the update to the preconditioner <cit.>. Methods commonly used in deep learning, such as RMSProp and Adam <cit.>, are often motivated as adaptive by analogy to AdaGrad, but without decreasing step-sizes <cit.>. This change is crucial for their practical performance, but nullifies their online-learning adaptivity guarantees. Adaptive gain and hypergradient heuristics. Many heuristics that tune the hyperparameters of the optimization procedure use the gradient with respect to the hyperparameters, or hypergradients <cit.>. Methods have been proposed to tune the step-size <cit.>, a preconditioner <cit.>, any hyperparameter <cit.>, or to maintain a model of the objective <cit.>. “Stacking” such optimizers recursively has been shown to reduce the dependency on user-specified hyperparameters in practice <cit.>. This idea pre-dates the hypergradient nomenclature; <cit.> presents a method to update the step-size based on the sign of successive gradients, and <cit.> presents a control perspective for per-coordinate step-sizes, which can be cast as a hypergradient update to a diagonal preconditioner.[ The hypergradient with respect to a diagonal preconditioner =() is, by the chain rule, the element-wise product (⊙) of subsequent gradients, - ∇_ f( - () ∇ f()) = ∇ f() ⊙∇ f( - () ∇ f()). ] This approach has led to adaptive gain methods such as Delta-Delta and variants <cit.>, and further developed using the sign of the hypergradient <cit.>, full-matrix updates <cit.>, a larger history <cit.>, updates in log-space <cit.>, heuristics to adjust the outer step-size <cit.>, or multiplicative weight updates <cit.>. While showing promising practical performance in some settings, existing methods are often motivated from intuition rather than a formal definition of adaptivity, giving no guarantee that the tuned method will converge faster, if at all. Indeed, hypergradient methods are often unstable, and may require as much manual tuning as the original optimizer they are intended to tune. Second-order methods. A classical approach to preconditioning is to use second-order information, as in Newton's method or its regularized variants <cit.>. To avoid the load of computing and inverting the Hessian, quasi-Newton methods <cit.> such as L-BFGS <cit.> fit an approximate Hessian using the secant equation. Variants using diagonal approximations have also been proposed, framed as Quasi-Cauchy, diagonal BFGS, or diagonal Barzilai-Borwein methods  <cit.>, while other methods use the diagonal of the Hessian <cit.>. 
Some second-order and quasi-Newton methods converge super linearly (although not the diagonal or limited memory variants used in practice), but those guarantees only hold locally when close to the minimum. To work when far from a solution, those methods require “globalization” modifications, such as regularization or a line-search. Unfortunately, analyses of second-order methods do not capture the global benefit of preconditioning and instead lead to worse rates than gradient descent, as in the results of <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, or <cit.>. Line-searches. Adaptivity in smooth optimization is most closely related to line-searches. The standard guarantee for gradient descent on a L-smooth function requires a step-size of 1/L, but L is typically unknown. The backtracking line-search based on the Armijo condition <cit.> approximately recovers this convergence guarantee by starting with a large step-size, and backtracking; halving the step-size whenever it does not yield sufficient improvement. However, line-searches are often overlooked in the discussion of adaptive methods, as they do not provide a way to set more than a scalar step-size. While line-searches can be shown to work in the stochastic overparameterized setting and have been applied to train neural networks <cit.>, improvements beyond backtracking have been limited. Additional conditions <cit.>, non-monotone relaxations <cit.>, or solving the line-search to higher precision <cit.> can improve the performance in practice, but even an exact line-search cannot improve the convergence rate beyond what is achievable with a fixed step-size <cit.>. §.§ Summary of main result: adaptivity to the optimal preconditioner Our approach is inspired by the work discussed above, but addresses the following key limitation: none of the existing methods attain better global convergence rates than a backtracking line-search. Moreover, this holds even on smooth convex problems for which a good preconditioner exists. We generalize the backtracking line-search to handle per-coordinate step-sizes and find a good preconditioner. As in quasi-Newton methods, we build a preconditioner based on first-order information. However, instead of trying to approximate the Hessian using past gradients, our method searches for a preconditioner that minimizes the objective function at the next step. Our convergence result depends on the best rate achievable by an optimal diagonal preconditioner, similarly to how methods in online learning are competitive against the best preconditioner in hindsight. However, our notion of optimality is tailored to smooth strongly-convex problems and does not require decreasing step-sizes as in AdaGrad. Our update to the preconditioner can be interpreted as a hypergradient method, but instead of a heuristic update, we develop a cutting-plane method that uses hypergradients to guarantee a good diagonal preconditioner. Our main theoretical contribution is summarized below. On a smooth, strongly-convex function f in d dimensions, steps accepted by multidimensional backtracking guarantee the following progress f(_t+1) - f(_*) ≤1-1/√(2d)1/κ_* (f(_t) - f(_*)), where κ_* is the condition number achieved by the optimal preconditioner defined in <Ref>. The number of backtracking steps is at most linear in d and logarithmic in problem-specific constants. 
Multidimensional backtracking finds per-coordinate step-sizes that lead to a provable improvement over gradient descent on badly conditioned problems that can be improved by diagonal preconditioning, i.e., if the condition number of f is at least √(2d)·κ_*. Moreover, this guarantee is worst-case, and multidimensional backtracking can outperform the globally optimal preconditioner by finding a better local preconditioner, as illustrated on an ill-conditioned linear regression problem in <Ref>. To find a competitive diagonal preconditioner, we view backtracking line-search as a cutting-plane method and generalize it to higher dimensions in <Ref>. In <Ref> we show how to use hypergradients to find separating hyperplanes in the space of preconditioners, and in <Ref> we develop an efficient cutting-plane methods tailored to the problem. In <Ref>, we illustrate the method through preliminary experiments and show it has consistent performance across problems. Notation. We use standard font weight d, n, α for scalars, bold , for vectors, and capital bold , for matrices. We use [i] for the i-th entry of , ⊙ for element-wise multiplication, and ^2 for ⊙. ⊙ for element-wise multiplication, and ^2 for ⊙. We use = () to denote the diagonal matrix with diagonal , and = () to denote the vector of diagonal entries of . We say is larger than , ≽, if - is positive semidefinite. If = (), = (), the ordering ≽ is equivalent to [i] ≥[i] for all i, which we write ≥. We use for the identity matrix and for the all-ones vector. § OPTIMAL PRECONDITIONING AND SUFFICIENT PROGRESS Consider a twice-differentiable function f ^d→ that is L-smooth and μ-strongly convex,[ While we use strong-convexity and twice-differentiability of f to define the optimal preconditioner, those assumptions can be relaxed to only rely on the PL inequality <cit.> (see <ref>).] i.e., μ1/2- ^2 ≤f() - f() - ∇f(), - ≤L 1/2- ^2, for all , , or μ≼∇^2 f() ≼ L for all . We measure the quality of a preconditioner by how tightly it approximates (∇^2 f())^-1. We define an optimal diagonal preconditioner for f as ∈_≽0, diagonal κ such that 1/κ^-1 ≼∇^2 f()≼^-1 for all , and denote by κ_* the optimal κ above. <ref> is equivalent to minimizing κ(^1/2∇^2f()^1/2), a known measure of the convergence rate of preconditioned methods <cit.>, and reduces to the definition of optimal preconditioning for linear systems <cit.> when f is quadratic. Alternatively, the optimal preconditioner can be viewed as the matrix such that f is 1-smooth and maximally strongly-convex in the norm *^2_^-1 = ,^-1, 1/κ_* 1/2- ^2_^-1 ≤f() - f() - ∇f(), - ≤1/2- ^2_^-1, for all , . Similar definitions of smoothness and strong-convexity relative to a matrix are common in coordinate descent methods <cit.>, where the matrices are assumed to be known a priori. If we knew , preconditioned gradient descent using would converge at the rate fx - ∇f() - f(_*) ≤1 - 1/κ_* (f() - f(_*)), where _* minimizes f. We do not know and will be searching for a good approximation. For the standard backtracking line-search on L-smooth functions, the goal is to find a step-size that works as well as 1/L without knowledge of L. To do so, we can start with a large step-size α≫1/L and check the Armijo condition: the step-size α makes progress as if f were 1/α-smooth, that is, f(- α∇f()) ≤ f() - α1/2*∇f()^2. If the condition is satisfied, we take the step - α∇ f(). By the descent lemma, <cit.>, the condition is satisfied if α≤1/L. So if the condition fails, we know α is too large and can decrease α. 
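As a minimal sketch of the interval view of the backtracking line-search described above (f and grad_f are user-supplied callables; the constants mirror the text):

```python
import numpy as np

def backtracking_step(f, grad_f, x, alpha_max=1.0, gamma=0.5):
    """One gradient step with the Armijo backtracking rule.

    The candidate alpha = gamma * alpha_max is halved (the interval [0, alpha_max]
    is shrunk) until  f(x - alpha*g) <= f(x) - (alpha/2) * ||g||^2,
    which is guaranteed to hold once alpha <= 1/L for an L-smooth f.
    Returns the new iterate and the updated interval upper bound.
    """
    g = grad_f(x)
    alpha = gamma * alpha_max
    while f(x - alpha * g) > f(x) - 0.5 * alpha * np.dot(g, g):
        alpha_max = alpha   # alpha failed, so every step-size above alpha is ruled out
        alpha = gamma * alpha_max
    return x - alpha * g, alpha_max
```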
For diagonal preconditioners, the Armijo condition checks whether the preconditioner makes sufficient progress in the norm induced by ^-1, as if f were 1-smooth in <ref>, that is, f(- ∇f()) ≤f() - 1/2*∇f()^2_. As with a scalar step-size, sufficient progress holds for any matrix that satisfies ∇^2 f() ≼^-1. § MULTIDIMENSIONAL BACKTRACKING The typical presentation of the backtracking line-search maintains a step-size and decreases it when the Armijo condition fails <cit.>. We instead take the following non-standard view, which generalizes more naturally to high dimension; as maintaining a set containing the optimal step-size, and using bisection to narrow down the size of the set. Starting with an interval = [0, α_max] containing 1/L, we pick a candidate step-size α by “backtracking” by γ < 1 from the largest step-size in , taking α = γα_max to balance two properties; * Large progress: If the candidate step-size satisfies the Armijo condition and the step is accepted, the value of f decreases proportionally to α as in (<ref>). To maximize the progress, γ should be large. * Volume shrinkage: If the candidate step-size fails the Armijo condition, we learn that α > 1/L and can cut the interval to ' = [0, γα_max]. To ensure the interval shrinks fast, γ should be small. Taking γ = 1/2 balances both properties; α is at least 1/2 as large as any step-size in , and we can halve the interval if the Armijo condition fails. We do not use α_max as a candidate since, although the largest in , it would give no information to update the interval in case it failed the Armijo condition. For multidimensional backtracking, we can check whether a candidate preconditioner yields sufficient progress with <ref> instead of the Armijo condition, and replace the intervals by sets of diagonal preconditioners. The high-level pseudocode is given in <ref>, where each iteration either leads to an improvement in function value or shrinks the sets of potential step-sizes/preconditioners. To complete the algorithm, we need to define the steps marked as (†) to select preconditioners that lead to large progress when the step is accepted, while significantly reducing the search space when the preconditioner does not yield . For computational efficiency, we want methods that take O(d) time and memory like plain gradient descent. §.§ Guaranteed progress competitive with the optimal preconditioner We start by formalizing the progress guarantee. If satisfies the Armijo condition (<ref>) at _t, the function value decreases by at least *∇ f(_t)_^2. If we can guarantee that *∇ f(_t)_^2 ≥γ*∇ f(_t)_^2 for some γ > 0, we can recover the convergence rate of gradient descent preconditioned with up to a factor of γ. However, we do not know , but know a set _t that contains preconditioners we have not yet ruled out, including . To guarantee that is competitive with , we can enforce that is competitive with all the preconditioners in _t, as captured by the following definition. A matrix ∈_t is γ-competitive in _t, for a gradient ∇ f(_t), if *∇ f(_t)_^2 ≥γ*∇ f(_t)_^2 for any ∈_t. If is γ-competitive, then it is competitive with as max_∈_t*∇ f(_t)_^2 ≥*∇ f(_t)_^2. However, this is a strong requirement. To illustrate what competitive ratios are attainable, we show in <ref> that even the optimal preconditioner might only be 1/d-competitive, as other preconditioners can lead to more local progress depending on ∇ f(_t), whereas is a fixed global optimal preconditioner. 
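A small numerical illustration of why even the optimal preconditioner may only be 1/d-competitive, using the two-dimensional example treated in the appendix (the preconditioners diag(1, 0), diag(0, 1), and the optimal diag(1/2, 1/2), all valid for the quadratic with Hessian [[1, -1], [-1, 1]]); the gradient direction below is a hypothetical choice used purely to probe the definition of competitiveness:

```python
import numpy as np

# Diagonals of the preconditioners from the appendix example (all valid there).
P_1    = np.array([1.0, 0.0])
P_2    = np.array([0.0, 1.0])
P_star = np.array([0.5, 0.5])

g = np.array([1.0, 0.0])  # a gradient skewed towards the first coordinate (hypothetical)

def progress(p, g):
    """Guaranteed decrease (1/2) * ||g||_P^2 for P = diag(p)."""
    return 0.5 * np.dot(p, g ** 2)

print(progress(P_1, g), progress(P_2, g), progress(P_star, g))
# 0.5, 0.0, 0.25: for this gradient the optimal preconditioner yields only half the
# progress of P_1, i.e. it is 1/d-competitive with d = 2, as discussed above.
```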
This also suggests that selecting a preconditioner that guarantees more local progress may lead to better performance, which we take advantage of to ensure a γ = 1/√(2d) competitive ratio. To see how to ensure a competitive ratio, consider the case where contains diagonal preconditioners whose diagonals come from the box () ∈^d≤. To select a candidate preconditioner that is γ-competitive in , we can backtrack from the largest vector in () by some constant γ < 1, and take = γ(). While a large γ leads to more progress when the step is accepted, we will see that we need a small γ to ensure the volume shrinks when the step is rejected. We can obtain the convergence rate of <Ref> depending on γ and the optimal preconditioned condition number κ_* if we ensure ∈_t and that is γ-competitive for all t. Let , κ_* be an optimal preconditioner and condition number for f (<ref>). If the set _t from the algorithm in <Ref> contains , and ∈_t is γ-competitive (<Ref>), then f(_t+1) - f(_*) ≤1 - γ/κ_* f(_t) - f(_*) whenever the candidate step leads to sufficient progress and is accepted. The proof relies on three inequalities. (1) The iterate _t+1 yields sufficient progress (Eq. <ref>), (2) any accepted preconditioner is γ-competitive in _t and thus with , and (3) f is 1/κ_*-strongly convex in ·_^-1, which implies κ_* 1/2*∇ f(_t)^2_≥ f(_t) - f(_*). Combining those yields f(_t+1) (1)≤ f(_t) - 1/2∇f(_t)_^2 (2)≤ f(_t) - γ1/2∇f(_t)_^2 (3)≤ f(_t) - γ/κ_* f(_t) - f(_*). Subtracting f(_*) on both sides yields the contraction guarantee. § SEPARATING HYPERPLANES IN HIGHER DIMENSIONS In one dimension, if the step-size α does not satisfy the sufficient progress condition (<ref>), we know α > 1/L and can rule out any α' ≥α. We are looking for a generalization to higher dimensions: if the queried preconditioner fails the condition, we should be able to discard all larger preconditioners. The notion of valid preconditioners formalizes this idea. A preconditioner is valid if ^1/2∇^2 f()^1/2≼ for all , which guarantees that satisfies the condition, and invalid otherwise. Validity is a global property: a preconditioner might lead to sufficient progress locally but still be invalid. Using the partial order, if is invalid then any preconditioner ' ≽ is also invalid. However, this property alone only discards an exceedingly small portion of the feasible region in high dimensions. Consider the example illustrated in <Ref>: if the diagonals are in a box (), the fraction of volume discarded in this way if (1/2)() is invalid is only 1/2^d. To efficiently search for valid preconditioners, we show that if f is convex, then the gradient of the condition gives a separating hyperplane for valid preconditioners. That is, it gives a vector ∈^d such that if ∈^d satisfies , > 1, then () is invalid, as illustrated in <Ref>. We use the following notation to denote normalized half-spaces: _>() ∈^d , > 1 and _≤() {∈^d : , ≤1}. Suppose =() ≻ 0 does not lead to at , and let h() be the gap in the sufficient progress condition, h() f(- ∇f()) - f() + 12∇f()_^2 > 0. Then () for any in the following half-space satisfies h() > 0 and is also , ∈^d ∇h(), > ∇h(), - h(), This half-space is equal to _>() with given by = ∇ h()/∇ h(), - h(), or (1/2- ^+) ⊙/ f() - ^+, - f(^+) , with { ^+ - ∇f(), (, ^+) (∇f(), ∇f(^+)). . If f is convex, then h also is. Convexity guarantees that h() ≥ h() + ∇ h(), - for any . A sufficient condition for h() > 0, which means is invalid, is whether h() + ∇ h(), - > 0 holds. 
Reorganizing yields <ref>, and <ref> expresses the half-space in normalized form, _>(), expanding h in terms of f, its gradients, and . The half-space in <ref> is however insufficient to find good enough cutting-planes, as it uses convexity to invalidate preconditioners but ignores the ordering that if is , any ' ≽ is also . If such preconditioners are not already ruled out by convexity, we can find a stronger half-space by removing them, as illustrated in <Ref>. We defer proofs to <ref>. If _>() is a half-space given by <ref>, then _>() where max{, 0} element-wise is a stronger half-space in the sense that _>() ⊆_>(), and _>() contains only invalid preconditioners. § CUTTING-PLANE METHODS The multidimensional backtracking method is in fact a cutting-plane method that uses separating hyperplanes (from <Ref>) to search for valid preconditioners. The canonical example is the ellipsoid method <cit.>, but its computational cost is Ω(d^2) in ℝ^d. We now describe cutting-plane methods with three desirable properties: the preconditioners have good competitive ratios, the feasible set shrinks significantly when backtracking, and the computational cost is O(d). There are many details, but the overall idea is similar to the ellipsoid method. A simple warm-up: boxes. Consider the case when _0 consists of diagonal matrices with diagonals in the box (_0) = ∈^d≤_0. We pick a candidate preconditioner by backtracking from the largest point in (_0) by some constant γ < 1, taking γ(_0). If satisfies the Armijo condition (<ref>), we take a gradient step. If it does not, we compute the vector _0 as in <ref>, and obtain a half-space _>(_0) that contains only invalid preconditioners. We then know we only need to search inside _0 ∩_≤(_0). However, maintaining the set _0 ∩_≤(_0) ∩⋯∩_≤(_t) would be too complex to fit in O(d) time or memory. To reduce complexity, we define _t+1 as the box (_t+1) of minimum volume containing (_t) ∩_≤(_t), as illustrated in <ref>. Due to this restriction, we might not be able to find a smaller set; the original box (_t) may already be the minimum volume box containing (_t) ∩_≤(_t) if _t does not cut deep enough, as illustrated in <ref>. However, with enough backtracking (γ < 1/d), we can show that the new box is smaller. This yields the following subroutines to fill in the gaps of <ref> (detailed in <ref>) (_t, γ, _t) γ(_t), (_t, ) ()_t ∈(_t+1), where _t = (_t) and _t+1min_t, 1/_t element-wise, which give the following guarantees. Consider the multidimensional backtracking from <Ref> initialized with a set _0 = ()∈(_0) containing , with the subroutines in <ref> with γ = 1/2d. Then: (a) ∈_t, (b) the candidate preconditioner is 1/2d-competitive in _t for any t, and (c) ((_t+1)) ≤1/d+1 ((_t)) when fails <ref>. In particular, is not called more than d log_d+1(L _0_∞) times. To guarantee that the box shrinks, we have to guarantee that the half-space _≤(_t) cuts deep enough. We know that the half-space has to exclude the query point , i.e. _t, _t≥ 1, by <ref> and that _t ≥ 0 by <ref>. Querying sufficiently close to the origin, by taking γ = 1/2d, is then enough to guarantee the decrease. To bound the total number of cuts, we note that the sets (_t) have a minimum volume _min, as they have to contain the valid preconditioners. The number of cuts is at most log_c((b_0))/_min for c = d+1. We then bound ((b_0)) ≤*_0_∞^d and _min≥1/L^d as (1/L) is a valid preconditioner. 
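Putting the pieces of this section together, the sketch below computes the normalized separating hyperplane derived above (with the nonnegative clipping of the lemma) and the two box subroutines: backtracking from the largest corner and shrinking to the minimum-volume box. This is a sketch under the stated conventions, with f and grad_f user-supplied callables and all vectors element-wise NumPy arrays; names are illustrative, not the authors' implementation.

```python
import numpy as np

def separating_hyperplane(f, grad_f, x, p):
    """Normalized cut u when the candidate diag(p) fails the progress test at x.

    h(p) = f(x - p*g) - f(x) + (1/2)<p, g^2> > 0 on failure, and by the chain rule
    grad h(p) = (g/2 - g_plus) * g  with  g_plus = grad f(x - p*g).
    Every q with <u, q> > 1 is invalid; clipping u to be nonnegative strengthens the cut.
    """
    g = grad_f(x)
    g_plus = grad_f(x - p * g)
    h = f(x - p * g) - f(x) + 0.5 * np.dot(p, g ** 2)
    grad_h = (0.5 * g - g_plus) * g
    u = grad_h / (np.dot(grad_h, p) - h)
    return np.maximum(u, 0.0)

def box_candidate(b, g=None):
    """Backtrack by gamma = 1/(2d) from the largest corner of the box {0 <= p <= b}.
    The gradient g is unused: the box candidate does not depend on it."""
    return b / (2.0 * b.size)

def box_cut(b, u):
    """Minimum-volume box containing {0 <= p <= b} intersected with {<u, p> <= 1}:
    b_new = min(b, 1/u) element-wise, leaving coordinates with u_i = 0 untouched."""
    bound = np.full_like(b, np.inf)
    positive = u > 0
    bound[positive] = 1.0 / u[positive]
    return np.minimum(b, bound)
```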
§.§ Multidimensional Backtracking with Centered Axis-aligned Ellipsoids We now improve the competitive ratio from O(1/d) to O(1/√(d)) by switching from boxes to ellipsoids. Whereas general ellipsoids would require Ω(d^2) complexity (as they involve a d × d matrix), we consider centered, axis-aligned ellipsoids, defined by a diagonal matrix = (), of the form () {∈^d : *_≤ 1}, where *_^2 = ,. As preconditioners are non-negative, we consider only the positive orthant of the ellipsoid. For simplicity, we refer to those sets as ellipsoids. Candidate preconditioner. In the box example, we selected the candidate preconditioner by backtracking from the largest preconditioner in the box. With an ellipsoid, there is no largest preconditioner. We need to choose where to backtrack from. To ensure the candidate preconditioner is competitive (<Ref>), we backtrack from the preconditioner that maximizes the progress *∇ f()_^2, _∈() ∇f()_^2 = ^-1 ∇f()^2/∇f()^2_^-1, where ∇f()^2 ∇f() ⊙∇f(). This lets us pick the preconditioner that makes the most progress for the current gradient, and will let us improve the competitive ratio by allowing a backtracking coefficient of 1/√(d) instead of 1/d. Cutting. To complete the algorithm, we need to find a new set (_t+1) with smaller volume which contains the intersection of the previous set (_t) and the half-space _≤(_t). Unlike the box approach, the minimum volume ellipsoid has no closed form solution. However, if we backtrack sufficiently, by a factor of γ < 1/√(d), we can find an ellipsoid guaranteed to decrease the volume. Consider the ellipsoid () defined by = () for ∈^d. Let ∈() be a point sufficiently deep inside the ellipsoid, such that _≤1/√(2d), and _>() be a half-space obtained from <ref> at . The intersection () ∩()_≤ is contained in the new ellipsoid (^+(, )), where ^+(, ) = λ+ (1 - λ) ^2, λ= ℓ/d d-1/ℓ-1, ℓ= _^-1^2, which has a smaller volume, ((^+(, )) ≤ c (()), where c = √(e)/√(2)≈ 0.91. The new ellipsoid in (<ref>) is a convex combination between () and the minimum volume ellipsoid containing the set ∈^d, ||≤ 1 where || is the element-wise absolute value of . The choice of λ in (<ref>) is not optimal, but suffices to guarantee progress as long as _ is small. A similar approach was used by <cit.> to approximate submodular functions, although they consider the polar problem of finding a maximum-volume enclosed ellipsoid. The full proof and discussion on the connections to the polar problem are deferred to <Ref>. To improve the cuts, we can refine the estimate of λ in <ref> by minimizing the volume numerically. We include this modification, detailed in <ref>, in our experiments in <ref>. Overall guarantees. We can now define the two subroutines for the ellipsoid method, and obtain the main result that we stated informally in <Ref>, by combining the guarantees of the ellipsoid approach with the convergence result of <ref>. Consider the multidimensional backtracking from <Ref> initialized with the set _0 = ()∈(_0) containing , given by some scaling α_0 > 0 of the uniform vector, _0 = α_0. For _t, let _t = (_t). Define the subroutines = (_t, γ, _t) γ_t^-1 ∇f(_t)^2∇f(_t)^2__t^-1, (_t, ) ()∈(^+(_t, _t)), where _t is the vector given by <Ref> when fails the Armijo condition at _t, and ^+ is computed as in (<ref>). If γ = 1/√(2d), then: (a) ∈_t for all t, (b) the candidate preconditioners are 1/√(2d)-competitive in _t, and (c) is called no more than 12 d log(L/α_0) times. 
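The corresponding subroutines for the centered axis-aligned ellipsoid E(A) with A = diag(a) can be sketched as follows; the backtracking factor 1/sqrt(2d) and the closed-form lambda follow the statements above, and the cut assumes it is only called in the regime covered by the lemma (so that l = ||u||_{A^{-1}}^2 > 1). As before, this is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def ellipsoid_candidate(a, g):
    """Backtrack by gamma = 1/sqrt(2d) from the point of E(diag(a)) maximizing
    ||grad||_P^2, namely p_max = A^{-1} g^2 / ||g^2||_{A^{-1}}."""
    d = a.size
    g2 = g ** 2
    p_max = (g2 / a) / np.sqrt(np.sum(g2 ** 2 / a))
    return p_max / np.sqrt(2.0 * d)

def ellipsoid_cut(a, u):
    """Smaller ellipsoid E(A+) containing E(A) intersected with {<u, p> <= 1}:
    A+ = lam * A + (1 - lam) * diag(u)^2 with the closed-form lam given above."""
    u = np.maximum(u, 0.0)
    d = a.size
    ell = np.sum(u ** 2 / a)        # ell = ||u||_{A^{-1}}^2, > 1 in the regime of the lemma
    lam = (ell / d) * (d - 1.0) / (ell - 1.0)
    return lam * a + (1.0 - lam) * u ** 2
```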
§ EXPERIMENTS To illustrate that multidimensional backtracking finds good preconditioners and improves over gradient descent on ill-conditioned problems even when accounting for the cost of backtracking, we run experiments on small but very ill-conditioned and large (d≈ 10^6) problems. As examples of adaptive gain and hypergradient methods, we include RPROP <cit.> and GD with a hypergradient-tuned step-size (GD-HD, with the multiplicative update). As examples of approximate second-order methods, we take diagonal BB <cit.> and preconditioned GD using the diagonal of the Hessian. We use default parameters, except for the hypergradient method GD-HD, where we use 10^-10 as the initial step-size instead of 10^-3 to avoid immediate divergence. We include AdaGrad (diagonal), but augment it with a line-search as suggested by <cit.>, to make it competitive in the deterministic setting. Line-searches and forward steps. For all methods that use a line-search, we include a forward step, a common heuristic in line-search procedures to allow for larger step-sizes when possible, although it can increase the number of backtracking steps. When a step-size or preconditioner is accepted, we increase the size of the set, allowing for larger (scalar or per-coordinate) step-sizes by a factor of 1.1. We measure performance per function and gradient evaluations to capture the cost of backtracking. On a small but extremely ill-conditioned problems, our method is the only one that gets remotely close to being competitive with preconditioning with the diagonal Hessian—while only using first-order information. The diagonal Hessian is very close to the optimal preconditioner for those problems. On the cpusmall dataset, it reduces the condition number from κ≈ 5· 10^13 to ≈ 300, while κ_* ≈ 150. All other methods struggle to make progress and stall before a reasonable solution is achieved, indicating they are not competitive with the optimal preconditioner. On large regularized logistic regression on News20 (d ≈ 10^6), gradient descent performs relatively better, suggesting the problem is less ill-conditioned to begin with (the regularized data matrix has condition number κ≈ 10^4). Despite the bound of O(d) backtracking steps, our methods finds a reasonable preconditioner within 100 gradient evaluations. Despite the high dimensionality, it improves over gradient descent when measured in number of oracle calls. Using plain gradient updates on the hyperparameters in GD-HD leads to unstable behavior, but diagonal BB and even RPROP, perform remarkably well on some problems —even outperforming preconditioning with the diagonal Hessian, which uses second-order information. However, they fail on other ill-conditioned problems, even when a good diagonal preconditioner exists. This pattern holds across other problems, as shown in <ref>. Multidimensional backtracking demonstrates robust performance across problems, a clear advantage of having worst-case guarantees. § CONCLUSION We designed multidimensional backtracking, an efficient algorithm to automatically find diagonal preconditioners that are competitive with the optimal diagonal preconditioner. Our work provides a definition of adaptive step-sizes that is complementary to the online learning definition. While online learning focuses on the adversarial or highly stochastic setting, we define and show how to find optimal per-coordinate step-sizes in the deterministic smooth convex setting. 
We show it is possible to build provably robust methods to tune a preconditioner using hypergradients. While our specific implementation uses cutting-planes, the general approach may lead to alternative algorithms, that possibly tune other hyperparameters, with similar guarantees. The main limitation of our approach is its reliance on the convex deterministic setting. The results might transfer to the stochastic overparametrized regime using the approach of <cit.>, but the non-convex case seems challenging. It is not clear how to get reliable information from a cutting-plane perspective using hypergradients without convexity. As the first method to provably find competitive preconditioners, there are likely modifications that lead to practical improvements while preserving the theoretical guarantees. Possible ideas to improve practical performances include better ways to perform forward steps, using hypergradient information from accepted steps (which are currently ignored), or considering alternative structures to diagonal preconditioners. We thank Aaron Mishkin for helpful discussions in the early stages of this work, and Curtis Fox and Si Yi (Cathy) Meng for providing comments on an early version of the manuscript. This research was partially supported by the Canada CIFAR AI Chair Program, the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants RGPIN-2022-03669, § REFERENCES [heading=none] height 4pt 0.25in - Supplementary Material 0.29in -height 1pt 0.09in Appendix Code available at https://github.com/fKunstner/multidimensional-backtrackinghttps://github.com/fKunstner/multidimensional-backtracking § FULL PSEUDOCODE OF THE ALGORITHMS We first give with a generic version using the subroutines , and , to be specialized for the backtracking line-search (<ref>), multidimensional backtracking using boxes (<ref>), and ellipsoids (<ref>). The generic pseudocode is written in terms of preconditioners, but also applies to the step-size version, which we can consider as looking for a preconditioner constrained to isotropic diagonal preconditioners, that is, preconditioners in the set {α : α∈}. Although we write the pseudocode maintaining at each iteration an abstract set of preconditioners , the only information the algorithm needs to maintain on each iteration for the implementation in the different cases is * For the line-search: the current maximum step-size α_max defining the interval of valid step-sizes, [0,α_max] such that the set of preconditioners is = {α : α∈ [0,α_max]}; * For multidimensional backtracking with boxes: the vector defining the maximum corner of the box () = {∈^d : ≤} used to define the candidate diagonals preconditioners in the set = {() : ∈()}; * For multidimensional backtracking with ellipsoids: the vector defining the axis-aligned ellipsoid () = {∈^d : , () ≤ 1} used to define the candidate diagonal preconditioners in the set = {() : ∈()}. The pseudocode in <ref> updates (_t, _t) to (_t+1, _t+1) at each iteration, and ensures that either the function value decreases, f(_t+1) < f(_t), or the volume decreases, (_t+1) < (_t). We give an alternative pseudocode in <ref>, which defines iterations as updates to the iterates _t that decrease the function value, and uses a -loop to backtrack. Since it more closely resemble standard ways backtracking line-search is described, some reader may find it easier to understand. We stress, however, that this is still the same algorithm as <ref> but written differently. 
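A compact sketch of this generic loop in plain NumPy, with the two set-dependent subroutines passed in as callables (for instance the box or ellipsoid versions, and the separating_hyperplane helper, sketched earlier); names and signatures are illustrative rather than the authors' implementation, and the forward step used in the experiments is omitted.

```python
import numpy as np

def sufficient_progress(f, x, g, p):
    """Armijo-type test for P = diag(p):  f(x - P g) <= f(x) - (1/2) ||g||_P^2."""
    return f(x - p * g) <= f(x) - 0.5 * np.dot(p, g ** 2)

def multidimensional_backtracking(f, grad_f, x0, state, candidate, cut, num_iter=100):
    """Each iteration either takes an accepted preconditioned step (the function
    value decreases) or shrinks the set of candidate preconditioners (`state`)."""
    x = x0
    for _ in range(num_iter):
        g = grad_f(x)
        p = candidate(state, g)            # backtracked candidate from the current set
        if sufficient_progress(f, x, g, p):
            x = x - p * g                  # accept: gradient step preconditioned by diag(p)
        else:                              # reject: cut the set with the hypergradient hyperplane
            state = cut(state, separating_hyperplane(f, grad_f, x, p))
    return x
```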
The pseudocode in Figures <ref>–<ref>, are expressed in a modular form to highlight how the algorithm works and its similarity to a line-search. In <ref>, we give a more directly implementable pseudocode of multidimensional backtracking in both box and ellipsoid variants solely relying on vector notation.   §.§ Subroutines for standard backtracking line-search Implementation of the subroutines for the standard backtracking line-search. Although written in terms of sets, the algorithm only needs to maintain the maximum step-size in the interval [0, α_max] at each iteration. The corresponding preconditioners are the matrices = {α : α∈ [0,α_max]}. §.§ Separating hyperplanes used by multidimensional backtracking Both versions of multidimensional backtracking need a direction to update the set of preconditioners in the subroutine. We define the subroutine in <ref>. The description of the separating hyperplane and their properties can be found in <ref> and <ref>. §.§ Multidimensional backtracking using boxes The implementation of multidimensional backtracking with boxes only needs to maintain a vector , representing the maximum step-size for each coordinate that has not been ruled out, in the box (). The associated sets of preconditioners are () = {∈^d : ≤}, = {() : ∈()}. The description of boxes and the theoretical guarantees when using them in multidimensional backtracking can be found in <ref> and <ref>. The subroutines used by the algorithm with boxes are: * : initializes to c_0 so that the diagonal preconditioner c_0 is in _0. * : backtracks from the largest diagonal in (), returning γ(). * : computes the vector defining the half-space of invalid preconditioners _>() obtained when the preconditioner fails the Armijo condition at as described in <ref> and <ref>. * : returns the minimum volume box (^+) containing the intersection () ∩_≤(). §.§ Multidimensional backtracking using ellipsoids The implementation only needs to maintain a vector representing the diagonal of the matrix defining the (centered, axis-alligned) ellipsoid () and the associated set of preconditioners given by () = {∈^d : , () ≤1}, = {() : ∈()}. The description of the ellipsoids and their properties can be found in <ref> and <ref>. The subroutines used by the algorithm with boxes are: * : initializes to (1/d c_0^2) so that c_0 ∈(), implying the diagonal preconditioner c_0 is in . * : backtracks from the diagonal preconditioner in that maximizes the gradient norm. Let () be the set of candidate diagonals and define = (). The subroutine returns γ_max, where _max _∈ ∇f()^2_. Writing this in terms of the diagonal vector _max(_max) yields _max = _∈() ∇f()^2_(), = _ ∇f()^2, : _ ≤1 = ^-1 ∇f()^2/∇f()_^-1, where ∇ f()^2 = ∇ f() ⊙∇ f(). * : computes the vector defining the half-space of invalid preconditioners _>() obtained when the preconditioner fails the Armijo condition at as described in <ref> and <ref>. * : returns an ellipsoid (^+) containing the intersection of () ∩_≤() with guaranteed volume decrease from (). As there is no closed-form solution for the minimum volume ellipsoid, we set ^+ as a convex combination between the original ellipsoid () and the minimum volume axis-aligned ellipsoid containing _≤(), given by (^2), that is, ^+ λ + (1-λ) ^2, where λℓ/dd-1/ℓ-1 and ℓ^2_^-1, where (). Although the above choice of λ has guaranteed volume decrease, we can find a better value of λ by solving the minimum volume ellipsoid as a function of λ numerically. 
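A sketch of this one-dimensional refinement: starting from the closed-form λ, minimize the volume of the new ellipsoid by maximizing its log-determinant (the objective is made explicit in the next paragraph) with a bounded L-BFGS-B solve, as done in the experiments. This is an illustrative implementation under those assumptions, not necessarily the authors' exact code.

```python
import numpy as np
from scipy.optimize import minimize

def refine_lambda(a, u, lam0):
    """Refine lambda by minimizing -log det(lam*diag(a) + (1-lam)*diag(u)^2),
    i.e. the (log-)volume of the enclosing ellipsoid, over 0 < lam < 1."""
    def neg_logdet(lam_vec):
        lam = lam_vec[0]
        return -np.sum(np.log(lam * a + (1.0 - lam) * u ** 2))
    res = minimize(neg_logdet, x0=np.array([lam0]), method="L-BFGS-B",
                   bounds=[(1e-8, 1.0 - 1e-8)])
    return float(res.x[0])
```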
Namely, approximating λ^* _0 < λ< 1 - log((λ() + (1-λ) (^2))).`' In our experiments, we start with λ as in (<ref>) and, starting from it, we solve the above minimization problem numerically using L-BFGS-B <cit.> in SciPy <cit.>. This preserves the theoretical guarantee while improving empirical performance. §.§ Implementable pseudocode The pseudocode in Figures <ref>–<ref> are expressed in a modular form to highlight how the algorithm works and its similarity to a line-search. In this section, we give a more directly implementable pseudocode of multidimensional backtracking, in both the box and ellipsoid variants, using mostly vector notation. Scalar operations on vectors such as /, √(), ^2 are understood to be taken element-wise. § OPTIMAL PRECONDITIONERS, VALID PRECONDITIONERS AND COMPETITIVE RATIOS In <Ref>, we defined the optimal preconditioner as the preconditioner that is the best overall approximation to the inverse Hessian. Formally, we define the optimal diagonal preconditioner as _≻0, diagonal κ such that 1/κ^-1≼∇^2 f() ≼^-1 for all . <ref> One way to interpret this definition is that ^-1 is the tightest diagonal approximation to ∇^2 f(). We remark that we do not need f to be (strongly-)convex to define the theoretically optimal step-size of 1/L for gradient descent. Thus, one may wonder why we need strong-convexity (although we relax this to requiring f to be PL in <ref>) to define what an optimal preconditioner is in (<ref>). The main difference between the scalar step-size and per-coordinate step-sizes settings is whether the “largest” step-size or preconditioner is well-defined. In the scalar setting, the largest step-size that is guaranteed to lead to progress everywhere (i.e., a step-size that satisfies the Armijo condition (<ref>) for all ) is well-defined and equal to α_* 1/L for L-smooth function f. Equivalently, α_* = sup* α> 0∇^2 f() ≼1/α = sup_∈^d λ_max(∇^2 f()), where λ_max(∇^2 f()) is the largest eigenvalue of ∇^2 f(). But in the case of preconditioners, the ordering on positive definite matrices is not complete, so there is no single “largest” preconditioner that satisfies ∇^2 f() ≼^-1. We can still describe “good” preconditioners, that are guaranteed to satisfy the Armijo condition (<ref>) everywhere; this is the notion of valid preconditioners defined in <ref>, which in set notation is {≻ 0: ∇^2 f() ≼^-1}. With this definition, we can consider the set of valid preconditioners for which there are no bigger valid preconditioners, that is, {∈ : ∄ ' ∈ s.t. ≺'}. However, contains incomparable preconditioners, that is, distinct matrices , ∈ that neither ≽ nor ≼ hold. Let us look at an example with a quadratic function (illustrated in <ref>) f() = 1/2, with Hessian = .5 .1 .1 1.0 . There are many preconditioners that are valid,[ Up to invertibility issues which we address in the next subsection. ] for example using the per-coordinate step-sizes _L ≈ .91 0 0 .91 , _1 = 2.0 0 0 0.0 , _2 = 0.0 0 0 1.0 , ≈ 1.75 0 0 0.87 . The preconditioner _L corresponds to the 1/L step-size, _1 and _2 take the largest possible step-size in each coordinate, and is the optimal preconditioner according to <ref>. Those preconditioners are not comparable to each other, as neither _L ≺ nor ≺_L hold. Instead of looking at the matrices themselves, we use in (<ref>) the condition number[Our definition is slightly different, but both notions are equivalent for positive definite .] of ^1/2∇^2 f() ^1/2 as a measure of quality of . 
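The claims in this example are easy to check numerically. The sketch below tests validity (largest eigenvalue of P^{1/2} A P^{1/2} at most 1) and reports the condition number achieved by each of the four diagonal preconditioners listed above, with the numerical values taken from the text.

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.1, 1.0]])                 # Hessian of the quadratic example above

candidates = {                             # diagonals of the preconditioners from the text
    "P_L (~ I/L)":   np.array([0.91, 0.91]),
    "P_1":           np.array([2.00, 0.00]),
    "P_2":           np.array([0.00, 1.00]),
    "P_* (approx.)": np.array([1.75, 0.87]),
}

for name, p in candidates.items():
    M = np.sqrt(np.outer(p, p)) * A        # P^{1/2} A P^{1/2} for P = diag(p)
    eigs = np.linalg.eigvalsh(M)
    valid = eigs.max() <= 1.0 + 1e-9       # validity: P^{1/2} A P^{1/2} <= I
    kappa = eigs.max() / eigs.min() if eigs.min() > 1e-12 else np.inf
    print(f"{name:14s} valid: {valid}   kappa(P^1/2 A P^1/2): {kappa:.2f}")
```

Here all four preconditioners pass the validity check, P_* attains the smallest finite condition number, and the singular P_1 and P_2 bound the Hessian from above but not from below, so they do not achieve a finite condition number in the sense of the definition above.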
This allows for a well-defined optimal preconditioner as this condition number can be maximized. §.§ Defining optimal preconditioners without twice-differentiability or strong-convexity Although we used twice-differentiability of f to define the optimal preconditioner, this is not necessary. If f is not twice-differentiable but still strongly-convex, the definition in <Ref> can be replaced by <ref>, as finding the -norm under which the function is most strongly-convex. = _≻0, diagonal κ such that { [ 1/κ 1/2- ^2_^-1 ≤f() - f() - ∇f(), - ,; f() - f() - ∇f(), - ≤1/2- ^2_^-1, ] . for all , . To avoid strong-convexity, we can instead use the PL inequality. A function f is μ-PL if 1/μ 1/2∇f()^2 ≥f() - f(_*). This property is implied by μ-strong convexity. We refer to the work of <cit.> for the properties of PL functions and its relation to other assumptions. To adapt <ref> to our results, we can measure the PL constant μ in the norm induced by , and say that f is μ-PL in ·_ if 1/μ 1/2∇f()^2_ ≥f() - f(_*). We use this inequality in the convergence proof in <ref> since it is a consequence of strong-convexity. As this property is the only property of strong-convexity needed for our results, we can adapt our results to be competitive with the optimal preconditioner defined using the PL inequality, using the definition ^pl := _≻0, diagonal κ such that { [ 1/κ ∇f()_^2 ≥f() - f(_*) for all ,; f() - f() - ∇f(), - ≤1/2- ^2_^-1, for all , . ] . If f is μ-PL and L-smooth, <ref> has a feasible solution at = 1/L number κ = L/μ. The constraint based on the μ-PL condition in <ref> is weaker than the definition using strong-convexity, as strong-convexity implies the PL inequality. The optimal preconditioner defined using the PL inequality (<ref>) might thus achieve a lower condition number than the one using strong-convexity (<ref>). For example, the quadratic f() = (1/2), with a positive semi-definite is not strongly convex if the smallest eigenvalue of is 0. The optimal preconditioner in <ref> is ill-defined (or has condition number κ_* = ∞). In contrast, the optimal preconditioner defined using the PL inequality in <ref> has a finite condition number, as = 1/L is a feasible solution with condition number κ = L/λ_min^+() where λ_min^+() is the smallest non-zero eigenvalue of . As our proofs only use the properties guaranteed by <ref>, our results also apply to PL functions. §.§ Valid and optimal preconditioners with singular matrices In the main text, we defined valid preconditioners (<ref>) only for positive definite matrices for ease of presentation. The notion of valid preconditioners can be extended to general positive semidefinite matrices. In the diagonal case, the convention 1/0 = +∞ is a useful mental model but can cause inconsistencies (such as ∞· 0). To extend the notion of valid preconditioners to general positive semidefinite matrices, we can use the definition A preconditioner ≽ 0 is valid if ^1/2∇^2 f() ^1/2≼ I for all ∈^d. The above is well-defined for all positive semidefinite matrices. An alternative to arrive at a definition closer to <ref> is to consider the projection matrix Π_ onto the image of , given by Π_ = ^1/2(^1/2)^† where ^† is the Moore-Penrose pseudo-inverse of . Using that, one can show that is valid (according to <ref>) if and only if Π_∇^2 f() Π_≼^† for all ∈^d. An example of a valid preconditioner that is covered by <ref> but not <ref> is the all-zeroes matrix. <ref> can seamlessly replace <ref>, and all the results follow similarly. 
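As a quick numerical illustration of this validity criterion: for a quadratic, the check P^{1/2} ∇^2 f(x) P^{1/2} ≼ I reduces to a single eigenvalue bound and handles singular diagonal preconditioners without any 1/0 convention. The sketch below (NumPy; the helper name is ours) reuses the 2×2 example Hessian and the preconditioners P_L, P_1, P_2, P_* from the previous section.

```python
import numpy as np

def is_valid_diagonal(b, H, tol=1e-12):
    """Check validity of P = diag(b) for a quadratic with constant Hessian H,
    i.e. whether P^{1/2} H P^{1/2} <= I. Entries of b may be zero (singular P)."""
    S = np.diag(np.sqrt(b))               # P^{1/2}
    return np.max(np.linalg.eigvalsh(S @ H @ S)) <= 1.0 + tol

H = np.array([[0.5, 0.1],
              [0.1, 1.0]])                # example Hessian from the text

for name, b in [("P_L", [0.91, 0.91]),
                ("P_1", [2.0, 0.0]),      # singular: ignores the second coordinate
                ("P_2", [0.0, 1.0]),      # singular: ignores the first coordinate
                ("P_*", [1.75, 0.87])]:
    print(name, is_valid_diagonal(np.array(b), H))
```

All four preconditioners, including the singular P_1 and P_2, pass the check, consistent with the discussion above.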
Moreover, notice that the optimization problem defining the optimal preconditioner (<ref>) may not attain its minima on positive definite matrices when f is not strongly convex. In this case, we can define an optimal preconditioner as a limit point of a sequence that attains in the limit the value in (<ref>) by replacing the minimum with an infimum. In this case, an optimal preconditioner may be singular, but the results in the main body also follow seamlessly using this definition. We decided to restrict our attention to non-singular preconditioners in the main paper for ease of exposition, since when f is strongly-convex, an optimal preconditioner is always non-singular. §.§ Best competitive ratio achievable by the optimal preconditioner In <ref>, we mentioned that the optimal preconditioner could be only 1/d-competitive. In fact, the competitive ratio of can be arbitrarily bad. The reason for this is that the competitive ratio γ does not compare against , but rather against any in the set of potentially valid preconditioners. Moreover, this definition only takes into account the norm ∇ f()_ at a fixed , while the optimal preconditioner needs to have large norm for all . For example, consider the scalar step-size case. If our current interval of candidate step-sizes to try is = [0,1] but the optimal step-size α_* is small, let us say α_* = 1/10, then α_* is only 1/10-competitive in . The motivation for this definition of competitive ratio is that we cannot check whether α is large compared to α_* (as we do not know α_*) but we can more easily ensure that a candidate step-size α is γ-competitive in (for example α = 1/2 is 1/2-competitive in [0,1]). In the previous example, the bad competitive ratio of α_* in was mostly due to the fact that was large and that, for some , step sizes larger than α_* could satisfy the Armijo condition (<ref>). Even if α_* is globally optimal, we could make more progress by using a larger step-size if they were to be accepted, and we have not yet ruled out those step-sizes. However, as shrinks, it may eventually converge to the interval [0,1], in which case the optimal step-size α_* would be 1-competitive. In high dimensions however, the optimal preconditioner can have a competitive ratio of 1/d even when comparing only against valid preconditioners.[ How small the set _t can get is bounded by construction. The cutting plane procedure in <ref> only remove invalid preconditioners. The valid preconditioners contained in the initial set _0 will always be in _t, along with possibly more preconditioners that have not been deemed invalid over the course of optimization. ] This is because the competitive ratio is defined using the -norm of the gradient, and we need to take the direction of the gradient into account. For example, consider the quadratic function (illustrated in <ref>) f() = 1/2, where = 1 -1 -1 1 , with eigenvalues {2,0} as = [-1,1]^ [-1,1]. The following three preconditioners are all valid: _1 = 1 0 0 0 , _2 = 0 0 0 1 , and = 1/2 0 0 1/2 . The preconditioner _1 takes the largest possible step-size in the first coordinate and ignores the second, while _2 does the opposite. They are not good global preconditioners, as each ignores one coordinate. Yet, they can make much more progress (i.e., the objective value may decrease more) than the optimal preconditioner if the gradient is very skewed towards one coordinate. 
This implies that may be only 1/2-competitive in {_1, _2} for some since if ∇f() = 1 0, then ∇f()__1^2 = 1, ∇f()__2^2 = 0, ∇f()_^2 = 1/2, and  if ∇f() = 0 1,  then ∇f()__1^2 = 0, ∇f()__2^2 = 1, ∇f()_^2 = 1/2. The preconditioner is still a better choice globally (i.e, for all ) since it ensures optimal worst-case linear rate in preconditioned gradient descent. But there are better preconditioners that depend on the current gradient. We exploit this in the ellipsoid variant of multidimensional backtracking to improve our competitive ratio. We backtrack from the preconditioner that maximizes the local progress guarantees to ensure a 1/√(d) competitive ratio, while ensuring volume shrinkage of the set of candidate preconditioners when we call , if the preconditioner fails the Armijo condition. § SEPARATING HYPERPLANES In this section, we prove Propositions <ref> and <ref> on existence and strengthening of separating hyperplanes for valid preconditioners. General idea. Let us start with a summary of the separating hyperplanes used to search for good preconditioners as discussed in <Ref>. The goal of the separating hyperplanes is to give us ways to shrink the initial set of potential preconditioners to narrow in on valid preconditioners using the cutting-plane methods in <ref>. At each iteration we are looking for preconditioners that satisfy the Armijo condition at given by f(- ∇f()) ≤f() - 1/2∇f()^2_. If fails the Armijo condition, we conclude that is invalid. To obtain more information, we look at the condition as a function of the (diagonal of the) preconditioner, and define the gap function at , h() f(- ()∇f()) - f() + 1/2∇f()_()^2, ∀∈^d. Then, h() ≤ 0 if = () satisfies the Armijo condition at , and h() > 0 otherwise. Any preconditioner () such that h() > 0 is guaranteed to be invalid. We can use the gradient of h at and convexity to find a half-space such that one side contains only preconditioners with h() > 0. In this section, we show how to construct such half-space, and strengthen them using the partial order on matrices, which is needed to ensure volume shrinkage of our cutting plane methods. §.§ Stronger hyperplanes In the main body we presented the strengthening of separating hyperplanes via truncation (<ref>) after the result of existence of separating hyperplanes (<ref>). Here, we prove a more general lemma on strengthening half-spaces of invalid preconditioners first, as it is useful in simplifying the proof of <ref>. <ref> follows directly from the following lemma. Let _, α be the intersection of the non-negative orthant ^d and the half-space defined by the vector ∈^d and coefficient α > 0, _,α∈^d, > α. Define max{, 0 } and let _, α be defined similarly as above, that is, _, α∈^d, > α. If _,α only contains diagonals of invalid preconditioners, that is, () is invalid for any ∈_, Then _, α⊆_, α and _, α only contains diagonals of invalid preconditioners. Inclusion _,α⊆_,α. We have that , > α implies , > α for any ∈^d since , = ∑_i [i] ≥ 0[i] [i] + ∑_i [i] < 0[i] [i] ≤∑_i [i] ≥ 0[i] [i] = ∑_i [i] ≥ 0[i] [i] = , . _, α only contains invalid diagonals. Let _∈_, α. We can show that (_) is invalid by finding _∈_, α such that (_) ≼(_). Since (_) is invalid by assumption, this would imply that (_) is also invalid. To find _, we can truncate the entries of _ as _[i] _[i] if [i] ≥ 0 0 otherwise, ∀ i ∈{1, …, d}. Then _∈_, α since α < , _ = , _ = , _. 
[ One may worry that our original definition of valid preconditioners has a division by 0 if any entry of the preconditioner is 0 as a preconditioner is valid if ∇^2 f() ≼^-1 (<ref>). It is enough to use the convention that 1/0 = +∞, although this might lead to inconsistencies. In <ref> we discuss a more general definition without the use of infinities. ] and (_) ≽(_), as desired. §.§ Separating hyperplanes for invalid preconditioners We are now in position to prove <ref>. Throughout the proof, we shall denote by the matrix (). If f is convex, then h also is since the map ∈^d ↦ f( - ∇ f()) is the composition of an affine transformation and a convex function, and *∇ f()_^2 = ∇ f(), () ∇ f() is linear in . Convexity of h yields the inequality h() ≥ h() + ∇ h(), - , ∀∈^d. This implies that if is such that h() + ∇ h(), - > 0, then h() > 0, which implies that () is an  preconditioner. Rearranging we conclude that () is invalid for all in the set in (<ref>), i.e., in ∈^d∇ h(), > ∇ h(), - h() We express the above half-space as _>() = {: , > 1} for ∇h()/∇h(), - h(). Yet, for _>() to be equivalent to the set in (<ref>) or even to be well-defined, we need to ensure ∇ h(), - h() > 0. To see that this holds, note first that by convexity of h and that fact that h(0) = 0 we have h(0) ≥ h() + ∇ h(), 0 - ∇ h(), - 0 - h() ≥ - h(0) = 0 To show that the last inequality is strict, assume that ∇ h(), - 0 - h() = 0 for the sake of contradiction. By <ref>, the half-space ∈^d[∇ h()]_+, > 0 contains only diagonals of invalid preconditioners, where [∇ h()]_+ max{∇ h(), 0} entry wise. However, (1/L)∈ as [∇ h()]_+ ≥ 0 and should be invalid, which is a contradiction since f is L-smooth and 1/L is valid. Therefore, ∇ h(), - 0 - h() > 0. Finally, we can write in terms of f and . To do so, first define ^+ - ∇ f(), and the gradients of f at different points by ∇ f() and ^+ ∇ f(^+). Then, by the chain-rule, ∇ h() = -∇ f( - ∇ f()) ⊙∇ f() + 1/2∇ f() ⊙∇ f() = - ^+ ⊙ + 1/2⊙, which implies ∇ h(), - h() = - ^+, + 1/2, - f(^+) + f() + 1/2, = f() - ^+, - f(^+). Plugging these equations in the definition of yields = ∇ h()/∇ h(), - h() = (1/2 - ^+) ⊙/ f() - ^+, - f(^+) . Remark on assumptions of <Ref>. One may have noticed that we never use the assumption that fails the Armijo condition (i.e., that h() > 0) in the proof of the proposition. In fact, the proposition holds for any ∈^d. However, and crucially for our application, we have that is in the half-space _>() of invalid diagonals from <Ref>. In multidimensional backtracking, is the diagonal of a preconditioner () that failed the Armijo condition h() > 0. Since is close to the origin in multidimensional backtracking, we can ensure the half-space _>() contains a significant portion of our current set of candidate preconditioners, leading to significant shrinkage of the set of candidate preconditioners whenever is invoked. § CUTTING-PLANE METHODS §.§ Boxes Given a box () for some ∈^d and a vector ∈^d, our cutting plane method needs to find a box (^+) that contains () ∩_>() which, hopefully, has smaller volume than (). The next lemma gives a formula for the minimum volume box for any , which is used in the main text to define in <ref>. Moreover, we show that if the half-space _>() is close enough to the origin (since otherwise we might have ^+ =), then we have a significant volume decrease. Let ∈^d and ∈(). Let ∈^d. 
Then the box (^+) with minimum volume that contains () ∩_≤() is given by (using the convention that 1/[i] = + ∞ if [i] = 0) ^+[i] min{[i], 1/[i]}, ∀ i ∈{1, …, d}, Moreover, if (1/2d) · is excluded by the half-space, that is, ∈_>(), then ((^+)) ≤ (1/(d + 1))((^+)). Formula for ^+. Finding the minimum volume box containing () ∩_≤(), ^+ = _∈^d(()) s.t. () ∩_≤() ⊆(), is equivalent to finding the solution to the following optimization problem: ^+ = _∈^d∏_i [i] s.t. max_∈() ∩_≤()[i] ≤[i] for each i ∈{1, …, d}. As the constraints separate over the coordinates, the minimization can be done for each coordinate separately. As the function is increasing in [i], the minimum is achieved by making all the constraints tight, which giver the formula for ^+ in the statement of the lemma. Volume decrease. Let us prove the second part of the statement. Thus, assume for the remainder of the proof that (1/2d) ·∈_>(). We first show that ((^+)) ≤ (1/(d + 1))((^+)) if we assume that the update from () to (^+) shrinks the box in only one coordinate, i.e., i ∈ [d][i] > 1/[i] = i ∈ [d]^+[i] ≠[i] has exactly one element. Assume the above holds and = {j}. Then, as (1/2d) ·∈_>() implies , (1/2d) > 1, 1 < , (1/2d)≤1/2d([j][j] + d - 1) (d + 1) 1/[j]≤[j]. This together with the fact that ^+[i] = [i] for all i ≠ j and ^+[j] = 1/[j] yields ((^+)) = ∏_i = 1^d ^+[i] = 1/[j]·∏_i ≠ j[i] ≤1/d+1∏_i = 1^d [i] = 1/d+1(()). To complete the proof, we only need to show we may assume (<ref>) holds. Assume the opposite, that is, that there are two distinct coordinates that shrink from ^+ to . We will show that the volume shrinks more, meaning the above bound also applies. Formally, assume there are j,k ∈ that are distinct. For this part, it will be useful to denote by ^+() the point defined in <ref> for a given vector . We will show we can construct ' ∈^d such that (^+()) ≤(^+(')) while maintaining the property (1/2d) ∈_>(') and such that ^+(')[i] ≠[i] for all i ∈∖{j}, which makes (<ref>) follow by induction. Indeed, define ' ∈^d by '[i] [i] for i ∉{j,k}, '[j] 1/[j], and '[k] [k] + [j]/[k][j] - 1/[j]. First, note that (1/2d)∈_>(') since ' - , = [j]('[j] - [j]) + ('[k] - [k]) [k] = [j](1/[j] - [j]) + ([j]/[k]( [j] - 1/[j])) [k] = 0 and, thus, 1 < , (1/2d) = ', (1/2d). Let us now show that ((^+())) ≤((^+('))). Since ^+()[i] = ^+(')[i] for i ∉{j, k}, we have ((^+()))/((^+('))) = ^+()[j]/^+(')[j]·^+()[k]/^+(')[k] = min([j], 1/[j])/min([j], 1/'[j])·min([k], 1/[k])/min([k], 1/'[k]) = 1/[j]/[j]·1/[k]/1/'[k] (since j,k ∈ and by (<ref>)) = 1/[j] [j]·1/[k]([k] + [j]/[k][j] - 1/[j]) = 1/[j] [j]·1/[k] [k]([k] [k] + [j] [j] - 1). To get that (^+()) ≤(^+(')), we can show that last line is bounded by <1. Using the substitution α[j] [j] and β[k] [k], we want to show that α + β -1/αβ < 1 αβ - α - β + 1 > 0 (α -1)(β -1) > 0. This holds if α > 1 and β > 1, is implied by j,k ∈ since α = [j] [j] > 1 and β = [k] [k] > 1. A simple induction shows we may assume (<ref>) holds. To see that (α + β - 1)/αβ < 1, note that Equipped with the above lemma, we are in position to prove <ref>. Property (a), holds by induction because, for any _t used in a call to , we have ^* ∈_≤(_t) since ^* is valid and since by <ref> the half-space _≤(_t) contains only diagonals of invalid preconditioners. For (b), fix t ∈{1, …, T} and recall that in this case we have _t = ()∈(_t) and = (1/2d) ·(_t). The competitive ratio of 1/2d follows since (_t) is the preconditioner that maximizes ∇ f(_t)_ for ∈(_t). 
Finally, for (c) by <ref> we have that every call to makes the volume of the set decrease by 1/c 1/(d+1). Moreover, one can easily verify that _t[i] ≥min{1/L, _0[i]} for all i ∈{1, …, d} since ((1/L)) contains only diagonals of valid preconditioners. Therefore, for _min[i] min{1/L, _0[i]}, the volume of (_t) cannot be smaller than (_min) for all iteration t. Therefore, the number of times is invoked is no more than log_c((_0))/((_min)) = log_c∏_i = 1^d _0[i]/_min[i]≤log_c((_0_∞ L)^d) = d log_c(_0_∞ L). as desired. §.§ Axis-aligned ellipsoids We now analyze the cutting-plane method using axis-aligned ellipsoids. Interestingly, the results that we prove in this sections are connected to some of the results from <cit.> via polarity theory. We defer a discussion on this connection to the end of this section. Different from the main body, it will be helpful for the analysis of the method and proofs of the results to not restrict ellipsoids to the non-negative orthant, as was done in the main text for ease of exposition. For any symmetric positive definite matrix ∈^d × d, define the ellipsoid given by by () {∈^d , ≤ 1 }. When is diagonal, we say that () is axis-aligned. Moreover, we may slightly overload our notation by defining () (()). General ellipsoids. Although we are ultimately interested in working solely with ellipsoids defined by diagonal matrices, we will start by looking at more general ellipsoids, and then exploit symmetry in our case to derive the result in <ref>. We start with an ellipsoid () where is a positive definite matrix. Then, given a vector ∈^d, we are interested in finding an ellipsoid the intersection of () with the half-spaces defined by and - that contain the origin, that is, the set () ∩∈^d, < 1∩∈^d- , < 1 =() ∩∈^d|, | < 1. The following theorem shows how to find an ellipsoid that contains the above intersection, and how to guarantee its volume is smaller than () if is large enough. Interestingly, note that ∈^d|, | < 1 = ∈^d(, )^2 < 1 = (^). The set (^) is a degenerate ellipsoid, in the sense that it is not a compact set, and any orthogonal to is contained in (^⊤). Still, the next theorem shows how to find a convex combination of () and (^)—which always contains () ∩(^)—that is guaranteed to have volume smaller than () if is large enough. The following result can be seen as the polar result of <cit.>. Let ∈^d × d be positive definite and let ∈^d. Let λ∈ (0,1) and define L(, ) λ + (1 - λ) ^. Then ()∩(^) ⊆(L(, )) and ((L(, ))) = √(λ/λ + (1 - λ)·ℓ·1/λ^d)·(()) In particular, if ℓ_^-1^2 > d and λ = ℓ/d·d-1/ℓ-1, then λ∈ (0,1) and ((L(, ))) = ν_d() (()) where ν_d() = √(1/λ^d·d-1/ℓ-1) = d/ℓ^d/2ℓ-1/d-1^(d-1)/2∈ (0,1). First, note that for any ∈()∩(^) and any λ∈ (0,1) we have , L(, ) = λ, + (1 - λ) , ≤λ + (1 - λ) = 1. Thus, (L(, )) ⊆() ∩(^). For the volume decrease, recall that for ellipsoids () we have (()) = V_d/√(()) where V_d is the volume of the unit sphere in ^d. By the matrix-determinant lemma, we have (L(, )) = 1 + 1 - λ/λ·, ^-1(λ) = 1 + 1 - λ/λ·ℓλ^d (). Therefore, ((L(, ))) = √(1/1 + 1 - λ/λ·ℓ·1/λ^d)·(()) = √(λ/λ + (1 - λ)·ℓ)·1/λ^d·(()). Finally, for λ defined as in (<ref>) we have 1 + 1 - λ/λ·ℓ = 1 + 1 - ℓ (d-1)/d (ℓ -1)d (ℓ -1)/ℓ (d-1)·ℓ = 1 + d (ℓ -1)/ℓ (d-1) - 1·ℓ, = 1 + d (ℓ -1) - ℓ(d-1)/ℓ (d-1)·ℓ = 1 + ℓ - d/d - 1 = ℓ -1/d -1, which yields the desired formula for ν_d(). On the norm of . The above theorem has a requirement on the norm of the vector that defines the half-space _≤(). 
However, in our cutting plane method we obtain from <ref> and <ref>, which do not have any guarantees on the norm of explicitly. Crucially, at any given iteration t of multidimensional backtracking with ellipsoids, we select a candidate preconditioner = (_t) such that _t_ = 1/√(2d). Then, if it fails the Armijo condition in (<ref>) and _t is as given by <ref>, then we have _t ∈_>(_t), that is, the separating hyperplane excludes _t. As we will show, this implies that _^-1 is large. Let ∈^d × d be positive definite and ∈^d be such that _≤γ for some γ > 0. Let ∈^d be such that ∈_>(). Then _^-1 > 1/γ. For the sake of contradiction, assume _^-1≤1/γ. Then _^-1·_≤ 1. Thus, by the Cauchy-Schwartz inequality, , = ^-1/2, ^1/2≤^-1/2·^1/2 = _^-1·_≤ 1. This is a contradiction since ∈_>() and, therefore, , > 1. On the volume decrease. Although the formula ν_d() in <ref> can be hard to interpret, we show a simple bound when _^-1^2 ≥ 2d. Let ∈^d × d be a positive definite matrix and ∈^d be such that _A^-1^2 > d. For c d/ℓ∈ (0,1) we have ν_d() ≤√(c · e^1 - c), where ν_d is defined as in (<ref>). In particular, if _A^-1^2 > d, then ν_d() ≤√(e)/√(2). Define ℓ_^-1^2 > d and c d/ℓ∈ (0,1). Then, 4ν_d()^2 = d/ℓ^dℓ-1/d-1^(d-1) = d/ℓ·d/ℓ·ℓ-1/d -1^(d-1) = c ·c ·d/c-1/d -1^(d-1) = c · d - c/d -1^(d-1) = c ·1 + 1 - c/d -1^(d-1) ≤ c · e^1 - c, where the last inequality follows since 1 + x ≤ e^x for all x ∈. In particular, note that c ∈ (0,1) ↦ c · e^1 - c is increasing since the derivative of the mapping is positive on (0,1). Thus, if _^-1≥ 2d, then c ≤1/2 and c · e^1-c≤ (1/2) · e^1/2. Exploiting symmetry. Let us now exploit symmetry to avoid using non-diagonal matrices in our ellipsoids. We use the notion of axis-aligned sets in the next few results. A set ⊆^d is axis-aligned if for any point ∈, the reflections of along the axes are also contained in . Formally, for any ∈{± 1}^d, we have that if ∈, then () ∈. Furthermore, with a slight abuse of notation define () (()). That is, () is the diagonal matrix whose diagonal entries match those of . The idea is that the set ∈^d() is valid of diagonals of valid preconditioners is contained in the non-negative orthant. Yet, we can extend it by reflecting it over each of the axes. Although this may seem counter-intuitive, this translates the structure of our problem into symmetry among all orthant, and this can be exploited elegantly. Formally, the set of diagonals of valid preconditioners reflected over each axis is given by set ∈^d() is valid, where is the entry-wise absolute value of ∈^d. The following lemma shows that when looking for low volume ellipsoids that contain an axis-aligned set, we can restrict out attention to axis-aligned ellipsoids, defined by a diagonal matrix. The following lemma can be seen as the polar statement of <cit.>, with the benefit of not requriring any matrix inversions. Let ⊂^d be an axis-aligned convex set and let ∈^d × d be positive definite matrix such that ⊆(). Then ⊆(()) and ((())) ≤(()). Let us start by showing that ⊆(()). We use the notation () · to denote the set () ·() ·∈. Since is axis-aligned, we have = () ·⊆() ·() = (() ()), ∀∈{± 1}^d. Therefore, is contained in each of the 2^d ellipsoids of the form (() ()). Thus, ⊆⋂_∈{± 1}^d(() ()) ⊆(1/ 2^d∑_∈{± 1}^d() () ), where the last inclusion follows since, for any set of positive definite matrices , one may verify that ∩_∈() ⊆((1/||) ∑_∈). Finally, note that ∑_∈{± 1}^d() () = (). Indeed, let i,j ∈{1, ⋯, d}. If i = j, then (() ())_i,j = _i,j for any ∈{± 1}^d. 
If i ≠ j, then ∑_∈{± 1}^d (() ())_i,j = ∑_∈{± 1}^d [i] ≠[j] (() ())_i,j + ∑_∈{± 1}^d [i] = [j] (() ())_i,j = 2^d-1· (-_i,j) + 2^d-1·_i,j = 0. Let us now show that ((())) ≤(()). Note that log((())) = log((())) - 12log(). Since log(·) is concave over positive definite matrices, we have log (()) = log(1/ 2^d∑_∈{± 1}^d() () ) ≥1/ 2^d∑_∈{± 1}^dlog( () () ) = 1/ 2^d· 2^d log() = log(). Therefore, log(((()))) = log((())) - 12log(()) ≤log((())) - 12log() =log((())), which implies that ((())) ≤(()). We are now in position to prove <ref>, which follows directly from the previous two results. By the assumptions in <ref> we have that () fails the Armijo condition <ref> condition and, thus, ∈_>(). This together with the assumption that _≤1/√(2d) imply via <ref> that _^-1≥√(2d). This allows us to use <ref> to find a new ellipsoid containing () ∩_≤() with the required volume decrease by <ref>. Yet, this ellipsoid may not be axis-aligned. We shall exploit the symmetry described in <ref> to show that the axis-aligned ellipsoid (^+(, )) enjoys the same guarantees. Formally, we need (^+(, )) to contain ()∩_≤(). Since ≥ 0, we have _≤() ⊆() ∈() ·∈_≤() for all ∈{± 1}^d. Thus, it suffices for (^+(, )) to contain () ∩(). From <ref> we know that () ∩() is contained in the ellipsoid given by the matrix λ() + (1 - λ) ^ for any λ, in particular for λ as in (<ref>) since _^-1 > √(d). Since () is axis-aligned, we can exploit symmetry using <ref>, which tells that () ∩() is contained in the ellipsoid given by the matrix (λ() + (1 - λ) ^) = (^+(, )), as desired. Finally, the bound on the volume follows by <ref> and the bound on ν_d() given by <ref> since _^-1≥√(2d). Finally, we are in position to prove <ref>, which follows almost directly from <ref>. Note that (a) holds by induction and since, by <ref>, we have (^*) ∈_≤(_t) for any _t used in a call to . For (b), fix t ∈{1, …, T} and recall that in this case we have _t = ()∈(_t). As described in (<ref>), one may verify that (_t^*) for _t^* given by _t^* 1/∇ f(_t)^2__t^-1·_t^-1∇ f(_t)^2 maximizes ∇ f(_t)_ for ∈_t. Since = (_t, 1/√(2d), _t) = 1/√(2d)(_t^*), we conclude that is 1/√(2d)-competitive. For (c), first note that we may assume (1/L)∈(α_0 I). To see that, assume (1/L) ∉(α_0 I), implying α_0 d > L^2. In this case, any candidate preconditioner computed by is always valid and, thus, we never call . To see this, let _0 α_0 be the matrix defining the initial ellipsoid. Then, by the definition of for ellipsoids we have that _0 = (_0) is such that _0__0^2 = α_0_0^2 = 1/2d < 1/2α_0/L^2. Therefore, _0[i] ≤1/L for all i ∈{1, …, d}, which implies that _0 is valid since _0 ≼1L. Let us look now at the case (1/L)∈(α_0 I). Therefore, (1/L) ⊆(_t) for all iterations t. Since the minimum volume ellipsoid containing the box ((1/L) ) is the unit sphere of radius 1/L, that is, ((L^2/d) ). Therefore, ((_t)) ≥(((L^2/d) )). Moreover, every time we call cut the volume of the ellipsoid goes down by 1/c √(e)/√(2). Therefore, the total number of calls to is no more than log_c((α_0 ))/(((L^2/d) )) = log_cL^d/d α_0^d/2≤d/log(c)logL/d α_0≤ 12 d logL/α_0 since log(c) ≥ 1/12. Refining the choice of λ. Although we have shown in <ref> a choice a λ that guarantees volume decrease, it may be sub-optimal. The choice of λ in <ref> is inherited from the non-symmetric case in <ref>. Although <ref> and <ref> match when has only one non-zero entry, we should expect better choices of λ, leading to more volume shrinkage, to be possible in <ref>. 
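To make the preceding discussion concrete, here is a sketch of the axis-aligned ellipsoid cut with the closed-form λ (NumPy; the function name and the toy dimensions are ours). The refinement discussed next simply replaces this closed-form λ by a numerically optimized one.

```python
import numpy as np

def ellipsoid_cut(a, u):
    """Axis-aligned ellipsoid cut (sketch). Given the diagonal `a` of the current
    ellipsoid E(diag(a)) and a non-negative hyperplane vector `u`, return the
    diagonal of a smaller axis-aligned ellipsoid containing
    E(diag(a)) ∩ {b : <u, b> <= 1}. Requires ||u||_{A^{-1}}^2 > d so that the
    convex-combination weight lies in (0, 1)."""
    ell = np.sum(u ** 2 / a)                    # ||u||^2 in the diag(a)^{-1} norm
    d = a.size
    lam = (ell / d) * (d - 1) / (ell - 1)       # closed-form weight from the text
    return lam * a + (1 - lam) * u ** 2

# Toy check of the guaranteed volume decrease (d = 10, ||u||_{A^{-1}}^2 = 3d):
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=10)
u = rng.uniform(0.0, 1.0, size=10)
u *= np.sqrt(3 * a.size / np.sum(u ** 2 / a))   # rescale so that ell = 3d > d
a_plus = ellipsoid_cut(a, u)
print(np.sqrt(np.prod(a / a_plus)))             # vol(E(A+)) / vol(E(A)) < 1
```

The printed ratio is the factor by which the candidate set shrinks in this toy example.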
Although we have not found a choice of λ that is dependent on that generically improves upon (<ref>), in practice we can solve for a better λ numerically, by directly minimizing the volume of the resulting ellipsoid, min_0 < λ< 1 ((λ+ (1-λ) (^⊤))) = min_0 < λ< 1 -∑_i log(λ[i] + (1-λ)[i]^2). As the problem is one-dimensional, numerical solvers can often find near-optimal solutions. By warm-starting a numerical solver with the λ defined in (<ref>), we can guarantee that the resulting ellipsoid leads to a smaller volume and we do not lose our worst-case theoretical guarantees. Connection to the polar problem and related work. Our results have an interesting connection to some of the results from <cit.>, via the use of polarity theory. Here we give a quick overview of their work and the connection to our cutting plane methods. <cit.> shows techniques to approximate some polyhedron ⊆^d (a polymatroid being one of the main examples) from inside by some ellipsoid (). Their algorithm maintains an ellipsoid () ⊆ and tries to iteratively enlarge it. They assume access to an oracle such that, at each iteration, either finds a point ∈ that is sufficiently far from (), meaning _ > √(d) + ϵ for some ϵ > 0, or guarantees that () “approximates well” from inside in the sense that _≤ (√(n) + ϵ)/α for all ∈, where α > 0 is some approximation factor. In their algorithm, when the oracle finds a point ∈ such that _ > √(d) + ϵ the algorithm needs to find an ellipsoid (^+) such that (^+) ⊆conv(() ∪{, -}), where conv() is the convex hull of . Interestingly, the polar problem is exactly what we need for out cutting plane method. More precisely, the polar set ^* of a set is given by ^* z ∈^dz, x≤ 1. Then, by taking polars and using that ()^* = (^-1), we have that ^* ⊆(^-1). Moreover, taking polar on both sides of (<ref>) yields that an equivalent problem is finding (^+)^-1 such that ((^+)^-1) ⊇(^-1) ∩{-, }^* = (^-1) ∩ |, | ≤ 1. That is, the problem is the one of finding a smaller ellipsoid ((^+)^-1) that contains (^-1) ∩ |, | ≤ 1, which is broadly the goal of the subroutine . § EXPERIMENTS Objective functions We use L_2-regularized linear regression _linear and L_2-regularized logistic regression _logistic(), with a regularization coefficient of 1. Given a data matrix ∈^n× d, target y ∈^n for regression tasks and y ∈{0,1}^n for classification tasks, and parameters ∈^d, _linear() = 1/n 1/2 - ^2 + 1/2^2 . _logistic() = 1/n∑_i=1^n -[i] log(σ(_i, )) - (1-[i]) log(1-σ(_i, )) + 1/n1/2^2. where _i is the ith row of and σ is the sigmoid function, σ(z) = 1/1+exp(-z). For all datasets, we add a bias term by prepending a feature column of ones to . Datasets We use the datasets listed in <ref>, made available by LIBSVM <cit.>, Scikit-Learn <cit.> and the UCI repository <cit.>. Data rescaling We do not rescale, standardize or otherwise change any of the datasets beyond adding a bias term, as our goal is to check whether preconditioned methods can handle badly scaled data. Initializations We consider two types of initializations. The first approximates a “best-case” scenario where we start from an estimate with a reasonable loss value despite the bad scaling of the data. We set [i] = 0 except for the bias term [0] which is set at the MLE of the non-regularized problem, [0] = y̅ where y̅ = 1/n ∑_i=1^n [i] for linear regression, [0] = logy̅/1-y̅ where y̅ = 1/n ∑_i=1^n [i] for logistic regression. The results in the main text use this initialization. 
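For concreteness, the objectives and the first ("best-case") initialization described above can be sketched as follows (NumPy; function names are ours, the data matrix X is assumed to already contain the prepended column of ones, and the placement of the 1/n factor on the regularizers follows the formulas as printed).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f_linear(w, X, y):
    n = X.shape[0]
    return (0.5 / n) * np.sum((X @ w - y) ** 2) + 0.5 * np.sum(w ** 2)

def f_logistic(w, X, y):
    n = X.shape[0]
    p = sigmoid(X @ w)
    return (1.0 / n) * np.sum(-y * np.log(p) - (1 - y) * np.log(1 - p)) \
        + (0.5 / n) * np.sum(w ** 2)

def init_best_case(X, y, task="linear"):
    """Zeros except the bias (first) coordinate, set to the MLE of the
    non-regularized bias-only model: the mean for linear regression,
    the log-odds for logistic regression."""
    w = np.zeros(X.shape[1])
    ybar = y.mean()
    w[0] = ybar if task == "linear" else np.log(ybar / (1.0 - ybar))
    return w
```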
The second initialization takes ∼0,, giving a starting point with potentially large loss. We give results using both initializations in the appendix. Optimizers used * For the small linear regression problems, we use preconditioned gradient descent with the optimal preconditioner, pre-computed using the semidefinite formulation of <cit.>, solved using CVXPY <cit.> based on the Matlab implementation of . <https://github.com/Gwzwpxz/opt_dpcond> * Gradient descent with a backtracking line-search with backtracking parameter γ = 1/2. * RPROP <cit.> following the implementation and default hyperparameters in PyTorch <cit.> (starting step-size of 10^-1, increase step-size factor η^+ = 1.2, decreasing step-size factor η^- = 0.5, minimum step-size of 10^-6 and maximum step-size of 50). <https://github.com/pytorch/pytorch/blob/v2.0.1/torch/optim/rprop.py> * Hypergradient descent to set the step-size, using (S)GD-HD <cit.>. The hypergradient step-size is set to the default β = 0.02 <cit.>. The initial step-size is set to α_0 = 10^-10, as otherwise most runs diverged immediately. * The diagonal Barzilai-Borwein method of <cit.>, using their non-monotonic line-search. We use the default parameters suggested; a starting step-size of 10^-6, regularization factor on the previous diagonal approximation μ = 10^-6, a backtracking factor of 1/2 for the backtracking line-search and a window of 15 steps for the non-monotone line-search. This line-search does not use a forward step as the update can increase the preconditioner. * Preconditioned gradient descent using the diagonal Hessian, with a backtracking line-search. * AdaGrad <cit.> but augmented with a backtracking line-search as suggested by <cit.> to make it competitive in the deterministic setting, following the PyTorch <cit.> implementation. <https://github.com/pytorch/pytorch/blob/v2.0.1/torch/optim/adagrad.py> Line-search and forward steps For all methods, the backtracking line-search is augmented by a forward step. When a step-size is accepted, it is increased by a factor of 1.1 for the next step. For multidimensional backtracking, we increase the set uniformly, taking ' = 1.1 · for the box and ' = / √(1.1) for the ellipsoid. The ellipsoid uses a slightly smaller increase factor.[ To increase by a factor of 1.1 in the one-dimensional case, the update to the ellipsoid should be ' = / 1.1^2. ] Hyperparameters for the line-search and multidimensional backtracking For the backtracking line-searches used in gradient descent, preconditioned gradient descent and used to augment the other algorithms, we start the search at an initial step-size of 10^10 and backtrack by a factor of 1/2 when failing the Armijo condition, implemented generically as f(- ) ≤f() - 1/2∇f(), For multidimensional backtracking, we initialize the sets such that the first preconditioner is on the order of 10^10. Using the notation of <ref>, we use the scaling factor c_0 = d · 10^10 for the box variant and c_0 = √(d)· 10^10 for the ellipsoid variant. The first preconditioner tried by the box variant with backtracking factor γ = 1/2d is then 1/2· 10^10, and the first preconditioner tried by the ellipsoid variant (assuming the gradient is uniform, ∇ f(_0) ∝) is 1/√(2)· 10^10. §.§ Additional results Figures <ref>–<ref> give additional results on small linear and logistic regression problems and large logistic regression problems. 
Multidimensional backtracking performs consistently across problems and does not suffer from the extremely bad conditioning of cpusmall or california-housing (linear regression), or of australian, breast-cancer, diabetes, and heart (logistic regression).
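For completeness, a minimal sketch of the scalar backtracking line-search with forward step used to augment the baselines above (the function is ours; the γ = 1/2 backtracking factor and the 1.1 forward factor are as described in the hyperparameter settings, and the Armijo test is the generic condition given above with P = αI).

```python
import numpy as np

def gd_backtracking_forward(f, grad, x0, alpha0=1e10, backtrack=0.5,
                            forward=1.1, max_iter=500):
    """Gradient descent with an Armijo backtracking line-search and a forward
    step: every accepted step-size is multiplied by `forward` for the next
    iteration, so the search is warm-started instead of restarting at alpha0."""
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iter):
        fx, g = f(x), grad(x)
        # Backtrack until f(x - a g) <= f(x) - (a/2) ||g||^2 holds.
        while f(x - alpha * g) > fx - 0.5 * alpha * np.dot(g, g):
            alpha *= backtrack
        x = x - alpha * g
        alpha *= forward
    return x
```

Multidimensional backtracking replaces the scalar step-size by the candidate preconditioners and candidate sets described earlier, but the accept/backtrack/forward structure is the same.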
http://arxiv.org/abs/2306.01825v1
20230602143503
Field equation of thermodynamic gravity and galactic rotational curves
[ "M. Pszota", "P. Ván" ]
physics.gen-ph
[ "physics.gen-ph" ]
^1 Eötvös Loránd University, Budapest, ^2Department of Theoretical Physics, Wigner Research Centre for Physics, H-1525 Budapest, Konkoly Thege Miklós u. 29-33., Hungary; ^3Department of Energy Engineering, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, H-1111 Budapest, Műegyetem rkp. 3., Hungary ^4Montavid Thermodynamic Research Group, Budapest Modelling capabilities of Thermodynamic Gravity (TG) are compared to Dark Matter (DM) and Modified Newtonian Dynamics (MOND). The gravitational field equation of TG is solved using the mass distribution of the baryonic mass as the source of the gravitational field. In this paper, the numerical method is verified with the help of exact solutions. The method is compared to pseudo-isothermal (ISO) dark matter and MOND parametrisations of the velocity curve of galaxy NGC 3198. Field equation of thermodynamic gravity and galactic rotational curves M. Pszota^1,2,3, 4 and P. Ván^2,3,4 July 31, 2023 ====================================================================== § INTRODUCTION The most successful model in cosmology so far is the so-called ΛCDM model, where Λ stands for the cosmological constant in the Einstein-equation accounting for dark energy, and CDM stands for cold dark matter. The composition of DM is unknown. There is an ongoing theoretical, experimental and computational hunt to find proper candidates in the framework of existing or new particles of the Standard Model of Particle Physics <cit.>. The most well-known effect of dark matter is its influence on the rotation curves of galaxies. Properly modelling the observed galactic rotational velocity curves with DM requires particular dark matter distributions, <cit.>. However, in spite of the flexibility in modelling with the help of a potentially unknown distribution, there are some well-known problems and inconsistencies both in the case of individual galaxies and general, global properties. Regularities in baryon-DM coupling can be summarised in the Radial Acceleration Relation (RAR). This relation generalises the more well-known dynamical relations, like Renzo's rule or the baryonic Tully-Fischer relation. The RAR encompasses these effects for various types of galaxies and establishes the strong connection between the baryonic contribution and the rotation curve <cit.>. These regularities indicate that DM is not a simple passive background but interacts with the baryonic mass component. Related well-documented problems of the ΛCDM paradigm are the core-cusp problem, the missing satellite problem and the too-big-to-fail problem. The simulations that include dark matter predict that DM halo density profiles should rise steeply at small radii as ρ(r) ∝ r^-γ with γ≃ 0.8 - 1.4 for small galaxies. This is in contrast to observations with many low-mass DM-dominated galaxies. A related 'central density problem' also exists as simulations predict more dark matter in the central regions than they should host. These two issues (low-density cores and high-density cusps) are summarised as the core-cusp problem <cit.>. That is one of the reasons that the hydrodynamics-motivated pseudo-isothermal DM distribution is widely used for rotational velocity curve modelling. Because of the above-mentioned problematic aspects, there are several other proposals to explain the missing mass. 
The superfluid theories suggest a particular multi-component form of dark matter, <cit.>, where the explanation of rotational curves may come from a logarithmic field potential, <cit.>, or scale invariance, <cit.>. Among these suggestions, the most popular and most developed is MOND. Its success is based on the insight regarding the nonrelativistic, phenomenological observations, in particular, the galactic rotational curves. The reason became clear after the development of the first modified gravity version of MOND, called AQUAdratic Lagrangian theory (AQUAL), <cit.>. It has a vacuum solution with a force that is inversely proportional to the distance of the central mass: that can exactly counterbalance the centrifugal force and explain the typical flat velocity profiles. A different approach was proposed by Verlinde, whose emergent gravity (also known as entropic gravity) uses the idea that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. Using insights from black hole thermodynamics, quantum information theory and string theory, emergent gravity is motivated and borrows elements of various nonrelativistic constitutive theories. For the DM phenomena, emergent gravity is motivated by generalised nonrelativistic elasticity <cit.>. The AQUAL type field equation in the nonrelativistic limit can be obtained from a covariant version of the theory, <cit.>. Finally, in a theory developed by Ván and Abe, where gravitational potential is treated as a thermodynamic state variable, the standard Newtonian gravity can be derived. In an extension to the framework, where both the field energy and interaction energy is included in the total energy density up to a certain spatial scale, the inequality describing the entropy production rate in a local equilibrium state produces the following dissipative field equation for gravity: ∂φ/∂ t = l^2/τ( Δφ - K ( ∇φ)^2 - 4 π G ρ), where φ is the gravitational potential field, l is the macroscopic spatial scale of variations of the whole system, τ is the (characteristic) relaxation time, Δ is a Laplace operator, K is a characteristic coefficient of the dimensions s^2m^-2, G is the Newtonian constant of gravity and ρ is the mass density of a fluid. The stationary solution yields a field equation for a modified gravitational interaction, while the strength of deviation from the classical case is characterised by the K constant, which arises from the linear coupling between the gravitational and mechanical interactions. This leads to a dynamic crossover between Newtonian and non-Newtonian regimes, <cit.>. There, the relaxation to a static nonlinear field equation indicates the dissipative character of the theory. The nonlinear term is due to the presence of the gravitational field pressure, which is a consequence of the holographic properties of Newtonian gravity, and in general, the result of thermodynamic requirements, <cit.>. Therefore, due to the pressure of the gravitational field, there is a cross effect between the thermodynamic force of the gravitational interaction and the scalar part of the mechanical interaction and the nonlinear term of dissipative-field-eq emerges naturally. Therefore, the parameter K is not fundamental, it belongs to a particular transport property of the thermodynamic cross effect of gravitational field and mechanical momentum transport: the gravitational field transports material momentum and material momentum pulls the related gravity field along. 
K is a material constant that characterises the intensity of the aforementioned coupling in a galactic environment. Also, it has a direct physical interpretation, as v_K = K^-1/2 is a characteristic velocity of a flat rotational curve because F_K = 1/Kr is the force field in a static vacuum solution of the field equation in TG. In the TG theory, the source term of the field equation is the density distribution of the baryonic mass, the sum of the visible matter and the atomic hydrogen densities. The gravitational field and the corresponding centrifugal force can be calculated by solving the static part of dissipative-field-eq. In the following sections, we give a finite difference scheme to solve the field equations and verify the solution with the help of known exact solutions of the equation with constant, ρ_0 ≥ 0 density distributions. Finally, we demonstrate the applicability of the method, solving the equation with a realistic example, with the mass distribution of galaxy NGC 3198 as a source term. Here the sum of the visible part (referenced also as a stellar disk) and the atomic H component is used to solve the static part of the field equation above dissipative-field-eq. The numerical method is of a relaxational type, exploiting that the solution of the time-dependent field equations relaxes to the static solution. In the following, the exact solutions of the equation are treated. Then, a short description of the numerical method is given. Afterwards, we validate the method, showing that the numerical solution relaxes to the known exact solutions. Then, the velocity curve of the NGC 3198 galaxy is modelled, by visually fitting the K parameter. Finally, a short analysis and interpretation follow. § THERMODYNAMIC GRAVITY For the exact and numerical solutions, a spherically symmetric case will be solved, with the r variable denoting the distance from the centre, as before. In this case, equation dissipative-field-eq can be rewritten into the following form: ∂_t φ = l^2/τ( ∂_rrφ + 2/r∂_r φ - 4 π G ρ - K (∂_r φ )^2 ). §.§ Exact solutions In this section the analytical solution for the static part of φ(𝐫, t) from eq. <ref> in the spherically symmetric case is investigated. Let us introduce the following shorthand notation: ∂_r φ(r) =φ', ∂_rrφ(r) =φ”. If the field is time-independent, then φ(𝐫, t) can be written for φ(r) as φ” = -2/rφ' + 4 π G ρ(r) + K (φ' )^2, whereby using gravitational acceleration, which is by definition 𝐠 = - ∇φ, and here becomes g(r)= - φ', we may obtain the following form: g(r)' = -2/r g(r) - 4 π G ρ(r) - K g(r)^2. This non-linear Ricatti equation has well-known solutions when the density is zero or a constant value. §.§.§ Vacuum solution When we consider the vacuum solution, i.e. ρ(r) = 0, eq. <ref> becomes g(r)' = -2/r g(r) - K g(r)^2, which is a Bernoulli-type equation, and can be solved by introducing the following: w(r) = g(r)^-1, whereupon eq. <ref> becomes: w(r)'-2/r w = K. This is a first-order linear ordinary differential equation, and has the following solution: w(r) = - C r^2-K r. By substituting this into eq. <ref>, we can obtain the following solutions: g(r) = - 1/K r + C r^2, φ (r) = - 1/Klnr/K + Cr + φ_0. This solution has a singularity at r = 0, so it may only be used after the effect of the mass distribution is negligible (e.g. far from the bulk of the galactic mass). The C constant then may be fitted using the boundary condition at the transition to this regime. 
By assuming that the K = 0 case is the Newtonian solution with a central point mass of M, we may obtain further insight into this parameter as well: -G M/r^2 = g(r)_K=0 =- 1/C r^2, C := 1/G M_aa. This indicates that the parameter corresponds to an apparent asymptotic equivalent point-mass (M_aa) at the centre of our field. §.§.§ Solution with constant density We may solve eq. <ref> analytically if the density is some ρ_c constant. Then a general solution can be written as g(r) = 1/Kr + √(- 4 π G ρ_c K)/Ktan( √(-4 π G ρ_c K)· r + C ). For further consideration, let us examine the cases where K > 0 and K < 0. In the K > 0 case, we can rewrite eq. <ref> as g_+(r) = 1/Kr + √( 4 π G ρ_c/ K)· i tan(i √(4 π G ρ_c K)· r + C ), iC = C, g_+(r) = 1/Kr - √( 4 π G ρ_c/ K)·tanh( √(4 π G ρ_c K)· r + C), and in the K < 0 case, as g_-(r) = -1/|K|r - √( 4 π G ρ_c/ |K|)·tan( √(4 π G ρ_c |K|)· r + C ). Since for this constant density mass model we desire to eliminate the singularity at r= 0, we may do so by appropriately choosing C or C. By choosing C = i π/2, the tanh(x) function becomes (x), and will have the following series representation in the equation: g_+(r) = 1/Kr - √( 4 π G ρ_c/ K)·( √(4 π G ρ_c K)· r ), g_+(r) = 1/Kr - √( 4 π G ρ_c/ K)·( 1/√(4 π G ρ_c K)· r + √(4 π G ρ_c K)· r/3 - (√(4 π G ρ_c K))^3 · r^3/45 + 𝒪(r^5 ) ), g_+(r) = -1/3· 4 π G ρ_c · r + K/45(4 π G ρ_c )^2 · r^3 + 𝒪(r^5 ), and thus eliminates the singularity. We can note that the first term corresponds to the usual result in the case of Newtonian gravity, while the higher-order terms, which also contain the K parameter, serve as corrections. For g_-(r), we can choose C = π/2 and obtain similarly: g_-(r) = -1/|K|r + √( 4 π G ρ_c/ |K|)·( √(4 π G ρ_c |K|)· r ), g_-(r) = -1/|K|r + √( 4 π G ρ_c/ |K|)·(1/√(4 π G ρ_c |K|)· r - √(4 π G ρ_c |K|)· r/3 - (√(4 π G ρ_c |K|))^3 · r^3/45 + 𝒪(r^5) ), g_-(r) = -1/3· 4 π G ρ_c · r - |K|/45(4 π G ρ_c )^2 · r^3 + 𝒪(r^5 ), which is the same in the first two orders as in the previous case, the difference only appears in terms containing higher orders of K, as the two series diverge. §.§ Numerical solution This chapter details the numerical method for the solution of eq. <ref>. Here, generally, ρ(r, t) can be a function of time. Still, the characteristic speed of matter will be considered negligible relative to the relaxation of the potential. This is justified by experimental boundaries on the possible speed of change of the gravitational field, summarised in the work of Diósi <cit.>, and the characteristic timescale is a millisecond. This insight also serves as a justification to numerically treat the gravitational potential as relaxing to the stationary solution in eq. <ref>. §.§.§ Dimensionless formalism In order to solve the equation, a dimensionless form is created, by introducing the following notation: ρ = ρ_d ρ̃, l^2/τ = a, r = r_d r̃, φ = φ_d φ̃, K = 1/φ_dK̃, t = t_d t̃→ t_d = r_d^2 τ/l^2 = r_d^2/a, where the quantities with the index d denote a value with appropriate dimensions and the values with a tilde are dimensionless. Substituting these into eq. <ref>, we obtain the following: ∂( φ_d φ̃)/∂( t_d t̃) = a ( ∂^2 (φ_d φ̃)/∂(r_d r̃)^2 + 2/r_d r̃∂( φ_d φ̃) /∂(r_d r̃) - 4 π G ρ_d ρ̃ - 1/φ_dK̃( ∂( φ_d φ̃) /∂(r_d r̃))^2 ), ∂φ̃/∂t̃ = ∂^2 φ̃/∂r̃^2 + 2/r̃∂φ̃/∂r̃ - 4 π G ρ_d r_d^2/φ_dρ̃ - K̃( ∂φ̃/∂r̃)^2 . Let us also choose 4 π G ρ_d r_d^2/φ_d!= 1. By leaving the tilde notation, we obtain the equation analogue to eq. <ref> in the following form: ∂_t φ = ∂_rrφ + 2/r∂_r φ -ρ -K ( ∂_r φ)^2. 
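Before turning to the discretisation, the exact solutions above are straightforward to evaluate and are what the numerical scheme is later validated against. A small NumPy sketch (illustrative only; the regular K > 0 branch is written with the hyperbolic cotangent as 1/tanh, and consistent units for G, ρ_c and r are assumed):

```python
import numpy as np

G = 6.674e-11  # Newton's constant in SI units; any consistent unit system works

def g_vacuum(r, K, C):
    """Exact static vacuum solution: g(r) = -1 / (K r + C r^2)."""
    return -1.0 / (K * r + C * r ** 2)

def g_const_density(r, K, rho_c):
    """Exact static solution for constant density rho_c and K > 0, regular at r = 0:
    g(r) = 1/(K r) - sqrt(4 pi G rho_c / K) * coth(sqrt(4 pi G rho_c K) * r)."""
    k = np.sqrt(4.0 * np.pi * G * rho_c * K)
    return 1.0 / (K * r) - np.sqrt(4.0 * np.pi * G * rho_c / K) / np.tanh(k * r)
```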
For the numerical method, r_d is chosen to be the furthest point of the velocity curve data and ρ_d as the mean of the derived density distribution, as can be seen later in eq. <ref>. §.§.§ Discretisation In order to solve eq. <ref> numerically, a discretisation method of staggered grids is employed <cit.>. This means that the derivative of the field is introduced as q = ∂_r φ, which allows us to rewrite eq. <ref> as a system of two first-order differential equations: ∂_t φ = ∂_r q + 2/r q -ρ -K q^2, q = ∂_r φ. The aim of this method is the numerical prescription of Neumann-type boundary conditions and the better numerical handling of the nonlinear term with the coefficient K. During the numerical solution, in a given timestep, the φ field is evaluated first, using the values of φ and q from the previous timestep, and then the derivate field q is evaluated, using the obtained new field value. In the staggered grids method, the two variables are shifted by Δ x /2 for better handling. The schematic representation of this can be seen in Fig. <ref>. For the terms containing q explicitly, the average of the q field at two spatially neighbouring points can be used. By denoting the spatial indices with j and the temporal indices with t, the two equations can be simply discretised in the following way: φ_j, t+1 = φ_j, t + Δ t (q_j+1, t-q_j, t/Δ x + 2/(1/2 + j ) ·Δ xq_j, t +q_j+1, t/2 - ρ_j - K ( q_j, t +q_j+1, t/2)^2), q_j, t+1 = φ_j, t+1 - φ_j-1, t+1/Δ x. In order to ensure the stability of the numerical method, a sufficiently small Δ t/Δ x^2 ratio was chosen. Unless otherwise noted, for the numerical calculations N = 200 was used. The initial conditions for the arrays φ and q were chosen to be φ(t=0) = 0, q(t= 0) = 0, except in the boundaries, while the Neumann-type boundary conditions were q(r=0) = 0, q(r=r_d) = v_observed(r_d)^2/φ_d, during the galaxy velocity curve calculation. For the reliability testing, the boundaries were set to the corresponding values of the theoretical curves. §.§.§ Convergence and reliability In order for the numerical solution to yield a good enough approximation of the static solution, the number of the timesteps needs to be sufficiently high. To ensure this, the number of temporal iterations T is set to fulfil the following: T ·Δ t = 1 → T = 1/Δ t. The relaxation process is shown for the relaxation of the potential and the derivative in Fig. <ref>. As can be seen, the method converges fast, and this choice provides enough timesteps for a good convergence to the theoretical solution. To test the reliability of the numerical method, its results are compared to the theoretical results from eqs. <ref>, <ref> and <ref>. Note that due to the nonlinear term, the numerical method and the theoretical solutions can only be compared with an opposite sign of K. The results are shown in Fig. <ref>. We can see that the numerical solution reproduces the exact theoretical solution for constant density, but only reproduces the theoretical solution closer to the boundary for vacuum in the K ≪ C regime, that is, when the effect of the apparent mass is much greater than the effect of the nonlinear term. By choosing the appropriate scale, we can see the crossover as in <cit.> on Fig. <ref>. Furthermore, the joint solution of a nonzero density core surrounded with vacuum is tested, where ρ(r) = ρ_c in the interval r ∈ [0,1/2 r_d ), and ρ(r) = 0 in the interval r ∈ [1/2 r_d, r_d]. 
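A minimal NumPy sketch of this explicit staggered-grid relaxation in dimensionless form (the array layout, the stability ratio Δt/Δx² and the function name are illustrative choices); the reliability tests of this subsection exercise exactly this update:

```python
import numpy as np

def relax_tg_field(rho, K, q_outer, N=200, dt_over_dx2=0.2):
    """Explicit staggered-grid relaxation of the dimensionless field equation
    d_t phi = d_rr phi + (2/r) d_r phi - rho - K (d_r phi)^2 on r in [0, 1].
    `rho` holds the dimensionless density at the N cell centres and `q_outer`
    is the Neumann boundary value of q = d_r phi at r = 1."""
    dx = 1.0 / N
    dt = dt_over_dx2 * dx ** 2
    T = int(round(1.0 / dt))                 # relax until t = 1 (T * dt = 1)
    phi = np.zeros(N)                        # phi_j at r = (j + 1/2) dx
    q = np.zeros(N + 1)                      # q_j at r = j dx (staggered grid)
    q[0], q[-1] = 0.0, q_outer               # Neumann boundary conditions
    r_cell = (np.arange(N) + 0.5) * dx
    for _ in range(T):
        q_mid = 0.5 * (q[:-1] + q[1:])       # q averaged onto the phi grid
        phi += dt * ((q[1:] - q[:-1]) / dx + 2.0 / r_cell * q_mid
                     - rho - K * q_mid ** 2)
        q[1:-1] = (phi[1:] - phi[:-1]) / dx  # derivative field from the new phi
    return phi, q
```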
To determine the C parameter in the vacuum solution, the continuity of the derivative was prescribed: g_±(1/2 r_d ) = g_vacuum(1/2 r_d), C = -4/g_±(1/2 r_d ) r_d^2 - 2K/r_d. The boundary conditions are the corresponding constant density and vacuum derivatives. The numerical method is shown to reproduce the theoretical joint case in of dimensionless values ρ_c = 1, K = -10 and K = 10 in Fig. <ref>. Therefore we can trust that the numerical solution for the thermodynamically modified gravitational field will be reliable for realistic mass distributions as well, as long as they have similar characteristic parameter relations. Larger K values relative to ρ_c result in negative C as per eq. <ref> and are unlikely to be physically meaningful as per eq. <ref>. § ANALYSES OF ROTATIONAL VELOCITY CURVES §.§ Methodology To calculate velocity curves, the derivative field from the numerical solution is used to calculate the inferred rotational velocity: v_j = (φ_d/r_d r q_j,T)^1/2. The velocities corresponding to the surface densities for atomic hydrogen (HI) and the stellar disk (SD) were converted into an equivalent pseudo-spherical dataset to calculate density data. The original velocities are defined as the velocities each component would induce in the plane of the galaxy, and for the comparison a spherical case is considered. The work by de Blok et al. <cit.> uses the convention that the negative sign of the HI data means that it has a negative contribution when calculating the total v^2 sum, that is, it has an outward gravitational effect due to the lack of spherical symmetry and the specifics of its distribution. The observed velocity curve data is based on the 21-cm HI observations, the SD data is calculated from luminosity and the derived HI gas rotation curve is from the hydrogen column density. To calculate the ρ_j for the numerical method from the interpolated function of the rotation velocities v_rot(r), the following procedure was used: ρ_j = [∂ (v_rot, SD(r)^2 r )/∂ r]_j ·1/4 π G r_j^2 + [ ∂ (v_rot, HI(r)^2 r )/∂ r]_j ·1/4 π G r_j^2, and ρ_d was selected to be the mean of the array ρ_j: ρ_d = mean(ρ_j), ρ̃_j = ρ_j/mean(ρ_j). For comparison, the pseudo-isothermal (ISO) DM halo model is used, as in the work of de Blok et al. <cit.> and Randriamampandry and Carignan <cit.>. This model has the density profile ρ_ISO (r) = ρ_0 [ 1+ ( r/R_C)^2]^-1, where ρ_0 is the central density and R_C is the core radius of the halo. The corresponding dark matter rotation curve is given by v(r) = √(4 π G ρ_0 R_C^2 [1-R_C/rarctan( r/R_C) ]). For the galactic calculation, the ρ_0 and R_C parameters are taken from Table 3 of <cit.>, which contains mass models with fixed mass-luminosity ratios (Y^3.6_*, denoting the 3.6 μm band) and Diet-Salpeter stellar initial mass function. To obtain the velocity curve from the ISO DM model, the contributions of the SD and HI are added to this. Regarding the MOND model, the method and data of Randriamampandry and Carignan are used, <cit.>, which can be interpreted as the vacuum AQUAL solution substituted directly into the velocity calculation. The simple interpolating function of MOND is applied: μ (x)= x/1+x, and the fitted velocity curve is given by v^2(r) = √(v_SD^2 + v_HI^2)·√(a_0 · r +v_SD^2 + v_HI^2), with v_SD describing the contribution of the stellar disk (and bulge, if given separately) and v_HI describing the contribution of the neutral hydrogen. The acceleration parameter, a_0, is fitted to the observational data using eq. <ref>. 
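For reference, the two comparison models of this section can be sketched directly from the expressions above (NumPy; function names are ours, and a consistent unit system for G, r, ρ_0, R_C and the velocities is assumed):

```python
import numpy as np

G = 6.674e-11  # Newton's constant; use units consistent with r, rho_0, R_C, v

def v_iso_halo(r, rho_0, R_C):
    """Pseudo-isothermal (ISO) dark-halo rotation curve:
    v(r) = sqrt(4 pi G rho_0 R_C^2 [1 - (R_C / r) arctan(r / R_C)])."""
    return np.sqrt(4.0 * np.pi * G * rho_0 * R_C ** 2
                   * (1.0 - (R_C / r) * np.arctan(r / R_C)))

def v_iso_total(r, rho_0, R_C, v_SD, v_HI):
    """ISO mass model: halo plus baryonic contributions added in quadrature.
    Negative v_HI values follow the de Blok et al. convention and contribute
    with a negative sign to the v^2 sum."""
    v2 = (v_iso_halo(r, rho_0, R_C) ** 2 + v_SD ** 2
          + np.sign(v_HI) * v_HI ** 2)
    return np.sqrt(v2)

def v_mond_simple(r, a_0, v_SD, v_HI):
    """MOND curve with the simple interpolating function mu(x) = x / (1 + x),
    written as above: v^2(r) = sqrt(v_b^2) * sqrt(a_0 r + v_b^2)."""
    vb2 = v_SD ** 2 + v_HI ** 2
    return np.sqrt(np.sqrt(vb2) * np.sqrt(a_0 * r + vb2))
```

Fitting a_0 (or ρ_0 and R_C) then reduces to a low-dimensional least-squares problem against the observed rotation curve.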
§.§ Galactic rotational velocity curve of NGC 3198 In this section, the above-mentioned method is applied to reproduce the velocity curve of NGC 3198. §.§ Discussion NGC 3198 is a barred spiral galaxy known for its prototypical flat rotation curve, <cit.>. It is well-studied in the context of MOND, as the standard value of a_0 cannot predict the rotational velocity curve unless the distance of the galaxy is adjusted. Randriamampandry and Carignan found an acceptable MOND fit with a_0 about half the standard value. As seen in Fig. <ref>, thermodynamic gravity can very similarly match the rotation curves of the ISO DM model and also of MOND, with a_0 being allowed to vary. §.§ Mass estimation The datasets in the THINGS sample provide masses for the fitted mass models and using the fitted ISO DM parameters, the mass of the ISO DM halo can be calculated by integrating it up to the edge of the galaxy. For the purposes of this estimation, the edge of the galaxy will be considered to be at the last observational velocity curve datapoint (r_d). Using the insights from eqs. <ref> and <ref>, by considering the fitted K parameters for the galaxy, it is possible to estimate the apparent asymptotic mass of the galaxy by setting the acceleration inferred from the edge of the galaxy to equal to the vacuum solution: v_observed(r_d)^2/r_d = 1/K r_d + C r_d^2, C = 1/r_d( 1/v_observed(r_d)^2 -K ), M_aa = v_observed(r_d)^2/1-K v_observed(r_d)^2r_d/G. In the case of K = 0, eq. <ref> should approximately yield the gravitating mass required for the observed velocity for the galaxy and is expected from the total mass inside r_d with DM included. The values including dark matter or M_aa are rounded to two digits due to the large uncertainties included. The results are shown in Table <ref>: We can see, that the total baryonic and dark matter mass of NGC 3198, M_BM+ISO, is about the same as the apparent total mass of TG (expected at large distances), but the apparent asymptotic mass with the galactic K value, M_aa, while differing from the ISO DM mass, up to r_d results in the same gravitational effect. §.§ Conclusions In this work, the analytical solutions of the modified Poisson-like equation of TG were presented for the gravitational field in the cases of vacuum and constant matter density. Using the staggered grids-type discretisation, a numerical relaxation method was developed to solve the dissipative field equation of the theory. The static solution was obtained in the limit of the relaxation. The rotational velocity curve was calculated for galaxy NGC 3198 from the THINGS dataset, with only the baryonic matter contributing to the matter density source term <cit.>. The purpose of this analysis was to demonstrate the capability of TG to provide similar or better fits as MOND or the usual dark matter models, in this case, pseudo-isothermal dark matter (ISO). It is remarkable, that NGC 3198 was previously investigated for the possible fits by MOND <cit.>, and the ideally universal constant a_0 proved inadequate, as the best fits were only obtained when this parameter was allowed to vary, or the established distance of the galaxy was revised. The K parameter in the theory investigated by this work however, arises from the coupling between the gravitational and mechanical thermodynamic forces and fluxes, thus naturally it can be expected to vary across the galaxies, depending on their smaller-scale dynamics and characteristics. 
Based on the developed numerical method, a natural next step is to extend the analysis to other galaxies of the THINGS survey, with a detailed comparison of the alternative models. It is also natural to determine the effects of the galactic results on the scale of galaxy clusters. It may be the case that on much larger scales, the theory has effects similar to Newtonian or general relativistic gravity, except the apparent mass is larger due to the thermodynamic coupling, with a crossover between the regimes <cit.>. For simplicity and analytical comparisons, the numerical method worked with a spherically symmetric model of mass distribution, although the baryonic mass of the non-elliptical galaxies usually has a more disk-shaped distribution, or more precisely, for HI a thin disk and for the stellar matter, a sech^2 distribution is chosen in de Blok <cit.>. In future work for more faithful results, the solutions in a cylindrical coordinate system with appropriate models can be considered, along with revisiting and possibly incorporating more precise considerations during the thermodynamic derivation of the dissipative field equation, and more precise mass distribution models for the spherical DM halo, <cit.> § ACKNOWLEDGEMENTS We would like to thank Professor W. J. G. de Blok for providing us with the THINGS data <cit.>. We would also like to thank Professor S. Abe for providing insightful remarks regarding the theory. The work was supported by the grants National Research, Development and Innovation Office – FK134277. The research reported in this paper is part of project no. BME-NVA-02, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021 funding scheme. unsrt 10 Bertone_2018 Gianfranco Bertone and Tim M. P. Tait. A new era in the search for dark matter. Nature, 562(7725):51–56, oct 2018. x17 A. J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, A. Krasznahorkay, B. M. Nyakó, I. Rajta, J. Timár, I. Vajda, and N. J. Sas. New anomaly observed in ^4He supports the existence of the hypothetical X17 particle. Phys. Rev. C, 104:044003, Oct 2021. neutrinomass Arghyajit Datta, Rishav Roshan, and Arunansu Sil. Imprint of the Seesaw Mechanism on Feebly Interacting Dark Matter and the Baryon Asymmetry. Phys. Rev. Lett., 127:231801, Dec 2021. Famaey2012 B. Famaey and S. S. McGaugh. Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions. Living Reviews in Relativity, 15(1), 2012. Lelli_2017 Federico Lelli, Stacy S. McGaugh, James M. Schombert, and Marcel S. Pawlowski. One Law to Rule Them All: The Radial Acceleration Relation of Galaxies. The Astrophysical Journal, 836(2):152, feb 2017. Li2018 P. Li, F. Lelli, S. McGaugh, and J. Schombert. Fitting the radial acceleration relation to individual SPARC galaxies. Astronomy & Astrophysics, 615:A3, 2018. lambdacdm-challenges James S. Bullock and Michael Boylan-Kolchin. Small-Scale Challenges to the ΛCDM Paradigm. Annual Review of Astronomy and Astrophysics, 55(1):343–387, 2017. BerKho15a L. Berezhiani and J. Khoury. Theory of dark matter superfluidity. Physical Review D, 92(10):103510, 2015. covariant-emergent-grav Sabine Hossenfelder. Covariant version of Verlinde's emergent gravity. Phys. Rev. D, 95:124018, Jun 2017. Zlo22a K.G. Zloshchastiev. Galaxy rotation curves in superfluid vacuum theory. Pramana, 97(1):2, 2022. Sco22a T. C. Scott. 
From Modified Newtonian Dynamics to superfluid vacuum theory. Entropy, 25(1):12, 2022. MaeGue20a1 A. Maeder and V. G. Gueorguiev. The scale-invariant vacuum (SIV) theory: A possible origin of dark matter and dark energy. Universe, 6(3):46, 2020. aqual J. Bekenstein and M. Milgrom. Does the missing mass problem signal the breakdown of Newtonian gravity? Astrophysical Journal, 286:7–14, November 1984. Ver17a E. P. Verlinde. Emergent Gravity and the Dark Universe. SciPost Phys., 2:016, 2017. Hos17a S. Hossenfelder. Covariant version of verlinde’s emergent gravity. Physical Review D, 95(12):124018, 2017. vanabe P. Ván and S. Abe. Emergence of extended Newtonian gravity from thermodynamics. Physica A: Statistical Mechanics and its Applications, 588:126505, 2022. AbeVan22a S. Abe and P. Ván. Crossover in Extended Newtonian Gravity Emerging from Thermodynamics. Symmetry, 14(5), 2022. Van23a P. Ván. Holographic fluids: a thermodynamic road to quantum physics. Physics of Fluids, 35(5):057105, 2023. arXiv:2301.07177v2. DIOSI20131782 Lajos Diósi. Note on possible emergence time of Newtonian gravity. Physics Letters A, 377(31):1782 – 1783, 2013. nummodszerek Kovács R. és Józsa V. Bevezetés a numerikus módszerekbe. Akadémiai Kiadó, 2019. RieEta18a Ágnes Rieth, Róbert Kovács, and Tamás Fülöp. Implicit numerical schemes for generalized heat conduction equations. International Journal of Heat and Mass Transfer, 126:1177–1182, 2018. PozsEta20a Áron Pozsár, Mátyás Szücs, Róbert Kovács, and Tamás Fülöp. Four spacetime dimensional simulation of rheological waves in solids and the merits of thermodynamics. Entropy, 22(12):1376, 2020. de_Blok_2008 W. J. G. de Blok, F. Walter, E. Brinks, C. Trachternach, S-H. Oh, and R. C. Kennicutt. High-Resolution Rotation Curves and Galaxy Mass Models from THINGS. The Astronomical Journal, 136(6):2648–2719, 2008. MONDvsDMhaloes T. H. Randriamampandry and C. Carignan. Galaxy mass models: MOND versus dark matter haloes. Monthly Notices of the Royal Astronomical Society, 439(2):2132–2145, 2014. vanAlbada1985 T. S. van Albada, J. N. Bahcall, K. Begeman, and R. Sancisi. Distribution of dark matter in the spiral galaxy NGC 3198. The Astrophysical Journal, 295:305, August 1985. GIORDANO201962 Domenico Giordano, Pierluigi Amodio, Felice Iavernaro, Arcangelo Labianca, Monica Lazzo, Francesca Mazzia, and Lorenzo Pisani. Fluid statics of a self-gravitating perfect-gas isothermal sphere. European Journal of Mechanics - B/Fluids, 78:62–87, 2019. chavanis2021selfgravitating Pierre-Henri Chavanis. The self-gravitating Fermi gas in Newtonian gravity and general relativity, 2021.
http://arxiv.org/abs/2306.12272v1
20230621134935
From structure mining to unsupervised exploration of atomic octahedral networks
[ "R. Patrick Xian", "Ryan J. Morelock", "Ido Hadar", "Charles B. Musgrave", "Christopher Sutton" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cs.CE", "cs.LG", "math.CO" ]
1.2 30pt30ptFrom structure mining to unsupervised exploration of atomic octahedral networks R. Patrick Xian^1,2,∗ Ryan J. Morelock^3 Ido Hadar^4 Charles B. Musgrave^3 Christopher Sutton^2,∗ 1.2 ^1Department of Engineering, University of Cambridge, Cambridge, United Kingdom. ^2Department of Chemistry and Biochemistry, University of South Carolina, Columbia, SC, USA. ^3Department of Chemical and Biological Engineering, University of Colorado Boulder, Boulder, CO, USA. ^4The Institute of Chemistry and the Center for Nanoscience and Nanotechnology, The Hebrew University of Jerusalem, Jerusalem, Israel. ^*Corresponding authors: R.P.X. (xrpatrick AT gmail.com), C.S. (cs113 AT mailbox.sc.edu) 20pt 1.3 Networks of atom-centered coordination octahedra commonly occur in inorganic and hybrid solid-state materials. Characterizing their spatial arrangements and characteristics is crucial for relating structures to properties for many materials families. The traditional method using case-by-case inspection becomes prohibitive for discovering trends and similarities in large datasets. Here, we operationalize chemical intuition to automate the geometric parsing, quantification, and classification of coordination octahedral networks. We find axis-resolved tilting trends in ABO_3 perovskite polymorphs, which assist in detecting oxidation state changes. Moreover, we develop a scale-invariant encoding scheme to represent these networks, which, combined with human-assisted unsupervised machine learning, allows us to taxonomize the inorganic framework polytypes in hybrid iodoplumbates (A_xPb_yI_z). Consequently, we uncover a violation of Pauling's third rule and the design principles underpinning their topological diversity. Our results offer a glimpse into the vast design space of atomic octahedral networks and inform high-throughput, targeted screening of specific structure types. 1.5 § INTRODUCTION The coordination polyhedron is a recurring theme in chemistry from molecules to solids <cit.>. Among inorganic compounds, coordination octahedra (COs) are one of the most common local atomic environments of metal-centered ions <cit.>. Networks of COs, or simply octahedral networks (ONs), display elaborate spatial patterns based on their orientation (i.e. tilting), distortion, and connectivity, which underlie the stability and functionalities of materials. The most prominent representatives of materials containing ONs are the perovskite structure family <cit.>. Besides the corner-sharing ONs in classic perovskites, their variants are prevalent in binary and multinary compounds <cit.>, metal-rich clusters <cit.> (with anion-centered COs), numerous hydroxide-based crystalline minerals <cit.> and functional materials <cit.>, etc. Besides their ubiquitous presence, these networks are a convenient abstraction of the underlying atomistic identity and are enumerable algebraically <cit.>, providing an intuitive surrogate to first-principles approaches for explaining the structure and dynamics of materials <cit.>. Therefore, it is not an understatement that cataloguing and documenting the architecture and distribution of ONs greatly facilitates our understanding of matter <cit.>. Designing materials that contain ONs may be viewed through the lens of spatial patterning or tiling. Although the ON design space extends in multiple directions <cit.>, in this work, we computationally examined two dimensions: chemical diversity and network connectivity, as illustrated in Fig. 
<ref>a, taking the perovskite family of materials as examples. Along the chemical diversity axis, alloying multiple ions can create new properties. Single perovskites have the formula unit ABX_3, where A, B are cations and X is an anion. The oxide subclass of single perovskites (ABO_3), which are well-studied, has a total member count approaching 5×10^3. This number quickly explodes to 10^7 for double oxide perovskites (AA^'BB^'O_6) <cit.> and much more for higher-order multinary perovskites. The network connectivity of the inorganic framework may be modified by pairing with selected organic cations in these structures, which in turn alter the structural stability and electronic properties without tuning the chemical diversity of the ON. This results in the distinctive connectivity patterns that characterize the hybrid (i.e. containing both organic and inorganic parts) perovskites and their extended families of materials, called perovskitoid or hybrid metalates <cit.>. Existing methods for the quantification of ONs in chemical structures date back at least half a century ago: The Glazer tilt system proposed in the 1970s <cit.> classifies the spatial orientation of corner-sharing COs in perovskites and is widely adopted by the community. The periodic nets introduced by Wells from the 1950s to the 1970s <cit.> and the structural hierarchy hypothesis introduced by Hawthorne shortly afterwards <cit.> provide an intuitive simplification of the ONs for understanding their similarity and complexity. In recent decades, advances in chemical synthesis have greatly expanded the diversity of compounds containing ONs (see Fig. <ref>b), which exhibit seemingly unlimited possible arrangements. Although the classification nomenclature for these modular structures is still under active debate, the materials families containing ONs are increasingly viewed as a continuum with shared chemical provenance and connectivity patterns <cit.>. Moreover, modern atomic-resolution electron microscopy is capable of characterizing local disordering at heterogeneous interfaces <cit.>, creating opportunities to broadly sample the configurational space and cross-examine experimental results with theoretical predictions. Driven by the growing quantity of structural data and the community efforts towards developing compact and efficient representations of molecular and materials systems <cit.>, we present here a computational workflow for the structural analysis and discovery of atomic ONs to obtain summary statistics and occurrence ranking. These are enabled by geometry processing <cit.>, unsupervised machine learning (manifold learning and clustering), and human label refinement with the use of a scale-invariant coordination network encoding (CNE) scheme. The collection of these methods reveals the octahedral tilting trends in oxide perovskite polymorphs and guides the identification of inorganic framework polytypes in hybrid iodoplumbates (i.e. lead iodide salts), which led us to discover the preference of connectivity in contrast with Pauling's third rule <cit.> in these materials and the power-law distribution of polytypes. § RESULTS Coordination network parsing. Central to the computational workflow in Fig. <ref>a is the parsing of atomic structures, first into coordination environments, and then into a coordination network. 
The coordination environment here refers specifically to the first neighbors of a central atom with a specific symmetry <cit.>, while the long-range connectivity between the environments is described by the coordination network. At the intermediate range, structural motifs such as network motif—the smallest recurrent subnetwork that captures the predominant connectivity patterns of the whole <cit.>—are formed. Each level of the structural organization may be evaluated for its specific properties. For example, the COs are viewed as superatomic single units in an ON that form a specific network motif, which summarizes their connectivity pattern. The spatial arrangement of the network motifs, thanks to symmetry, produces the ionic frameworks. We refer to structures with chemically identical frameworks but distinguishable network motifs as framework polytypes. This hierarchical organization is chemically interpretable <cit.> and facilitates an efficient programmatic implementation for structural parsing. We highlight here two aspects that make the approach effective here and leave the details in Methods. (1) Generalized definition of the axial tilting angle: An important CO-level property is the tilting angle. Glazer's original work on octahedral tilting <cit.> considers only small octahedral distortions in orthogonal lattices with corner-sharing motif. Here, we retain the Cartesian basis as the reference axes and use their angles formed with the corresponding octahedral diagonals as the tilting angles to accommodate other structure types involving COs. This procedure is automated by computationally assigning a right-hand spiral ordering of the vertices, as shown in Fig. <ref>b, contingent upon a screw axis. It also accommodates oblique lattices and more varieties of materials families where the discussion of axial tilting of COs is still meaningful <cit.>. (2) Bottom-up network construction: Because geometric complexes <cit.> formed by adjoining coordination polyhedra are not guaranteed to be convex, searching for connected geometric components makes their identification challenging. Our approach builds the coordination network data structure from the bottom up as a set of connected polyhedra, therefore the spatial relationships between the building units can be queried. We specify the motif of an ON by the properties of two graphs: (i) the mesh graph, constructed using atoms as the vertices and the unique edges of the coordination polyhedra as edges. The mesh specifies the boundary of the nearest-neighbor coordination environment and may contain non-manifold geometry at the adjoining vertices and edges <cit.>; (ii) the inter-unit graph, which uses the building units (i.e. the COs) as the vertices and the connectivity type as edges (see Methods), is a part-based representation of the atomic structure <cit.>. The combination of (i) and (ii) leads to fine-grained discernibility and bypasses (bond) distances <cit.> to enable scale invariance for distinguishing between materials structures, as will be benchmarked in the Supplementary Information. Next, we use our approach to analyze large materials structure datasets to gain insights into the design of complex (e.g. multivalent) perovskites and hybrid materials. Octahedral tilting distribution in oxide perovskite polymorphs. The discussions on octahedral tilting trends are normally based on the examination of specific subclasses of single perovskite structures that have been synthesized <cit.>. 
Proxies for the tilting angle <cit.>, such as the B-O-B angle, have often been used to estimate the octahedra orientation, and typically only small datasets of experimental data <cit.> have been examined in attempts to uncover trends between these proxies and materials properties. However, a large proportion of the abundant oxide perovskites found on earth persist in kinetically trapped metastable states, either as polymorphs with free energies above the convex hull vs. composition phase diagram or as precursors or intermediates that form during chemical synthesis <cit.>. Consequently, understanding the large design space of oxide perovskites requires evaluating many configurations, including those of metastable polymorphs. Using the workflow described in Fig. <ref>, we provide here a more complete picture enabled by the availability of a large dataset of oxide perovskite polymorphs from high-throughput computation <cit.> (see Methods), which includes a total of over 2000 structures divided into ten different Glazer tilt classes. We illustrate our results in Fig. <ref> using orthorhombic polymorphs in the a^+b^-b^- (#13) tilt classes, while additional results are provided in Supplementary Information (Section 1.1). Two distinctive trends stand out in Fig. <ref>: (1) There exists a clear dependence (the global trends) of tilting angles on the Goldschmidt tolerance factor <cit.> for almost all non-cubic polymorphs; (2) When ranked by the periodicity of the elements, we observe repeating patterns (the microtrends) of the tilting angles. The observation indicates that, in certain classes of polymorphs, the atomic structures obtained through isovalent substitution of A-site (or B-site) cations are highly predictable, which gives rise to a periodic table of oxide perovskite polymorphs. The microtrends reflect the dependence of chemical substitution on the structure (or property) in a substitution series. Fig. <ref>e illustrates that the A-site substitution series, such as lanthanide metalates (LnBO_3), has a monotonic change in all tilting angles as the ionic radius decreases (the lanthanide contraction). The microtrends are similar in orthorhombic lanthanide titanates (LnTiO_3), manganites (LnMnO_3), and gallates (LnGaO_3). The structural changes also correlate with the changes in their experimental properties as previously reported <cit.>. In B-site substitution series (see Fig. <ref>f), a similar pattern emerges, albeit across a broader range of substituted cations. These tilting microtrends within a large dataset can identify outlier materials directly based on their structural deviations. In the case of oxide perovskites, we find that the europium and ytterbium perovskites break the tilting trends (see Fig. <ref>e). Subsequent charge analysis of all cations (see Supplementary Section 1.2) in the orthorhombic LnTiO_3 and LnMnO_3 series show that europium and ytterbium ions are in the +2 oxidation state, forming divalent-tetravalent perovskites (A^2+B^4+O_3), whereas the remaining compounds in the series are trivalent perovskites (A^3+B^3+O_3). Typically, assigning oxidation states in large computational screening datasets is a complicated task, but our analysis shows that the structural trend obtained from COs is a convenient indicator for changes in the oxidation state of cations. Classification of octahedral connectivity in hybrid iodoplumbates. 
Small organic molecules have been traditionally used to template the ON <cit.>, resulting in a wide range of modular (or recombination) structures <cit.> with a multimeric ON as the inorganic framework. A multimeric ON contains more than a single CO in its motif. Besides complex oxides <cit.>, hybrid metalates <cit.> such as iodoplumbates are prominent examples. Existing terminologies used to describe the ON connectivity pattern tend to be unstructured. For example, purely corner-sharing network motifs are classified by (i) the cleaving plane index <cit.>, (ii) homologous series <cit.>, which contain stacked octahedral layers of varying thickness, and (iii) corrugated network <cit.>, which contains W-shaped octahedral sequences with mixed cis and trans neighbor configurations. These classification conventions are mostly derived from the crystallographic shear structures <cit.>. Moreover, the umbrella term perovskitoid from the halide perovskite community <cit.> was originally used to denote 1D structures with face-sharing octahedra, but now generally refers to the extended family of hybrid metalates without purely corner-sharing ONs. In general, these attempts lack specificity in capturing the variety of low-dimensional structures and are incompatible with high-throughput computational screening. Previous attempts at linking stoichiometry to dimensionality <cit.> showed limited success in small case studies <cit.>, but became less effective with the increasing complexity of the materials design space. In our view, the limitation is due to the restricted feature space and problem formulation (i.e. predicting dimensionality using stoichiometry). A taxonomy should also distinguish the topological aspect of the structural components <cit.>, which is equivalent to polymorph (or polytype) identification. Our procedure builds on these requirements for motif assignment in hybrid metalates, which aims to distinguish the inorganic framework polytypes, regardless of the organic constituent. To this end, we analyzed ∼ 970 compounds from the Cambridge Structural Database that are hybrid iodoplumbates (see Methods), A_xPb_yI_z, where A is an organic cation, and the [Pb_yI_z]^n- polynuclear anion induces polytypism. The hybrid iodoplumbates are the most studied members of the hybrid metalate materials family that also includes hybrid perovskites <cit.>. We first explored the stoichiometric and topological diversity of A_xPb_yI_z compounds using our computational workflow. In Fig. <ref>a, the stoichiometric ratio between lead (Pb) and iodine (I) in the empirical formula is on the order of 3:1, yielding an average formula of (APbI_3)_y. As the number of Pb atoms in the formula unit increases so does the scatter along the magenta dashed line, indicating stoichiometries that more frequently deviate from this 3:1 ratio due to the growing complexity of network connectivity. When representing the polynuclear anions ([Pb_yI_z]^n-, with n being the charge it bears) by 3D meshes, we can calculate the relationship between the number of COs and the Euler characteristic <cit.> of the mesh graph representing the ONs. The results in Fig. <ref>b lie within a region bounded by the non-sharing limit, where isolated octahedra (i.e. 0D hybrid perovskites <cit.>) exist. As the number of COs increase, the Euler characteristic generally decreases in value, indicating a departure from a convex shape. Details of the counting procedure are provided in Methods. 
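The element counting used for the Euler characteristic is spelled out in Methods; anticipating those formulas, it reduces to simple integer bookkeeping once the numbers of corner-, edge- and face-sharing contacts between octahedra are known. The short sketch below (with function names of our own choosing) considers only two-body connectivity.

```python
def mesh_graph_counts(n_octahedra, n_corner, n_edge, n_face):
    # Unique vertices, edges and faces of the mesh graph of an octahedral network,
    # given the numbers of corner- (1 shared atom), edge- (2) and face-sharing (3)
    # contacts between pairs of octahedra; two-body connectivity only.
    n_vertices = 6 * n_octahedra - 3 * n_face - 2 * n_edge - n_corner
    n_edges = 12 * n_octahedra - 3 * n_face - n_edge
    n_faces = 8 * n_octahedra - 2 * n_face
    return n_vertices, n_edges, n_faces

def euler_characteristic(n_octahedra, n_corner, n_edge, n_face):
    v, e, f = mesh_graph_counts(n_octahedra, n_corner, n_edge, n_face)
    return v - e + f

# Four isolated octahedra sit at the non-sharing limit, chi / N_CO = 2:
print(euler_characteristic(4, 0, 0, 0))   # -> 8
# A face-sharing dimer (a [Pb2I9]-type unit) is a single closed surface, chi = 2:
print(euler_characteristic(2, 0, 0, 1))   # -> 2
```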
The connectivity analysis reveals that a significant proportion of existing materials contain only face-sharing octahedra, while some also contain corner-sharing octahedra co-existing with face- or edge-sharing octahedra in the same structure. A detailed statistical summary of our results following polytype identification (see Fig. <ref>c and Extended Data Fig. <ref>) shows that in hybrid iodoplumbates, the nearest-neighbor connectivity pattern of COs occurs with the trend, corner-sharing PbI_6 > face-sharing PbI_6 ≫edge-sharing PbI_6. In terms of probability of connectivity (P), we can equivalently write P_PbI_6^CS > P_PbI_6^FS≫ P_PbI_6^ES. This observation violates Pauling's third rule <cit.> for structure prediction of ionic compounds, which orders corner-sharing (CS) as the most likely connectivity type between neighboring coordination environments, followed by edge-sharing (ES), and then face-sharing (FS). The violation leads to a modified preference in connectivity, which have been observed in layered oxides <cit.>, oxides under non-ambient conditions <cit.>, and oxides with large anion size and high coordination number <cit.>. However, for halides, no systematic study is yet present. From a structural prediction perspective, Pauling's third rule still provides a good estimate for the dominant connectivity type between coordination environments, which is CS in our case for a homogeneously connected coordination network (i.e. containing only a single connectivity type). For those with two distinct connectivity types, our summary shows that the polytype with coexisting CS and FS PbI_6 octahedra, the two main connectivity types in Eq. (<ref>), is the most common, followed by the other two possibilities, corner- & face-sharing PbI_6 ≫corner- & edge-sharing PbI_6 ∼face- & edge-sharing PbI_6. This may also be written more compactly in the probability of connectivity as P_PbI_6^CS, FS≫ P_PbI_6^CS, ES≈ P_PbI_6^FS, ES, and P_PbI_6^CS, FS∝ P_PbI_6^CSP_PbI_6^FS, similarly for the other two probabilities for mixed connectivity. Eq. (<ref>) may be regarded as the second-order extension of Pauling's third rule <cit.> for the specific type of coordination environment encountered here. Using the stoichiometric and topological information of the Pb_yI_z motif, we construct the CNE scheme, a vector representation <cit.> of the ON (see Methods). We then perform dimensionality reduction and clustering interactively to identify the distribution of polytypes, as shown in Fig. <ref>a. Manifold learning (see Methods) is used here for dimensionality reduction because it provides an interpretable (due to well-separated clusters) visualization <cit.> of the underlying structure of the dataset to facilitate the subsequent clustering. We validate the clusters derived from the algorithms using both external and internal metrics along with polytype labels generated by selective visualization and human refinement. The approach uncovers the underlying clusters within the dataset <cit.>, which in our case correspond to structures with a shared ON motif. Details of the procedure are provided in Methods and Supplementary Section 2. We rank the framework polytypes according to their occurrence in the dataset, resulting in a power-law distribution (see Extended Data Fig. <ref>a), and the top twenty entries (out of ∼ 70) are listed in Fig. <ref>b. Some of the polytypes are visualized using a selected example in Fig. <ref>c. We find that a number of clusters already show high purity in Fig. 
<ref>a, where homogeneous clusters tend to have a more globular shape than heterogeneous ones. The shorthand notations in <ref>a correspond to the indices in Fig. <ref>c and its potential variants. For example, c_4(1) indicates CS 1-layer polytype, or single-layer 2D hybrid perovskite, c_11(2) indicates having the FS chain polytype with two independent chains per unit cell, 3Dp represents 3D CS hybrid perovskites. Clusters with two major polytypes are indicated with a slash, such as c_2/c_5. A complete description of the naming rules for the polytypes is provided in the Supplementary Information and more examples are given in Supplementary Table 5. Our observation is that the major clusters represent common network motifs: CS n-layers (of different layer thickness n), FS chains (with different numbers of polymeric chains per unit cell), 0D and 3D hybrid perovskite structures. Some of the lesser-known polytypes are also aggregated into high-purity clusters, including those with mixed connectivity such as ES chain of FS 3-mers (Fig. <ref>c_10), CS layer of FS 3-mers (Fig. <ref>c_9), and CS chain of ES 2-mer of FS 2-mers (Fig. <ref>c_7), which has a 1D arrangement of COs similar to the columnar substructure of the V_2O_3 corundum (see Extended Data Fig. <ref>). Although the mixed connectivity in this polytype due to the [Pb_2I_6]^2- anion is in seeming violation of Pauling's third rule, it's surprisingly more common than many other framework polytypes found in the A_xPb_yI_z dataset. A caveat to mention is that the clusters in Fig. <ref>c with the same label generally differ in the packing of the organic cation and its charge (n/x for [A_x]^n+), which is not considered explicitly in polytype identification but emerges due to the correlation between the characteristics of the organic cation and the inorganic framework it supports. Chemically speaking, the organic packing is a part of the structural motifs of these materials but are not the network motif defined by the PbI_6 COs. Moreover, we compare the effectiveness of the CNE with an established atom-centered representation, the smooth overlap of atomic positions (SOAP) <cit.>, a kernel-based symmetry-invariant representation that prioritizes on distance information, which has been used to visualize materials design spaces <cit.>. The results from our controlled numerical experiments (see Supplementary Fig. <ref> and Supplementary Table <ref>) show that, because of the scale dependence of the kernel, clusters from the SOAP representation are generally less pure than those obtained from the CNE scheme, and exhibit more elongated shapes, which are less beneficial for the subsequent polytype identification. The polytype occurrence ranking in in Fig. <ref>b reveals more information supplementary to the connectivity trends discussed in Eqs. (<ref>)-(<ref>). By cross-examination, we find a scarcity of structures with 2D layers entirely composed of edge-sharing PbI_6 octahedra (only 3 out of ∼ 970). In contrast, the 2D edge-sharing motif is the majority polytype of PbI_2 <cit.>, a parent compound of A_xPb_yI_z, as well as the structure of the “analogous perovskite” compound BiI_3 (in the sense that ReO_3 is structurally analogous to ABO_3) <cit.> that has been reported <cit.>. This is likely due to the balance of charges since infinite edge-sharing PbI_6 octahedral layers are charge neutral and would require vacancies to bear charges <cit.> and incorporate A-site cations. 
In fact, the more common types of ONs in A_xPb_yI_z with edge-sharing connectivity are overwhelmingly 1D, featuring a short secondary unit, such as an edge-sharing 2-mer or 3-mer, stacked up through corner-sharing or edge-sharing (e.g. Fig. <ref>c_6). Impressively, the 1D multimeric polytypes with a single connectivity type (i.e. only containing CS, ES, or FS octahedra), traditionally found in materials with distinct oxidation states <cit.>, are all present within this single family of materials via pairing with different organic cations, indicating the tremendous framework variability of hybrid materials. These observations suggest that polyatomic organic cations can stabilize more diverse ONs than those typically found in the (parent) inorganic materials, where only monatomic cations exist. In addition to rationalizing the observed trends, looking across polytypes with similar building units help identify design principles relating to (i) dimensionality tuning by changing connectivity, and (ii) building unit augmentation. For principle (i), we find that the dimensionality may be tuned by the octahedral connectivity within the inorganic framework. This enables the use of ES or FS multimers to construct, in principle, ONs of any dimension (0D-3D), expanding the perovskite motif with single CS octahedra. Although dimensional tuning has been discussed in the context of purely inorganic materials <cit.>, for hybrid metalates such as iodoplumbates, the vast space of admissible organic cations suggests the possibility of fine-tuning the material properties by changing the inorganic framework. For principle (ii), we find that accommodation of the unit network to different organic cations may also be achieved by augmenting the secondary unit, such as from 1-mers or 2-mers to 6-mers, which are all found in the hybrid iodoplumbate dataset. These principles are further discussed and illustrated with examples in Supplementary Figs. <ref>-<ref>. § DISCUSSION From a computational perspective, our approach marries chemical intuition with geometry processing and the landscape of machine learning methods using object-centric representations <cit.>. The objects here refer to the atom-centered COs present in the source data, rather than the atoms reported from simulation or experiment. Because of this object focus, the outcome is highly interpretable. The present work motivates further developments in preprocessing methods that are compatible with specific, object-valued data. For example, existing approaches often require a symmetry-checking stage <cit.> that becomes computationally expensive for large datasets of complex structures. This necessitates simpler procedures followed by data cleaning using chemical intuition, which produces noisier data (e.g. with missed or inaccurately identified coordination environments). As such, the scalability of our approach is heavily dependent upon the efficiency and accuracy of coordination environment detection techniques. From a materials science perspective, our oxide and iodoplumbate results deserve further contemplation with regard to their published energetics and related material properties. Dimensional confinement effects <cit.>, which stem from changes in octahedral connectivity, significantly modify the electronic bandgap and optoelectronic properties in hybrid iodoplumbates and related ternary halides <cit.>. 
In halide compositions, for instance, purely ES connectivity increases the bandgap by ∼ 0.5 eV over purely CS connectivity, and FS increases the bandgap by another ∼ 0.5 eV <cit.>. This is comparable to the bandgap broadening observed from compositional tuning, i.e. by switching from iodide to bromide anions (∼ 0.5 eV) and bromide to chloride anions (∼ 0.6 eV) <cit.>. Both effects are related to the reduced overlap of the atomic orbitals due to the lower dimensionality or the smaller anions. Moreover, the electronic dimensionality affects the photoluminescence properties of iodoplumbates. ES and FS motifs can lead to charge localization which can enhance the photoluminescence quantum yield up to 100% <cit.>. On the contrary, CS and mixed-connectivity motifs can be beneficial for charge transport and modified emissions (e.g. broad emission <cit.>). For multinary hybrid metalates, the dimensionality and connectivity of their octahedral networks can also be altered by their synthesis conditions <cit.>. The diversity of ON motif catalogued in the present work motivates a thorough investigation into the relationship between connectivity patterns and materials properties <cit.> with band structures computed with density functional theory (DFT). We envision a computational study whereby the functionalities could be tuned for a fixed ON, with pairing moieties that are optimized in an approach that resembles the docking mechanism in protein chemistry. Insights from such a study would buttress targeted screenings for specific network connectivity patterns, and likely reduce the resource cost and attrition rate from random search-based screening methods <cit.>. Overall, our results reaffirm the necessity to design task-dependent representations for machine learning-assisted exploration of the materials design space <cit.> and the need to explicitly incorporate connectivity information in existing distance-based representations <cit.>. The representation from CNE may be merged with those for organic molecules to build interpretable predictors and generative models of hybrid functional material properties. The modified Pauling's third rule in the materials examined here and by others <cit.> motivates its reformulation using a fully probabilistic approach that reflects chemical specificity and serves growing data. Our approach may also be adapted to extract insights from other analogous, yet less investigated materials families featuring octahedral networks, such as inorganic and hybrid metalates with nonmetal or metalloid ligands, high-entropy materials <cit.>, coordination polymers, and molecular perovskites <cit.>, which have even more complex design spaces and highly tunable structure-property landscapes. §.§ Methods Materials datasets and their curation. The oxide perovskite (ABO_3) polymorph dataset was obtained from high-throughput computational screening of compositionally valid compounds. Although details of the data generation have been reported elsewhere <cit.>, we provide here a brief summary of the computational method: The structures of the oxide perovskites were generated using the software SPuDS <cit.>, which accounts for electrostatic structural distortions using bond-valence theory. 
The SPuDS-generated structures were further optimized using density functional theory within the Vienna Ab-initio Simulation Program (VASP 5.4.1) and under periodic boundary conditions utilizing projector augmented wave (PAW) pseudopotentials and the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) exchange-correlation functional <cit.>. Electronic self-consistent field energies were converged to within 10^-6 eV and forces were converged to within 0.01 eV/Å. The dataset is represented in the -style json format, and a custom parser code (https://github.com/Sutton-Research-Lab/parsettahttps://github.com/Sutton-Research-Lab/parsetta) was developed for querying subsets and filtering. The structure data for hybrid iodoplumbates (A_xPb_yI_z) were obtained from the Cambridge Structural Database (CSD) <cit.>, accumulated until early 2022. We directly queried the CSD via a chemical search of lead and iodine compounds, which yielded over 1020 entries. We next removed entries that were not hybrid iodoplumbates, including (i) structures that did not contain organic components (removed by checking the chemical species beyond Pb and I in the compound) and (ii) structures with lead coordination environments other than octahedral (removed during the initial coordination environment detection). We also kept only the compounds with a Pb stoichiometry of up to 6 in the chemical formula. The final number of stoichiometrically-ordered data entries was 971, which includes fewer than 6 entries with duplicated IDs in the CSD. These duplicates were kept because of their different measurement conditions. The majority of processing steps were carried out using , which implements the workflow in Fig. <ref> for the analysis of coordination networks. We adopted a chemical environment parser <cit.> to analyze the structures by iteratively identifying the space groups of possible coordination environments centered locally on every atom. For the examples used in this work, the analysis was performed for the metal-centered coordination environments, i.e. on lead for the iodoplumbates, to speed up the process. These groupings are coordination octahedra and we represent them as base objects in . Subsequent algebraic operations between COs described in the following two subsections are implemented for either the atomic level (mesh graph) or the CO-level (inter-unit graph). The characteristics and diversity of the dataset are illustrated in Extended Data Fig. <ref>. Octahedral axial tilt calculation. A convex octahedron (O) comprises a convex set of points in ℝ^3 as the vertices, while the pairs and trios of adjacent vertices form the edges and faces, respectively. The vertices, edges and faces are also known as the elements of the polyhedron. O = {s_i}_i=6 = { s_1, ..., s_6 } Each point s_i is determined by its spatial coordinates (x_i, y_i, z_i). In computational geometry, convex octahedra are convex hulls of their vertices. Existing fast algorithms for convex hull construction can determine the edges and faces of the octahedron 3D point set <cit.> and can calculate the octahedral volume without specifically ordering the vertices. V(O) = V(Conv(O)) To order the vertices according to the right-hand spiral convention, we first define the spiral axis unit vector 𝐞_j (j = x, y, z) and order the atoms according to their projection onto the spiral axis. This separates the coordination atoms into apical and basal atoms. 
Then, we order the basal atoms with respect to the basal plane origin in right-handed (or left-handed) fashion accordingly. The ordering of the vertices of a graph is also called graph canonization <cit.>. The general steps of the tilting angle calculation are provided as follows, starting with coordinates of the unordered vertices and a set of axis vectors, (i) Calculate the projection of the coordinates of each vertex onto a given axis vector. (ii) Rank the projected coordinates of the vertices along the axis vector. (iii) Order the vertices by their ranking: the topmost and the lowermost vertices are the apical atoms, the remaining become the basal atoms. (iv) Check the vertex ordering by comparing with octahedral diagonals. (v) Calculate the apical vector, which connects the apical atoms. (vi) Calculate the tilting angle using the dot product between the apical vector and the given axis vector. (vii) Iterate over steps (i) - (vi) for the other axis vectors. Going from unordered to ordered vertices requires to determine a permutation matrix for the vertex ordering vector. Although simple to implement and fast, the projection-based method described above is subject to errors in structured with oblique lattices or with large tilting angles (e.g. > 30^∘). We introduces an ordering check as step (iv) based on the assumption that the three longest line segments connecting any two vertices are the diagonals (such as the vertex pairs {1, 6}, {2, 4}, {3, 5} in Fig. <ref>b) of the octahedron. This assumption is true if the octahedron is not extremely distorted, which is satisfied for the materials family studied here. Note that for a given orientation (defined by an axis vector), only one of the octahedral diagonals overlaps with the apical vector, unless two of diagonals are symmetrically equivalent with respect to the given orientation. The cases that fail the check are subject to one more permutation for correction. Another caveat to mention is that the ordering of the basal atoms contains rotational ambiguity (i.e. can be shifted clockwise or counterclockwise by the same index, without violating the spiral ordering), but this does not affect the ordering check because the vertices that form the octahedral diagonals in the basal plane always differ by 2 in their ordering. Octahedral network connectivity analysis. Construction of infinite polyhedra was first systematically discussed by Coxeter <cit.>, which requires all constituents to be connected in all dimensions. This would only apply to the 3D networks for those formed by PbI_6 COs or other compositions. The octahedral network we refer to in this work may also be finite in one to three dimensions, which is a special case of a polyhedral complex <cit.>. For connectivity analysis, the center atom is ignored. The network analysis proceeds through three steps: (i) Individual coordination octahedra (COs); (ii) Adjacency matrix of the octahedral network inter-unit graph; (iii) Properties of the mesh graph and the inter-unit graph. We define the inter-unit graph G_I(V_I, E_I) as the graph where each coordination environment is represented by a node and their the number of shared atoms between environments as edge weight <cit.>. The mesh graph G_M(V_M, E_M) forms the collective boundary of the octahedral network. It has a similar meaning in computer graphics when discussing the triangulated mesh of an object <cit.>. Geometric combinatorics. The analysis starts from the COs detected by a 3D Voronoi partitioning algorithm <cit.>. 
In , we also provide a faster version using only the nearest neighbors. We consider the network formed by bringing together individual octahedra (O_i, i=1, 2, ..., N_CO) with various connectivity types, a process called gluing in topology <cit.>. The ON (in the quotient space formed by gluing) may be expressed as, S^ON = ⋃_i=1^N_COO_i. Although elaborate gluing schemes exist for polyhedra <cit.>, here we formulate gluing as the real-space intersection(s) of two or more octahedra. Its chemical validity is grounded on the layer-stacking characteristics of crystal growth <cit.>. Reconstruction of the network is achieved through combinatorial query of the spatial relationship between the COs, which recovers the adjacency matrix of the inter-unit graph (A^IG). We define the connectivity of glued COs using the intersection of the individual octahedra. The intersection Δ_12 between octahedra O_1 and O_2 is Δ_12 = O_1 ∩ O_2 = {s_i|s_i ∈ O_1, s_i ∈ O_2, i=1, 2, ..., N_CO}. Two octahedra are corner-sharing if they share only one vertex and that their convex hulls have no spatial overlap (non-overlapping condition) except at the shared vertex (the cardinality of Δ_12 is Card(Δ_12) = 1). Similarly, octahedra O_1 and O_2 are edge-sharing if and only if they share only one edge (or two adjacent vertices, Card(Δ_12) = 2) and their convex hulls have no other overlap. The face-sharing criterion requires the existence of three adjacent, non-collinear vertices (Card(Δ_12) = 3) mutual to the two adjoining octahedra. For scenarios where octahedral metal centers have fractional site occupancies of two or more atoms, only the atom with the highest fractional occupancy is used to avoid overcounting and/or miscounting polyhedrons. These definitions are summarized into the following equation, A_i, j^IG = 0, if O_i and O_j are non-sharing, 1, if O_i and O_j are corner-sharing, 2, if O_i and O_j are edge-sharing, 3, if O_i and O_j are face-sharing. The matrix A^IG is symmetric (A_i, j^IG = A_j, i^IG) and generally exhibits a block structure. The limiting cases are the 0D structures <cit.>, where all entries are zero, and the 3D structures, where the blocks are highly overlapping. To calculate the mesh graph (MG) characteristics of the ON, we used the information gathered from the entries of A^IG. In our problem, the MG contains only closed surface mesh. A general treatment of the counting problem may be solved with a simple heuristic: the sharing of each adjacent pair of iodine (for iodoplumbates) atoms amounts to an edge-deletion operation. Similarly, a face-deletion operation amounts to sharing of an adjacent iodine trio, etc. Then, we used the following formulas to calculate the numbers of unique vertices (N_V^MG), edges (N_E^MG) and faces (N_F^MG), respectively. N_V^MG = 6N_CO - 3N_3^IG - 2N_2^IG - N_1^IG N_E^MG = 12N_CO - 3N_3^IG - N_2^IG N_F^MG = 8N_CO - 2N_3^IG The terms N_3^IG, N_2^IG, and N_1^IG are, respectively, the number of 3-connected (face-sharing), 2-connected (edge-sharing), and 1-connected (vertex-sharing) entries in the triangular submatrix of A^IG due to its symmetry. Eqs (<ref>)-(<ref>) may be derived using the intuitive definition of gluing in Eq. (<ref>). The initially isolated N_CO convex octahedra, each having 6 vertices, 12 edges, and 8 faces, are glued together. Each instance of face-, edge- and corner-sharing reduces the number of vertices by 3, 2, and 1, respectively, as in Eq. (<ref>). Each instance of face- and edge-sharing reduces the number of edges by 3 and 1, respectively, as in Eq. 
(<ref>). On the contrary, the total number of faces is only affected by the number face-sharing of octahedra, as in Eq. (<ref>). This counting method is generalizable to higher-order adjacency tensors <cit.> when more than two coordination octahedra are adjoined at a single vertex. This results in a general counting rule for the elements of the complex (i.e. mesh graph representation of the octahedral network) which may be written as a telescopic sum as follows, N_X^MG = a_XN_CO + ∑_i=1^3 (- b_X, iN_i^(2) + c_X, iN_i^(3) - ...) (X = V, E, F). The parenthesized numbers in the superscript of N indicate the body order of connectivity considered, with ordinary graphs (or “2-graphs”, corresponding to the IG in this example, or N_i^(2) N_i^IG) having two-body connectivity, first-order hypergraphs (or “3-graphs”) having three-body connectivity, etc. The summation index i in Eq. (<ref>) has the same meaning as in Eqs. (<ref>)-(<ref>). Deriving Eq. (<ref>) requires measure theory and the general form of the inclusion-exclusion principle <cit.>, which we leave for future work. We focus here on the truncated version used in this work up to N^(2). Euler characteristic. We calculate the Euler characteristic <cit.> for the MG according to, χ^MG = N_V^MG - N_E^MG + N_F^MG where N_V, N_E, and N_F are the counts of unique vertices, edges and faces of the mesh graph, respectively. Assuming only two-body connectivity and substituting Eqs. (<ref>)-(<ref>) into Eq. (<ref>), we obtain the Euler characteristic, χ^MG = 2N_CO - 2N_3^IG - N_2^IG - N_1^IG. Since the N_i^IG (i = 1, 2, 3) terms are nonnegative integers, the right-hand side of Eq. (<ref>) may be used to establish bounds and relations for χ^MG, so we write χ^MG (N_i^IG). Firstly, we consider the cases when the ON can be partitioned into n unconnected convex objects. This requires that χ^MG = 2n for n ∈ℕ, 0 < n ⩽ N_CO. One limiting case is when n=N_CO, where the ON may be partitioned into convex objects only when N_3^IG = N_2^IG = N_1^IG = 0, and χ^MG is reduced to 2N_CO. This leads to the upper bound, sup_N_i^IG⩾ 0, N_i^IG∈ℕχ^MG (N_i^IG)/N_CO = 2, which is the non-sharing limit specified in Fig. <ref>b. This is the scenario with completely isolated octahedra (also known as zero-dimensional structures <cit.>). Technically speaking, in this limiting case, the quotient space of ON in Eq. (<ref>) becomes a disjoint union of the subspaces containing individual octahedra. Secondly, the space-filling 3D framework completely made of corner-sharing octahedra is another special case. In this case, the term N_3^IG = N_2^IG = 0. If we ignore the unit-cell boundary, each octahedron is connected to 6 others, resulting in N_3^IG∼ 3N_CO, which yields χ^MG / N_CO∼ -1. This limiting case is not attained because of the existence of unit-cell boundaries, but it represents one of the most connected ONs. Moreover, the limiting cases of octahedral networks with any other dimensionality and modularity may be related to χ^MG using a similar type of analysis. Unsupervised polytype identification. The human-assisted unsupervised machine learning approach combines the outcome of dimensionality reduction and clustering with human label refinement. For modular materials composed of connected coordination polyhedra <cit.>, the “natural groupings” from chemical intuition should possess similar types of connectivity, which was our initial assumption. Several major polytypes exist for hybrid iodoplumbates <cit.>, while some rare polytypes exist with very few or even just one example. 
Identifying the characteristic structural groupings of the hybrid iodoplumbates is carried out using dimensionality reduction and clustering. Here, we combined Uniform Manifold Approximation and Projection (UMAP) <cit.> with common-nearest-neighbor (CommonNN) clustering <cit.> to identify the characteristic structural groupings of the hybrid iodoplumbates. Coordination network encoding. The representation obtained from the CNE scheme includes the stoichiometries and the octahedral network parameters described previously. For each material, we construct the feature vector for the coordination network as a six-tuple, ℳ = (y, z, N_CO, N_V^MG, N_E^MG, N_F^MG), where y and z are the stoichiometries of Pb and I in the chemical formula A_xPb_yI_z, N_CO is the number of COs detected in the structure. The parameters of the mesh graph in Eq. (<ref>) are calculated using the truncated expressions as in Eqs. (<ref>)-(<ref>). The feature variability comes from the different definitions of the crystal structure, including (i) the space group of the unit cell, (ii) the number of formula units within the unit cell, and (iii) the boundary of the unit cell. These factors influence all terms in the feature vector. A full list of features for the A_xPb_yI_z compounds is provided in Supplementary Table 6. Dimensionality reduction. UMAP is a nonlinear dimensionality reduction algorithm, which minimizes the fuzzy set cross-entropy (L_UMAP) between the input data in the high-dimensional manifold (e.g. the p values) and its counterpart in the low-dimensional target embedding (e.g. the q values). L_UMAP = ∑_i ≠ j(p_ijlog(p_ij/q_ij)+(1-p_ij) log(1-p_ij/1-q_ij)) The i and j are the data indices, which in our case refer to the indices of distinct materials. The L_UMAP term is minimized via stochastic gradient descent to obtain the optimal embedding <cit.> given a hyperparameter setting. We employed the implementation in (https://github.com/lmcinnes/umaphttps://github.com/lmcinnes/umap) with a Euclidean distance between data points in the input feature space specified by Eq. (<ref>). Interactive clustering. CommonNN is a density-based clustering algorithm that can effectively recognize clusters with a more irregular shape (i.e. deviating from globular) <cit.>. We used its implementation in (https://github.com/scikit-learn-contrib/scikit-learn-extrahttps://github.com/scikit-learn-contrib/scikit-learn-extra), and tuned its clustering hyperparameters via grid search by maximizing the silhouette score <cit.>. Our procedure began with a mix of homogeneous and heterogeneous clusters that were progressively refined and separated based upon their distinguishable features. (see Supplementary Table <ref>). Representative members of each cluster were then visually inspected by a human expert with a crystallographic software (Vesta 3 <cit.>) to produce the polytype labels according to the naming rules described in the Supplementary Section 2.3. We assessed the quality of the initial clustering using a set of internal and external cluster validation metrics <cit.> and report the results in Supplementary Table <ref>. A list of procedures is also provided in Supplementary Table <ref>. §.§ Acknowledgments We thank G. Csanyi (University of Cambridge) and H.-C. zur Loye (University of South Carolina) for helpful discussions. The work was supported by the Air Force Research Laboratory through the research grant (FA8655-21-1-7010). CBM and RJM were supported by the U.S. 
Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), Hydrogen and Fuel Cell Technologies Office (HFTO), and specifically the HydroGEN Advanced Water Splitting Materials Consortium, established as part of the Energy Materials Network under this same office (award DE-EE0008088). CBM and RJM also acknowledge support from the National Science Foundation (awards NSF CHEM-1800592 and CBET-2016225). C.S. acknowledges start-up support from the University of South Carolina. §.§ Data Availability The single oxide perovskite polymorph dataset is available on GitHub (https://github.com/rymo1354/gii_minimization/tree/main/structures_and_energieshttps://github.com/rymo1354/gii_minimization/tree/main/structures_and_energies). The hybrid iodoplumbate dataset, including the calculated features and labels, is provided as supplementary files. §.§ Code Availability The code for the analysis of octahedral networks is accessible on GitHub (https://github.com/RealPolitiX/crystmorphhttps://github.com/RealPolitiX/crystmorph). The final version will be made public upon acceptance of the manuscript. 1.5 §.§ Author contributions R.P.X. conceived and coordinated the project, developed the theory and code for the quantification of octahedral networks and carried out the analysis. R.J.M. provided the oxide perovskite dataset and conducted the charge analysis under the supervision of C.B.M.. I.H. provided support on the crystal chemistry and nomenclature of hybrid iodoplumbates. C.S. and C.B.M. provided support on the problem formulation and interpretation of results. R.P.X. wrote the initial draft with inputs from R.J.M. and I.H.. All authors contributed to discussion and revision of the manuscript to its final form. §.§ Competing interests The authors declare no competing interests in the content of the article. 30pt30ptSupplementary Information 1.2 30pt30ptFrom structure mining to unsupervised exploration of atomic octahedral networks § OCTAHEDRAL TILTING TREND AND OUTLIER DISCOVERY §.§ Tilting distributions in oxide perovskite polymorphs Using the same procedure as demonstrated in the main text Fig. 3, we analyzed the other perovskite tilt classes, the results are shown in Supplementary Fig. <ref> for tilt classes a^+a^+a^+ (#3), a^+a^+c^- (#5) and a^-b^-b^- (#13), in Supplementary Fig. <ref> for tilt classes a^-a^-a^- (#14), a^0b^+b^+ (#16) and a^0b^+c^- (#17), and in Supplementary Fig. <ref> for tilt classes a^0b^-b^- (#20), a^0a^0c^+ (#21) and a^0a^0c^- (#22). The index number of the tilt class <cit.> is in parenthesis. Most of the computationally optimized atomic structures of ABO_3 in the dataset show a unique set of tilting angles (θ_x, θ_y, θ_z), such as tilt classes #3, #5, #10, #16, #17, #21-#23. Others show two or more sets, largely due to the oblique lattice system and the imperfect symmetry constraints. For presentation, the calculated tilting angles for each material are averaged along each axis. Some of the structures in the geometry optimization end up in a different phase are also reflected in the axis-resolved tilting angles, such as the those with angles less than 20^∘ and larger than 0^∘ in Supplementary Fig. <ref>b. 
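The axis-resolved tilting angles shown in this section are obtained with the projection-based procedure described in the Methods. A compressed sketch of steps (i)-(vi) is given below; it assumes the octahedron is supplied as six ligand coordinates relative to the central atom (a hypothetical input array) and omits the diagonal-based ordering check.

```python
import numpy as np

def axial_tilt_angles(vertices, axes=np.eye(3)):
    """Tilting angles (degrees) of one octahedron about a set of reference axes.

    vertices : (6, 3) array of ligand coordinates relative to the central atom.
    axes     : (3, 3) array whose rows are the reference axis vectors.
    """
    angles = []
    for axis in axes:
        axis = axis / np.linalg.norm(axis)
        proj = vertices @ axis                    # (i) project each vertex onto the axis
        order = np.argsort(proj)                  # (ii)-(iii) rank and split apical/basal atoms
        apical = vertices[order[-1]] - vertices[order[0]]   # (v) apical vector
        cos_t = abs(apical @ axis) / np.linalg.norm(apical)
        angles.append(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))  # (vi) tilt angle
    return np.array(angles)

# An ideal octahedron rotated by 10 degrees about z shows ~10 degree tilts about x and y.
theta = np.radians(10.0)
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
ideal = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
print(axial_tilt_angles(ideal @ rot_z.T))   # approximately [10., 10., 0.]
```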
§.§ Oxidation state analysis We assigned oxidation states using the DFT+U onsite magnetic moment(s) rather than through Bader charge analysis <cit.> due to charge self-regulation that arises in the 3d transition metals <cit.>, whereby switching oxidation states negligibly changes the net charge(s) contained within the zero flux surfaces of the electron charge density. The onsite magnetization is therefore a better indicator of 3d transition metal (e.g., Mn and Ti) oxidation states in ABO_3 perovskites than the Bader charge. The oxidation states of the orthorhombic lanthanide manganite (LnMnO_3) perovskites and lanthanide titanate (LnTiO_3) perovskites with space group Pnma were deduced from the DFT-computed d-orbital magnetic moments of Mn and Ti sites, which occupy the B-sites of these perovskites. The formal oxidation states of the lanthanide A-sites (Ln), Mn or Ti B-sites, and oxygen (O) X-sites of each composition were constrained to be charge-balanced, such that they summed to zero. Tables S1 and S2 order the perovskite compositions by Ln atomic number, and tabulate the d-orbital magnetic moments of Mn and Ti, as well as the f-orbital magnetic moments of the A-site lanthanides. In the manganite perovskite series, Mn^3+ has the electronic configuration [Ar]3d^4 and Mn^4+ has the electronic configuration [Ar]3d^3. Mn with DFT-computed magnetic moments that approach 4 are most likely in the +3 oxidation state (high-spin), whereas Mn with magnetic moments that approach 3 are most likely in the +4 oxidation state (high-spin). For compositions with Ln = La, Pr, Nd, Sm, Gd, Tb, Dy, Ho, Er, and Lu, the magnitudes of Mn magnetic moments are greater than or equal to 3.818, suggesting that Mn is in the +3 oxidation state; to ensure charge balance, these lanthanides must also adopt the +3 oxidation state. The magnetic moment of Mn in YbMnO_3 is 3.065, suggesting that Mn adopts the +4 oxidation state in this composition, and that Yb must therefore adopt the +2 oxidation state to ensure charge balance. The magnetic moment of Mn in the EuMnO_3 composition is 3.677, suggesting it is between the +3 and +4 oxidation states. Accordingly, Eu (which has electronic configurations of [Xe]4f^6 and [Xe]4f^7 in the +3 and +2 oxidation states, respectively) has a magnetic moment of 6.392, or between the expected magnetizations of 6 for Eu^3+ (high-spin) and 7 for Eu^2+ (high-spin). In the titanate perovskite series, Ti^3+ has the electronic configuration [Ar]3d^1 and Ti^4+ has the electronic configuration [Ar]. Ti with DFT-computed magnetic moments that approach 1 are most likely in the +3 oxidation state, whereas Ti with magnetic moments that approach 0 are most likely in the +4 oxidation state. For compositions with Ln = La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Ho, Er, Tm and Lu, the magnitudes of Ti magnetic moments are greater than or equal to 0.727, suggesting that Ti is in the +3 oxidation state; to ensure charge balance, these lanthanides must also adopt the +3 oxidation state. The magnetic moments of Ti in the EuTiO_3 and YbTiO_3 compositions are 0, which suggests that Ti is in the +4 state in these compositions. Yb and Eu must therefore adopt the +2 oxidation states for these compositions to ensure charge balance.
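The mapping from DFT+U onsite magnetic moments to formal oxidation states described above is simple enough to automate. The Python sketch below illustrates the logic for the manganite series; the threshold of 3.4 μ_B separating Mn^3+ (moments near 4) from Mn^4+ (moments near 3) is our own illustrative choice rather than a value prescribed by the analysis, and the function names are hypothetical.

```python
# Minimal sketch: assign formal oxidation states in LnMnO3 perovskites from
# DFT+U onsite magnetic moments, following the charge-balance argument above.
# The 3.4 mu_B threshold separating Mn3+ (~4 mu_B) from Mn4+ (~3 mu_B) is an
# illustrative assumption, not a value prescribed by the original workflow.

def assign_mn_oxidation_state(mn_moment: float, threshold: float = 3.4) -> int:
    """Return the formal Mn oxidation state inferred from |m_d| (in mu_B)."""
    return 3 if abs(mn_moment) >= threshold else 4

def assign_ln_oxidation_state(mn_state: int) -> int:
    """Charge balance for LnMnO3: Ln + Mn + 3*(-2) = 0."""
    return 6 - mn_state

# Example values quoted in the text (a typical LnMnO3 member and YbMnO3).
for formula, m_mn in [("LaMnO3", 3.818), ("YbMnO3", 3.065)]:
    mn = assign_mn_oxidation_state(m_mn)
    ln = assign_ln_oxidation_state(mn)
    print(f"{formula}: Mn{mn:+d}, Ln{ln:+d}")
```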
§ UNSUPERVISED LEARNING OF OCTAHEDRAL NETWORK MOTIFS §.§ Comparison of representation and machine learning methods The entire data processing pipeline for polytype identification discussed in the Main Text is summarized in Supplementary Fig. <ref>. The data featurization step uses the coordination network encoding (CNE) scheme, which yields a scale-invariant representation of the connectivity pattern among coordination environments. To identify hybrid materials by their inorganic framework polytypes, a dimensionality reduction step is required to improve the clustering outcome. We find that, compared with linear dimensionality reduction methods, nonlinear methods such as t-distributed stochastic neighbor embedding (tSNE) <cit.> and uniform manifold approximation and projection (UMAP) <cit.> result in more separable embeddings, which enables downstream clustering and the discovery of subpopulations. To check the effectiveness of the CNE in distinguishing between inorganic framework polytypes, we compared its outcome against that of the smooth overlap of atomic positions (SOAP) representation <cit.>, as implemented in <cit.>. The results, shown in Supplementary Fig. <ref>, include three sets of 2D embeddings from the two representations and principal component analysis (PCA), tSNE, and UMAP, respectively. For the numerical experiment, we used the implementation of PCA in <cit.> and the parallelized implementation of tSNE in <cit.> (https://github.com/pavlin-policar/openTSNE), which includes the exaggeration technique in optimization <cit.> to increase the distances between groupings of similar data points and thereby improve the clusterability of the low-dimensional embedding. The data clustering used the feature matrix obtained from the corresponding encoding method (CNE or SOAP) and subsequent feature scaling. The original implementation of UMAP in <cit.> was used for the numerical experiment. The low-dimensional embedding obtained from linear dimensionality reduction, such as PCA, has been used previously for visualizing chemical space <cit.>. Yet for grouping and labeling the subpopulations of structures (here being the polytypes of A_xPb_yI_z), its use is limited because of the clutter that results from the linear mapping. The outcome is also sensitive to extreme values, which in our case correspond to materials with large unit cells. As a result, in Supplementary Fig. <ref>a-b, we only show the central parts of these embeddings. For nonlinear dimensionality reduction methods like tSNE and UMAP, the subpopulations are clearly revealed and significantly easier to separate with clustering algorithms. Between the two representations compared here, we find that the clusters derived from the CNE are predominantly globular, which requires only simple (e.g. convex, linear) decision boundaries to isolate. Comparatively, the scale dependence and higher dimensionality of the SOAP representation produce a more complex low-dimensional embedding, with elongated cluster shapes, resulting in extended and highly nonlinear boundaries between clusters, which makes the subsequent clustering more challenging. The elongated shapes also indicate the heterogeneity of the clusters, which has been confirmed upon human inspection. The general appearances are similar between the outcomes from tSNE and UMAP starting from the same representation, as shown in Supplementary Fig. <ref>c-f.
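As a concrete illustration of the workflow compared here, the following Python sketch computes PCA, tSNE (openTSNE), and UMAP embeddings of a scaled feature matrix and then applies CommonNN clustering (scikit-learn-extra) with a small silhouette-driven grid search, mirroring the interactive clustering step of the Methods. The feature file name and all hyperparameter values are illustrative placeholders, not the settings used to produce the reported results.

```python
# Sketch of the representation/embedding/clustering comparison. The feature
# matrix X (e.g. the CNE six-tuples) is assumed to be precomputed; all
# hyperparameter values below are illustrative placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import CommonNNClustering
from openTSNE import TSNE
import umap

X = np.load("cne_features.npy")          # shape (n_materials, 6), assumed file
Xs = StandardScaler().fit_transform(X)

embeddings = {
    "pca":  PCA(n_components=2).fit_transform(Xs),
    "tsne": np.asarray(TSNE(n_components=2, exaggeration=2, n_jobs=8).fit(Xs)),
    "umap": umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1,
                      metric="euclidean").fit_transform(Xs),
}

def best_commonnn_labels(emb):
    """Grid search over CommonNN hyperparameters, maximizing the silhouette score."""
    best = (-1.0, None)
    for eps in np.linspace(0.2, 2.0, 10):
        for min_samples in (3, 5, 10):
            labels = CommonNNClustering(eps=eps, min_samples=min_samples).fit_predict(emb)
            if len(set(labels)) > 1:           # silhouette needs >= 2 clusters
                score = silhouette_score(emb, labels)
                if score > best[0]:
                    best = (score, labels)
    return best

for name, emb in embeddings.items():
    score, labels = best_commonnn_labels(emb)
    print(name, "best silhouette:", round(score, 3))
```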
Quantitative comparisons between the low-dimensional embeddings are presented in the next subsection. §.§ Human label refinement and quality assessment The dimensionality reduction and clustering provide a starting point to facilitate human validation and annotation of the octahedral network connectivity, because, to our knowledge, no tools exist yet for this type of work. For clusters that exhibit apparent substructures in the low-dimensional embedding, we conducted re-clustering to extract the underlying structure. We summarize the procedure in three steps, as listed in Supplementary Table <ref>. The quality of the unsupervised machine learning (dimensionality reduction and clustering) is essential for the subsequent label refinement. Human judgement of the clustering outcome requires a preconceived notion of similarity between crystal structures. Here, the cluster members (i.e. materials) generally possess the same kind of connectivity within their inorganic frameworks, regardless of the organic component incorporated into the structure. Although the octahedral network motifs exhibit a power-law distribution when ranked by their occurrence in the dataset (see Supplementary Fig. <ref>), a considerable number of clusters (before refinement) already have high purity. We compare here the clustering quality from different representations (CNE and SOAP) and dimensionality reduction algorithms (PCA, tSNE, and UMAP), using human-refined labels to quantitatively assess the result of the overall computational workflow. The assessment uses different metrics <cit.> as listed in Supplementary Table <ref>. Among the metrics, both the silhouette score <cit.> and the Davies-Bouldin index <cit.> are used for internal validation of the clustering. They are calculated using the features and algorithm-generated cluster labels. They quantify the spatial separation of clusters using intracluster and intercluster distances. For the silhouette score, which is bounded within [-1, 1], the larger the value, the better the clustering quality, whereas for the Davies-Bouldin index, which has a lower bound of 0 and no upper bound, a better clustering results in a smaller value <cit.>. The adjusted Rand index <cit.> is an external validation metric that compares the clustering outcome with a known standard, which in this case is the set of human-refined polytype labels. The metric is adjusted for random guessing and is generally bounded within [0, 1], with 0 indicating that the clustering is indistinguishable from a random assignment and 1 meaning that it is identical to the standard, i.e. human expert labels, up to label permutation. A negative value may also be obtained sometimes, which indicates less accuracy than random assignment <cit.>. We verified the 0 baseline with randomly generated cluster labels. The algorithm benchmark results can be compared visually in Supplementary Fig. <ref>, where the first two dimensions of the embeddings are shown. The calculated cluster validation metrics in Supplementary Table <ref> refer to these two dimensions that account for most of the variance in the data. We identify the following characteristics from the results: (1) Starting with the CNE generally achieves a higher score than the SOAP representation given the same dimensionality reduction and subsequent clustering algorithm.
In every case, a positive value for the adjusted Rand index is obtained, indicating that the cluster assignment using unsupervised machine learning algorithms is better than random guessing. (2) For both types of representations, the use of nonlinear dimensionality reduction (e.g. tSNE, UMAP) improves the outcome over that of PCA, which is constrained to a linear embedding. The improvement is more significant for the representation from the CNE. (3) The difference between choosing tSNE or UMAP for the pipeline (given appropriate parameter settings) is less significant. It is also worth noting that the CNE + PCA approach already produces diamond-shaped patches in the embedding space, although the clusters are hard to delineate due to insufficient separation and clutter. This is drastically improved using nonlinear dimensionality reduction, which contains tunable parameters that can increase intercluster separation and reduce intracluster distance. The linear embedding obtained from PCA tends to be affected by extreme outliers (some are not shown due to limited space). In this case, these outliers correspond to materials with a large unit cell (therefore a large number of PbI_6 octahedra). §.§ Queryable naming rules for inorganic framework polytypes There exist a few frameworks for the naming of modular (or recombination) materials, largely based on the fragments <cit.>. For 3D frameworks, the Ramsdell notation for polytypes <cit.> is also used. The types of frameworks include isolated units (0D), chains (1D), sheets or layers (2D), and nets or frameworks (3D) <cit.>, and these conventions have been developed in the context of purely inorganic binary compounds <cit.>. For hybrid iodoplumbates (and similarly for other halometalates), we can treat the [Pb_yI_z]^n- part as an ionic binary compound, which is the inorganic framework. We propose here a simple set of rules for generating searchable framework names, which may also be queried to gather summary statistics. The naming rules are based on the constituents of the framework and are constructed in a multiscale fashion to reflect the levels of connectivity present in the material structures. We list the rules below with the nickname for each in bold letters. 1. Degree of connectivity. Define the degree of connectivity as the number of shared vertices between two octahedra, which could be 0, 1, 2, or 3, for non-sharing (NS), corner-sharing (CS), edge-sharing (ES), or face-sharing (FS), respectively. The naming starts from the octahedral units with the highest degree of connectivity and proceeds in a decreasing order. 2. Building units. In atomic ONs, the primary (building) units of the structure are the coordination octahedra with atomic ligands; the secondary (building) units are constructed from primary units through gluing operations such as CS, ES, and FS, which form multimers. Similarly, a tertiary (building) unit can be constructed by gluing secondary units through CS, ES, FS, or a combination of them. 3. Multiplicity of building units. Find the smallest and most connected secondary unit (a multimer) within the octahedral network. In graph-theoretical terms, this amounts to the maximally connected subgraph of the inter-unit graph of the ON. Then, identify the multiplicity of the building units. For a multimeric secondary unit, the multiplicity is the number of octahedra. A multimer (or n-mer) corresponds to a secondary unit with a multiplicity of n (n = 1, 2, ..., ∞).
An octahedral polymer or polymeric chain of octahedra is equivalent to an ∞-mer (infinite-mer). Naturally, a dimer, trimer, tetramer, and pentamer may be referred to as 2-mer, 3-mer, 4-mer, and 5-mer, respectively. 4. Elementary and complex multimers. A multimer can be CS, ES, FS, or a mixture of them, which are called, respectively, CS multimer, ES multimer, FS multimer, or mixed-connectivity multimer. An elementary multimer has only one type of connectivity, while a complex multimer has at least two types of connectivity. 5. Decorated multimers. A multimer can be linear (1D) or, equivalently, flat (2D), zigzag (1D) or, equivalently, corrugated (2D), curved, or circular, which are decorative terms to illustrate the fine features at larger spatial scales (typically requiring at least a 3-mer unit to represent). A linear multimer is the default type, so the adjective may be dropped in its name, but for the other types, the adjective is necessary. They are the corrugated n-mer, curved n-mer, or circular n-mer, respectively. In a corrugated multimer, the octahedral units are connected consecutively on alternating sides of the previous one. In a curved multimer, the octahedral units get progressively closer to the initial one. In a circular multimer, the terminal octahedral unit overlaps completely with the initial one. 6. Framework dimensionality. An octahedral framework can be polymeric (1D chain or polymer), layered (2D layer), or scaffolded (3D framework). This level of specification naturally yields the dimensionality of the framework <cit.>. 7. Naming sequence. The name of the inorganic framework is constructed sequentially from the primary unit to the secondary unit and above. The name is written in the possessive form “U_n of U_n-1 of ... of U_2 of U_1”, with increasing hierarchy from U_1 to U_n. We note here that the above naming rules will not be able to cover all possible combinations of coordination octahedra or ON architectures, as their variability grows exponentially; nor can any of the conventions that have been proposed so far in the crystallography community <cit.>. Nevertheless, these rules are able to handle the vast majority of cases for halometalates (and likely for related metalates) involving ONs and contain a level of generality within their scope. Practically speaking, the rules we have proposed here produce an explicit description in a composable and queryable name for generating a statistical summary of the dataset as well as for machine learning tasks, such as predicting the connectivity types, the framework dimensionality, the multiplicity of building units, etc., conditioned on the pairing moieties. These characteristics of the inorganic framework are made explicit in the polytype names. To illustrate the rationale behind these naming rules, we provide two examples in Supplementary Fig. <ref>. More examples are given in Supplementary Table 5. §.§ Diversity and surprises in iodoplumbate octahedral networks The frequency of occurrence of the ONs follows a power-law distribution, as shown in Extended Data Fig. <ref>a, similar to many natural phenomena <cit.>. From the interactive clustering of the hybrid iodoplumbate dataset described in this work, we obtain the essential statistical summary of the inorganic framework polytypes, as shown in Extended Data Fig. <ref>b-c. This quantitative information and the associated examples allow us to arrive at the following three main observations about the distribution of the inorganic framework.
* Pauling's third rule is violated in hybrid iodoplumbates. Pauling's third rule states that connected coordination polyhedra follow a stability trend where corner-sharing (CS) > edge-sharing (ES) > face-sharing (FS) configurations <cit.>. The statistical summary in Extended Data Fig. <ref> shows that the polytypes of hybrid iodoplumbates with only CS octahedra are the most common, occupying 52.7% of the surveyed structures. However, those with only FS PbI_6 octahedra are significantly more common than those with only ES PbI_6 octahedra, which occupy 28.8% and 7.1% of the surveyed structures, respectively. Moreover, the co-existence of CS and FS PbI_6 octahedra is observed in many structures. Both of these observations are in stark contrast to Pauling's third rule <cit.>. * A continuum exists between 0D and 3D structures by varying network connectivity. For hybrid iodoplumbates and related families of materials, the dimensionality of the system is determined by the spatial arrangements between the organic and inorganic components (framework dimensionality). Although representative members such as the 3D perovskite or the 1D face-sharing hexagonal perovskite are common examples, our analysis of the hybrid iodoplumbate dataset indicates that the network connectivity plays an important role in dimensionality determination. An example with FS 3-meric units is found in the Main Text Fig. <ref>. The 3-mer can form 0D structures surrounded by an organic cation. When the 3-mers are connected via a shared I–I edge, the structure can become 1D. When the 3-mers connect via shared iodine vertices, the structure can become 2D with layers of CS FS 3-mers. Another example, illustrated in Supplementary Fig. <ref>, starts with a 0D structure with isolated ES 3-mers, which are then connected through edge-sharing into a polymer. The polymers are connected to form layers in 2D and, finally, the layers are connected to form 3D structures with the organic component filling in the gaps. * Modular design from building unit augmentation. One design strategy for building complex structures is to augment the secondary units. In octahedral networks, this amounts to extending the number of connected components. For layered hybrid perovskites, structural augmentation from stacking an increasing number of corner-sharing layers along a defined cleaving plane is well-known <cit.>. This strategy, in principle, does not change the dimensionality of the system, but allows band structure tuning <cit.>. We point out here, from the large hybrid iodoplumbate dataset, analogous strategies to extend the structural complexity in lower-dimensional systems by augmenting the secondary unit. They correspond to the accretional homologous series first defined in inorganic crystallography <cit.>, where the secondary unit grows in size. The difference is that in hybrid materials, a sublattice (here referring to the [Pb_yI_z]^n- framework) forms the homologous series, while the organic cation changes to adapt to the size of the composite anion. We illustrate this point with two example series in Supplementary Fig. <ref>: In the first series, the corner-sharing motif is extended from 1 to 5 while keeping the framework in 1D. In the second series, the face-sharing motif is extended from 2 to 4 while keeping the framework in 0D.
http://arxiv.org/abs/2306.03327v1
20230606003951
Superfluid extension of the self-consistent time-dependent band theory for neutron star matter: Anti-entrainment vs. superfluid effects in the slab phase
[ "Kenta Yoshimura", "Kazuyuki Sekizawa" ]
nucl-th
[ "nucl-th", "astro-ph.HE", "cond-mat.quant-gas" ]
[][email protected] Department of Physics, School of Science, Tokyo Institute of Technology, Tokyo 152-8551, Japan [][email protected] Department of Physics, School of Science, Tokyo Institute of Technology, Tokyo 152-8551, Japan Nuclear Physics Division, Center for Computational Sciences, University of Tsukuba, Ibaraki 305-8577, Japan RIKEN Nishina Center, Saitama 351-0198, Japan Background The inner crust of neutron stars consists of a Coulomb lattice of neutron-rich nuclei, immersed in a sea of superfluid neutrons with background relativistic electron gas. A proper quantum mechanical treatment for such a system under a periodic potential is the band theory of solids. The effect of band structure on the effective mass of dripped neutrons, the so-called entrainment effect, is currently in a debatable situation, and it has been highly desired to develop a nuclear band theory taking into account neutron superfluidity in a fully self-consistent manner. Purpose The main purpose of the present work is twofold: 1) to develop a formalism of the time-dependent self-consistent band theory, taking full account of nuclear superfluidity, based on time-dependent density functional theory (TDDFT) extended for superfluid systems, and 2) to quantify the effects of band structure and superfluidity on crustal properties, applying the formalism to the slab phase of nuclear matter in the β equilibrium. Methods The fully self-consistent time-dependent band theory, proposed in a previous work [K. Sekizawa, S. Kobayashi, and M. Matsuo, Phys. Rev. C 105, 045807 (2022)], is extended for superfluid systems. To this end, superfluid TDDFT with a local treatment of pairing, known as time-dependent superfluid local density approximation (TDSLDA), is formulated in the coordinate space with a Skyrme-type energy density functional and the Bloch boundary condition. A real-time method is employed to extract the collective mass of a slab and that of protons, which in turn quantifies conduction neutron number density and the neutron effective mass, i.e., the entrainment effect. Results Static calculations have been performed for a range of baryon (nucleon) number density (n_b=0.04–0.07 fm^-3) under the β-equilibrium condition with and without superfluidity, for various inter-slab spacings. From the results, we find that the system gains energy through the formation of Cooper pairs for all densities examined, which validates the existence of superfluidity in the inner crust of neutron stars. From a dynamic response to an external potential, we extract the collective mass of a slab and that of protons immersed in neutron superfluid. From the results, we find that the collective mass of a slab is substantially reduced by 57.5–82.5% for n_b=0.04–0.07 fm^-3, which corresponds to an enhancement of conduction neutron number density and, thus, to a reduction of the neutron effective mass, which we call the anti-entrainment effect. A comparison of results with and without superfluidity reveals that superfluid effects slightly enhance the anti-entrainment effects for the slab phase of neutron-star matter. We discuss novel phenomena associated with superfluidity, quasiparticle resonances in the inner crust, which are absent in normal systems. Conclusions Our fully self-consistent, microscopic, superfluid band calculations based on (TD)DFT showed that the effective mass of dripped neutrons is reduced by about 20–40% for n_b=0.04–0.07 fm^-3 due to the band structure effects and superfluidity slightly enhances the reduction. 
Superfluid extension of the self-consistent time-dependent band theory for neutron star matter: Anti-entrainment vs. superfluid effects in the slab phase Kenta Yoshimura and Kazuyuki Sekizawa June 6, 2023 ================================================================================ § INTRODUCTION The band theory of solids, with the aid of Kohn-Sham density functional theory (DFT) <cit.>, has become a standard tool in materials science to understand and predict abundant properties of a wide variety of materials. Its time-dependent extension based on time-dependent DFT (TDDFT) <cit.> has also been extensively developed to explore electron dynamics under a strong laser field (see, e.g., Refs. <cit.> and references therein). Recently, the band theory of solids has been applied to studying properties of the inner crust of neutron stars <cit.>, where nuclear bundles form crystalline structures, in a similar way as terrestrial solids, coexisting with superfluid neutrons and a relativistic electron gas. In line with the previous work <cit.>, this paper aims to develop a fully self-consistent, microscopic framework to describe the structure and dynamics of the inner crust of neutron stars, taking fully into account both the band structure and superfluid effects on the same footing. The band theory of solids is not yet a popular approach in the context of neutron-star studies. While a first indication of the necessity of such calculations was made in 1994 <cit.>, realistic band calculations were achieved in 2005 for slab and rod phases <cit.>, followed by three-dimensional (3D) calculations <cit.>. It has been shown that, based on band calculations on top of a nuclear potential obtained with the Thomas-Fermi-type approximation <cit.>, the neutron effective mass is increased by factors of 1.02–1.03 and 1.11–1.40 in the slab and rod phases, respectively. Furthermore, 3D calculations showed that band structure effects always enhance the neutron effective mass and, most strikingly, in a certain low-density region the neutron effective mass was found to be 10 times or more larger than the bare neutron mass. The increase of the neutron effective mass is associated with Bragg scattering off the periodic potential, which immobilizes dripped neutrons. The latter effect is called the entrainment effect. Since the notable change of the neutron effective mass turned out to affect various interpretations of astrophysical phenomena of neutron stars such as pulsar glitches <cit.> and thermal as well as crustal properties <cit.>, it has attracted increasing interest over the years. (See Ref. <cit.> for a review of the band calculations of Chamel et al. and discussion on related topics.) The situation regarding the band structure effects on dripped neutrons in the inner crust of neutron stars is still highly controversial. While the band calculations assume a perfect crystalline structure, disorder of the crystal may reduce the band structure effects, as discussed in Ref. <cit.>. Apart from the disorder effects, recently, fully self-consistent, microscopic band calculations based on DFT and its time-dependent extension (TDDFT) have been developed for the slab phase of nuclear matter <cit.>. In Refs. <cit.>, based on both static and dynamic calculations, the neutron effective mass was found to be reduced for the slab phase, which is called the anti-entrainment effect <cit.>. The latter observation is contradictory to the results of Ref.
<cit.>, possibly due to an improper definition of “free” neutron density for the slab and rod phases in the work of Carter et al. <cit.>, as pointed out in Ref. <cit.>. Further applications of the fully self-consistent band theory to higher spatial dimensions have been highly desired. However, in the aforementioned works pairing correlations were neglected, and it is not at all obvious how superfluidity affects the entrainment effect. In Ref. <cit.>, it was argued, within the Bardeen-Cooper-Schrieffer (BCS) approximation, that pairing correlations do not affect the results much and that the strong entrainment would remain. In later studies, pairing effects were studied for a 1D periodic potential within the BCS as well as Hartree-Fock-Bogoliubov (HFB) [also called Bogoliubov-de Gennes (BdG)] approximations, showing that the BCS treatment is not enough and a self-consistent treatment of pairing correlations is essentially important to correctly quantify the entrainment effect <cit.>. However, none of those studies are fully self-consistent. In neutron-star matter, there is no “external” potential (except the gravitational one, which is negligible in investigating nuclear-scale microphysics), and neutrons and protons self-organize to arrange a variety of crystalline structures. Therefore, it is an imperative task to develop a fully self-consistent framework to draw a clear conclusion on the magnitude of the entrainment effect. In the present paper, before extending the framework to higher spatial dimensions, we shall first develop a formalism of fully self-consistent, microscopic time-dependent superfluid band calculations based on TDDFT for superfluid systems. According to the Hohenberg-Kohn theorem <cit.> with the Kohn-Sham scheme <cit.>, DFT can, in principle, be an exact approach to quantum many-body problems. Its time-dependent extension, based on the Runge-Gross theorem <cit.>, gives us access to non-linear dynamics as well as excited states of quantum many-body systems. While (TD)DFT for superfluid (superconducting) systems was initially formulated with a non-local pair potential Δ(r,r') <cit.>, subsequent developments of its local treatment with a proper pairing renormalization resulted in the so-called (time-dependent) superfluid local density approximation [(TD)SLDA] <cit.>. In the nuclear physics context, on the other hand, (TD)DFT was originally developed in the form of mean-field theories with effective nucleon-nucleon interactions, like the Skyrme (TD)HF approach. Because of this historical reason, one may confuse (TD)SLDA with (TD)HFB with a zero-range effective pairing interaction. We note, however, that the pairing renormalization scheme <cit.> makes the theory cutoff independent if the cutoff is taken to be sufficiently large and allows one to work with a local pairing field Δ(r). Thus, in the same way as we regard Skyrme (TD)HF as (TD)DFT, we can regard Skyrme (TD)HFB as superfluid (TD)DFT, if such a proper pairing renormalization scheme is adopted. TDSLDA <cit.> has been successfully applied not only to nuclear systems, but also to cold-atomic systems. In this work, we develop a fully self-consistent (time-dependent) superfluid band theory based on (TD)SLDA, imposing the Bloch boundary condition on quasiparticle wave functions, which corresponds to an extension of the previous work <cit.> to superfluid systems.
By applying the formalism to the slab phase of neutron-star matter under the β-equilibrium condition, we demonstrate the validity of our formalism and quantify the effects of superfluidity and band structure. The article is organized as follows. In Sec. <ref>, we explain detailed formalisms of the self-consistent time-dependent superfluid band theory for the inner crust of neutron stars, and provide computational details in Sec. <ref>. In Sec. <ref>, we present the results of time-dependent superfluid band calculations for the slab phase of nuclear matter under the β equilibrium condition. In Sec. <ref>, conclusions and prospect are given. § FORMULATION §.§ The HFB theory §.§.§ The matrix representation Let us first succinctly recapitulate the general framework of the HFB theory, clarifying our notations. Here we start with a generic Hamiltonian in the second quantization form, Ĥ = ∑_klt_klâ_k^†â_l + 1/4∑_klmnv̅_klmnâ_k^†â_l^†â_nâ_m, where â_k and â_k^† are particle annihilation and creation operators, respectively, that obey the Fermionic anti-commutation relations: i.e. {â_k,â_l^†}=δ_kl and {â_k, â_l}={â_k^†,â_l^†}=0. t_kl and v̅_klmn (≡ v_klmn-v_klnm) are usual matrix elements of a one-body kinetic energy operator and a two-body interaction, respectively, where the latter one is anti-symmetrized for convenience. In the HFB theory, quasiparticle annihilation and creation operators, β̂_μ and β̂_μ^†, respectively, are introduced via the Bogoliubov transformation of â_k and â_k^† as follows: (β̂ β̂^†) = 𝒲^†(â â^†). Here we have introduced two column vectors, β̂≡(β̂_1, β̂_2, …, β̂_M)^T and β̂^†≡(β̂_1^†, β̂_2^†, …, β̂_M^†)^T, for convenience, where M corresponds to the dimension of basis states. The 2M×2M Bogoliubov transformation matrix 𝒲 can be written as 𝒲 = (U V^* V U^*), where U and V are M× M matrices. The Bogoliubov transformation matrix is unitary, i.e. 𝒲^† 𝒲=𝒲𝒲^†=I_M, with I_M being an M-dimensional identity matrix. The latter property ensures that β̂_μ and β̂_μ^† obey the Fermionic anti-commutation relations. One can write down explicitly the quasiparticle annihilation and creation operators, respectively, as follows: β̂_μ = ∑_i( U^*_iμ â_i + V^*_iμ â_i^†), β̂_μ^† = ∑_i( U_iμ â_i^† + V_iμ â_i). Note that we use Greek indices (such as μ,ν,…) for labeling positive-energy quasiparticle states, while Roman indices (such as i,j,…) are used for labeling single-particle states, except some obvious cases. The HFB state, the trial many-body wave function for a variation, is defined as a vacuum of quasiparticles, i.e. β̂_μ|HFB>=0 for all μ. Based on the variational principle, one can derive the well-known HFB equation: (h-λ Δ -Δ^* -h^*+λ)(U_μ V_μ) = E_μ(U_μ V_μ), where h=t+Γ denotes the single-particle Hamiltonian matrix with matrices for the mean-field potential Γ and the pair potential Δ, λ is the chemical potential, and U_μ and V_μ represent the μ-th column of the U and V matrices, respectively. The (k,l) component of the mean-field potential and the pair potential are defined, respectively, as follows: Γ_kl = ∑_mnv̅_kmlnρ_nm, Δ_kl = 1/2∑_mnv̅_klmnκ_mn. Here the one-body density matrix, ρ, and the pairing tensor, κ, are defined as ρ_kl = <â_l^†â_k> = (V^*V^T)_kl, κ_kl = <â_lâ_k> = (V^*U^T)_kl, where the brackets, <⋯>, represent an expectation value in the HFB state. This is the usual matrix representation of the HFB theory. By definition, ρ is hermitian (ρ^†=ρ) and κ is skew symmetric (κ^T=-κ). 
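As a concrete numerical illustration of the matrix structure above, the following self-contained Python sketch builds the HFB matrix for a toy single-particle space with doubly degenerate levels coupled by a skew-symmetric constant pairing field, diagonalizes it, and reconstructs ρ = V^*V^T and κ = V^*U^T from the positive-energy eigenvectors. The level spacing, pairing strength, and chemical potential are arbitrary illustrative numbers, not a nuclear EDF, and no self-consistency is attempted.

```python
# Toy illustration of the HFB matrix structure and the densities defined above.
# The single-particle energies and the constant pairing field are arbitrary
# illustrative inputs; no Skyrme functional or self-consistency is involved.
import numpy as np

npairs = 3                                   # number of time-reversed doublets
eps_pair = np.array([-3.0, 0.0, 3.0])        # toy single-particle energies (MeV)
lam, delta = 0.0, 1.0                        # chemical potential and pairing gap

M = 2 * npairs
eps = np.repeat(eps_pair, 2)                 # each level is doubly degenerate
h = np.diag(eps)

# Skew-symmetric pairing tensor coupling time-reversed partners (2i, 2i+1)
Delta = np.zeros((M, M))
for i in range(npairs):
    Delta[2*i, 2*i+1] = delta
    Delta[2*i+1, 2*i] = -delta

# HFB matrix:  [[ h - lam,      Delta        ],
#               [ -Delta^*,  -(h - lam)^*    ]]
HFB = np.block([[h - lam*np.eye(M),            Delta],
                [-Delta.conj(),    -(h - lam*np.eye(M)).conj()]])

E, W = np.linalg.eigh(HFB)
pos = E > 0                                  # quasiparticle (positive-energy) states
U, V = W[:M, pos], W[M:, pos]

rho   = V.conj() @ V.T                       # one-body density matrix
kappa = V.conj() @ U.T                       # pairing tensor

print("E_qp:", np.round(E[pos], 3))          # expected: sqrt((eps-lam)^2 + delta^2)
print("tr(rho) =", round(float(np.trace(rho).real), 3))
print("rho hermitian:", np.allclose(rho, rho.conj().T),
      "| kappa skew-symmetric:", np.allclose(kappa, -kappa.T))
```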
§.§.§ Treatment of a system with certain symmetries in the coordinate-space representation Next, let us consider a case where a system possesses certain symmetries, and introduce the coordinate-space representation. The formulas given here are actually useful to formulate the superfluid band theory with the Bloch's boundary condition in the next section. When a system possesses symmetries, the Hamiltonian commutes with associated operators and there are conserved quantities with which one can classify quantum states. For instance, for a system with the spherical symmetry, the orbital angular momentum L and its projection m are conserved, being good quantum numbers. In such a case, states with different values of (L,m) are not mixed, and the Hamiltonian matrix can be arranged to have a block diagonal form. In the following we shall denote such a set of arbitrary good quantum numbers as Ω. Since single- and quasi-particle states can be classified by the set of quantum numbers Ω, we may explicitly indicate it as μ={ν Ω} and i={j Ω}. For a system with symmetries, the quasiparticle annihilation and creation operators, Eqs. (<ref>) and (<ref>), respectively, can be written as β̂_νΩ = ∑_j( U^*_jΩ,νΩ â_jΩ + V^*_jΩ̅,νΩ â_jΩ̅^†), β̂_νΩ^† = ∑_j( U_jΩ,νΩ â_jΩ^† + V_jΩ̅,νΩ â_jΩ̅). Here Ω̅ stands for the same set of quantum numbers as Ω, but any “countable” quantum numbers involved in it have opposite sign [namely, for the case of Ω=(L,m), Ω̅=(L,-m)]. It is simply because a “hole” of a state with countable quantum number(s) could be characterized like a particle which has an opposite sign for any countable quantum number(s). In this way, the quantum numbers in the left- and right-hand-side of Eqs. (<ref>) and (<ref>) are being consistent. To obtain the coordinate-space representation with the spin degree of freedom, let us introduce the field operators, ψ̂(rσ) = ∑_jΩϕ_jΩ(rσ) â_jΩ, ψ̂^†(rσ) = ∑_jΩϕ_jΩ^*(rσ) â_jΩ^†. Note that the summation is taken over all single-particle states (i={j Ω}) by definition. Here ϕ_jΩ(rσ)≡<rσ|jΩ> =<rσ|â_jΩ^†|0> denotes the single-particle wave function, where |0> is the vacuum state. The field operator creates a particle with spin σ (= or ) at a position r in the vacuum, i.e. |rσ> =ψ̂^†(rσ)|0>. Using the orthonormal properties of the single-particle wave functions, <ϕ_k|ϕ_l>≡∑_σ∫ϕ_k^*(rσ)ϕ_l(rσ)k=δ_kl, the particle annihilation and creation operators can be represented, respectively, as follows: â_jΩ = ∑_σ∫ϕ_jΩ^*(rσ)ψ̂(rσ) r, â_jΩ^† = ∑_σ∫ϕ_jΩ(rσ)ψ̂^†(rσ) r. Substituting Eqs. (<ref>) and (<ref>) into Eqs. (<ref>) and (<ref>), one finds β̂_νΩ = ∑_σ∫( u^*_νΩ(rσ)ψ̂(rσ) + v^*_νΩ(rσ)ψ̂^†(rσ) )r, β̂_νΩ^† = ∑_σ∫( u_νΩ(rσ)ψ̂^†(rσ) + v_νΩ(rσ)ψ̂(rσ))r, where u_νΩ(rσ) ≡ ∑_j U_jΩ,νΩ ϕ_jΩ(rσ), v_νΩ(rσ) ≡ ∑_j V_jΩ̅,νΩ ϕ^*_jΩ̅(rσ). Those are the coordinate-space representation of the u and v components of the quasiparticle wave functions. The quasiparticle wave functions are normalized to be ∑_σ∫[|u_νΩ(rσ)|^2 + |v_νΩ(rσ)|^2 ] r = 1. In the same way as in the matrix representation, a variational calculation leads to the corresponding coordinate-space representation of the HFB equation: ∫ dr' [ h_↑(r,r')-λ h_↑(r,r') 0 Δ(r,r'); h_↓(r,r') h_↓(r,r')-λ -Δ(r,r') 0; 0 -Δ^*(r,r') -h^*_↑(r,r')+λ -h^*_↑(r,r'); Δ^*(r,r') 0 -h^*_↓(r,r') -h^*_↓(r,r')+λ ][ u_νΩ(r'); u_νΩ(r'); v_νΩ(r'); v_νΩ(r') ] = E_νΩ[ u_νΩ(r); u_νΩ(r); v_νΩ(r); v_νΩ(r) ]. 
In the coordinate-space representation, the number and anomalous densities are defined as n(r)≡∑_σ n(rσ,rσ) and κ(r,r')≡κ(r,r'), respectively, where n(rσ,r'σ') ≡ <ψ̂^†(r'σ')ψ̂(rσ)> = ∑_iΩ_1jΩ_2νΩϕ^*_iΩ_1(r'σ')ϕ_jΩ_2(rσ) V^*_jΩ_2,νΩV^T_νΩ,iΩ_1 = ∑_νΩ v^*_νΩ(rσ)v_νΩ(r'σ'), κ(rσ,r'σ') ≡ <ψ̂(r'σ')ψ̂(rσ)> = ∑_iΩ_1jΩ_2νΩϕ_iΩ_1(r'σ')ϕ_jΩ_2(rσ) V^*_jΩ_2,νΩU^T_νΩ,iΩ_1 = ∑_μΩ v^*_μΩ(rσ)u_μΩ(r'σ'). When the single-particle Hamiltonian contains no terms that mix spin states (i.e. h_=h_=0), the HFB equation (<ref>) can be decomposed into two equations with a half of the original dimension as ∫ dr' [ h(r,r')-λ Δ(r,r'); Δ^*(r,r') -h^*(r,r')+λ ][ u_νΩ(r'); v_νΩ(r') ] = E_νΩ[ u_νΩ(r); v_νΩ(r) ], where h=h_↑=h_↓. From a diagonalization of the HFB matrix, one obtains not only the quasiparticle states, but also the quasihole states with negative eigenvalues, -E_νΩ. Thanks to this so-called quasiparticle-quasihole symmetry, the other spin component of quasiparticle wave functions, i.e. (v_νΩ^*(r), u_νΩ(r))^T for -E_νΩ, can be found in the negative energy states. §.§ Superfluid band theory with a Skyrme-type EDF §.§.§ The Bloch's boundary condition What one has to do is to apply the Bloch's boundary condition for superfluid systems. The essence of the band theory of solids is to impose the periodicity of the crystal to the wave functions of the system. According to the Floquet-Bloch theorem, it can be achieved, representing the single-particle wave functions by modulated plane waves <cit.>, ϕ_jk^(q)(rσ) = 1/√(𝒱)ϕ_jk^(q)(rσ)e^ik· r, where 𝒱 stands for the volume of a unit cell and k is the Bloch wave vector. In this section, we explicitly indicate the isospin degree of freedom by an index q, where q=n for neutrons and q=p for protons. The periodicity of the system is then encoded into the function ϕ_jk^(q)(r) as ϕ_jk^(q)(r+T) = ϕ_jk^(q)(r), where T is the lattice translation vector. We will refer to the dimensionless function ϕ_jk^(q)(r) as a Bloch wave function. If the potential is local in space, there is no correlation between unit cells and the Hamiltonian can be written in a block diagonal form. It is thus possible to regard the Bloch wave vector k as a sort of quantum numbers associated with a translational symmetry in a broad sense. That is, we can regard the Bloch wave vectors k and k̅ (=-k) as the countable quantum number Ω and Ω̅, respectively, which were introduced in Sec. <ref>. We note that the form of single-particle wave functions is the same for ϕ_jk̅^(q)(rσ), meaning that the sign of the exponent is different for ϕ_jk̅ because of the relation, k̅=-k. By substituting Eq. (<ref>) into Eqs. (<ref>) and (<ref>), we find that the quasiparticle wave functions, u_νk(rσ) and v_νk(rσ), can be written as follows: u_νk(rσ) = 1/√(𝒱)u_νk(rσ)e^ik· r, v_νk(rσ) = 1/√(𝒱)v_νk(rσ)e^ik· r, where u_νk(r+T)=u_νk(r) and v_νk(r+T)=v_νk(r) hold. Notice that the sign of the exponent is the same for both u and v components. It is an important key to formulate the superfluid band theory as shown in the subsequent sections. In the present paper, we consider 1-dimensional (1D) crystalline structure, i.e. the slab (or “lasagna”) phase of nuclear matter, where slabs are extending parallel to xy plane and are in a periodic sequence along z direction; See Fig. <ref>. In this case the lattice vector T reads T = T_xê_x + T_yê_y + a n_zê_z, where T_x and T_y are arbitrary real numbers, n_z is an integer, a denotes the period of (or distance between) the slabs, and ê_i is the unit vector along the i (=x, y, or z) direction. 
Since the single-particle wave functions along the x and y directions are solely the plane waves, the Bloch's boundary condition, Eq. (<ref>), is now reduced to ϕ^(q)_jk(z+a) = ϕ^(q)_jk(z). That is, the Bloch wave functions have 1D spatial dependence: u^(q)_jk(rσ) = 1/√(𝒱)u^(q)_νk(zσ)e^ik·r, v^(q)_jk(rσ) = 1/√(𝒱)v^(q)_νk(zσ)e^ik·r. The z component of the Bloch wave vector can be reduced within the first Brillouin zone, -π/a≤ k_z ≤π/a. In practical calculations, we discretize the first Brillouin zone into N_k_z points, i.e. Δ k_z = 2π/a1/N_k_z. This implementation corresponds to a calculation with a length ℒ≡ a N_k_z with the ordinary periodic boundary condition. Thus, the normalization condition (<ref>) now reads 𝒜 ∑_σ∫_0^ℒ[ |u_νΩ(zσ)|^2+|v_νΩ(zσ)|^2 ] z = 1, where 𝒜 stands for the normalization area such that 𝒱=ℒ𝒜. Because of the periodicity of the quasiparticle wave functions, Eq. (<ref>) is equivalent to ∑_σ∫_0^a[ |u_νΩ(zσ)|^2+|v_νΩ(zσ)|^2 ] z = a. §.§.§ Energy density functionals The main ingredient of DFT is the EDF. In this work, the EDF used is almost the same as given in the previous work <cit.>, except the introduction of a pairing functional. For completeness, here we briefly recall the ingredients, because some equations require a caution specific to the superfluid systems. The total energy per nucleon of the system is given by E_tot/A = 1/N_b∫_0^a ( ℰ_nucl(z) + ℰ_e(z) ) z, where ℰ_nucl denotes a nuclear part of energy density and ℰ_e is the electron's one. Here, N_b=N_n+ N_p [The subscript `b' stands for `baryons' (nucleons)], where N_q=∫_0^a n_q(z) z is the total number of neutrons (q=n) or protons (q=p) per unit area within a single period a, with n_q(z) being the number density of neutrons (q=n) or protons (q=p). The nuclear energy density is given as a sum of kinetic, nuclear (interaction), and pairing energy densities which are, respectively, given by ℰ_kin(z) = ∑_q=n,pħ^2/2m_qτ_q(z), ℰ_Sky(z) = ∑_t=0,1[ C^ρ_t[n_0]n^2_t(z) + C^Δρ_t n_t(z)∂_z^2 n_t(z) + C^τ_t(n_t(z)τ_t(z) - j_t^2(z)) ], ℰ_pair(z) = -∑_q = n,pΔ_q(z)κ_q^*(z) where m_q is the nucleon mass and ∂_z represents a spatial derivative with respect to the z coordinate. In this work, we develop a formalism for a widely-used Skyrme-type EDF for the nuclear EDF. The kinetic and momentum densities in ℰ_Sky (<ref>) are formally defined, respectively, by τ_q(r) = (∇·∇') n_q(r,r')|_r=r', j_q(r) = Im[(∇-∇') n_q(r,r')]|_r=r'. The time-odd momentum densities vanish in static calculations, while they are, in general, finite in a dynamic situation. The subscript t in Eq. (<ref>) specifies isoscalar (t=0) and isovector (t=1) densities, which are defined, for the number density for instance, by n_0(z)=n_n(z)+n_p(z) and n_1(z)=n_n(z)-n_p(z), respectively (the subscript 0 is often omitted). The detailed expressions of the coefficients by means of Skyrme force parameters can be found in, e.g., Eq. (A1) in Ref. <cit.> (where symbols A_t^X were used instead of C_t^X here). C_t^ρ[n_0] depends on the local number density as C_t^ρ[n_0]=C_t^ρ + C_t,D^ρ n_0^γ(z) as in Ref. <cit.>. To evaluate the densities Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we need to take summations over all positive-energy quasiparticle states μ={ν k}. For the slab phase under study, the summation over k_x and k_y can be replaced with integrals, ∑_k_x,k_y→∫𝒜 k_x k_y/(2π)^2 =∫𝒜 k_∥ k_∥/(2π), where k_∥≡(k_x^2+k_y^2)^1/2 is the magnitude of the Bloch wave vector parallel to the slabs. 
Then, the various densities can be written as follows: n_q(z) = ∑_ν k_zσ∫k_∥ k_∥/2π L|v^(q)_νk(zσ)|^2, τ_q(z) = ∑_ν k_zσ∫k_∥ k_∥/2π L[ k_∥^2 | v^(q)_νk(zσ)|^2 + | (∂_z +ik_z)v^(q)_νk(zσ)|^2 ], j_q(z) = -∑_ν k_zσ∫k_∥ k_∥/2π L ×[ v^(q)*_νk(zσ)(∂_z +ik_z) v^(q)_νk(zσ) ]ê_z, κ_q(z) = ∑_ν k_z∫k_∥ k_∥/2π L v^(q)*_νk(z)u^(q)_νk(z). Note that a minus sign appears in the current density (<ref>), due to the definition (<ref>) that relates v_νΩ (rσ) and ϕ_jΩ^*(rσ). The pairing part of EDF, ℰ_pair (<ref>), contains the pairing field Δ_q(z) which is given by Δ_q(z) = -g_q,eff(z)κ_q(z), where g_q,eff(z) in Eq. (<ref>) is an effective pairing coupling constant <cit.>, which can be calculated as follows: 1/g_q,eff(z) = 1/g_0 - m^⊕_q(z)/4π^2ħ^2π/aK, where g_0 is the bare coupling constant and K is a numerical value that corresponds to the integral, K = 12/π∫^4/π_0 ln(1+1/cos^2θ) θ = 2.442749607806335…. We set a bare coupling constant to g_0=200 MeV, in the present work. The Coulomb part of EDF reads ℰ_Coul^(p)(z) = 1/2n_p(z)V_Coul(z) - 3e^2/4(3/π)^1/3n_p^4/3(z), where V_Coul denotes the Coulomb potential for protons and e is the elementary charge. The Slater approximation has been adopted for the Coulomb exchange term. For the Coulomb potential, we solve the Poisson equation, ^2/ z^2V_Coul(z) = -e^2/ε_0n_ch(r), where ε_0 is the vacuum permittivity. n_ch(z) ≡ n_p(z) - n_e denotes the charge density, neglecting the charge form factor of protons. Electrons are assumed to be uniformly distributed with the density n_e = n̅_p, where n̅_q=1/a∫_0^an_z(z) z is the average nucleon number density per unit area. The Coulomb potential is subjected to the charge neutrality condition, ∫_0^a V_Coul(z) z=0. For the electron's EDF, we use formulas for a relativistic electron gas. For explicit expressions, We refer readers to Refs. <cit.>. §.§.§ Skyrme (TD)SLDA equations for the slab phase From an appropriate functional derivative, one can derive the corresponding single-particle Hamiltonian which enters the (TD)SLDA equation. Because our working EDFs are local in space, the resulting equations become also local ones. In the same way as normal self-consistent band theory <cit.>, the point is that an operation of spatial derivative on a quasiparticle wave function (<ref>) generates an additional k-dependent term as ∇u_νΩ^(q)(rσ) = 1/√(𝒱) e^ik· r(∂_z+ik) u_νΩ^(q)(zσ), where the same is true also for the v component (<ref>). Substituting Eqs. (<ref>) and (<ref>) into a localized version of Eq. (<ref>), and factor out the common function e^ik· r, we obtain (h^(q)(z)+h^(q)_k(z)-λ_q Δ_q(z) Δ_q^*(z) -h^(q)*(z)-h^(q)_k(z)+λ_q) ×(u^(q)_νk(z) v^(q)_νk(z)) = E_νk(u^(q)_νk(z) v^(q)_νk(z)). It should be noted that, while we deal with a three-dimensional system, the equations to be solved are essentially 1D ones, significantly reducing the computational cost. The single-particle Hamiltonian ĥ^(q)(z) is given by ĥ^(q)(z) = -÷ħ^2/2m^⊕_q(z) + U^(q)(z) + 1/2i[ ÷I^(q)(z) + I^(q)(z)·]. Here m^⊕_q(z) represents “microscopic” effective mass which should be distinguished from “macroscopic” one discussed later. The microscopic effective mass is defined as ħ^2/2m^⊕_q(z) = ħ^2/2m_q + ∑_q^' = n, pC^τ(q)_q^'n_q^'(z). 
U^(q)(z) and I^(q)(z) are time-even and time-odd mean field potentials, respectively, defined as U^(q)(z) = ∑_q^' = n, p[ 2C^ρ(q)_q^'n_q^'(z) + 2C^∇ρ(q)_q^'∂^2_z n_q^'(z) + C^τ(q)_q^'τ_q^'(z) + 2n^α(z)C^ρ(q)_q^' Dn_q^'(z) ] + α n^α-1(z)∑_t = 0,1C^ρ_tDn^2_t(z) + U_Coul(z)δ_qp + ∑_q^' = n,p∂ g_q^',eff(n)/∂ n_q(z)|κ_q^'(z)|^2, I^(q)(z) = -2 ∑_q^' = n, pC^τ(q)_q^'j_q^'(z), where U_Coul(z) = V_Coul(z) - e^2 ( 3/π)^1/3n_p^1/3(z). Following the previous work <cit.>, we have defined a shorthand notation, C_n^X(q) ≡ C_0^X - η_q C_1^X, C_p^X(q) ≡ C_0^X + η_q C_1^X, where X stands for the superscript of the coefficients, i.e.ρ, τ, or Δρ, and η_q=+1 (-1) for protons (neutrons). We note that the time-odd potential vanishes in a static situation. Note that an additional contribution arises in U^(q)(z) from the density dependence of the effective pairing coupling constant. The single-particle Hamiltonian which depends on the Bloch wave vector, h^(q)_k(z), can be represented as follows: h^(q)_k = ħ^2k^2/2m^⊕_q(z) + ħk·v̂^(q)(z), where v̂^(q)(z) is the so-called velocity operator, v̂^(q)(z) = 1/iħ[ r, ĥ^(q)(z) ] = -iħ( 1/2m^⊕_q(z) + 1/2m^⊕_q(z)) + 1/ħI_q(z). As will be described in Sec. <ref>, we apply the real-time method, proposed in Ref. <cit.>, where we extract the collective mass of a slab and that of protons from a dynamic response of the system to an external force. The external force can be introduced by means of a time-dependent, uniform vector potential A_z(t) that couples only with protons which are localized inside slabs. Time evolution of the system is described by the TDSLDA equation in the velocity gauge <cit.>: (h^(q)(z,t)+h^(q)_k(t)(z,t) Δ_q(z,t) Δ_q^*(z,t) -h^(q)*(z,t)-h^(q)_k(t)(z,t)) ×(u^'(q)_νk(z,t) v^'(q)_νk(z,t)) = iħ∂/∂ t(u^'(q)_νk(z,t) v^'(q)_νk(z,t)), where the prime on the quasiparticle wave functions indicates that they are in the velocity gauge, u_νk^'(q)(z,t) ≡exp[ -ie/ħ cA_z(t)z ] u_νk^(q)(z,t), and the same is true for the v components (<ref>). Notice that the Bloch wave vector in the k-dependent single-particle Hamiltonian (<ref>) in Eq. (<ref>) is shifted as a function of time, according to the following relation: k(t) = k + e/ħ cA_z(t)ê_z. All densities can be expressed in terms of the quasiparticle wave functions in the velocity gauge, replacing u_νk^(q)→u_νk^'(q), v_νk^(q)→v_νk^'(q), and k_z→ k_z(t). More detailed explanations on the expressions in the velocity gauge can be found in Ref. <cit.>. § COMPUTATIONAL DETAILS We have newly developed a parallel computational code that reproduces all the results presented in Ref. <cit.> and extended it to include superfluidity. All the calculations were carried out with Skyrme SLy4 EDF <cit.> as in Ref. <cit.>. We consider a situation where nuclear slabs extend along xy directions forming a perfect crystalline structure along z axis with a period a. We discretize the z coordinate into a uniform grid with spacing Δ z to represent qusiparticle wave functions. The spectral method with Fourier transformations are used for the first and second spacial derivatives. The Poisson equation for the Coulomb potential is solved with the fast Fourier transform algorithm. We discretize the first Brillouin zone-π/a ≤ k_z ≤π/a into N_k_z points. For the Bloch wave vector parallel to the slabs, k_∥, we introduce a cutoff k^max_∥ and discretize it with a Δ k_∥ step. For the calculations presented in this paper, we find that Δ z = 0.5 fm, Δ k_∥ = 0.01 fm^-1, k^max_∥ = 1.5 fm^-1, and N_k_z = 80 provides convergent results. 
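The spectral treatment of the derivatives and of the Poisson equation mentioned above can be illustrated with a short Python sketch: the operator (∂_z + ik_z) acts on the periodic Bloch amplitude via FFT, and the Coulomb potential follows from dividing the Fourier components of the charge density by k^2, with the k=0 component set to zero to enforce the charge-neutrality convention. The grid parameters and test fields below are illustrative, and units are suppressed.

```python
# Sketch of the FFT machinery described above: spectral (d/dz + i k_z) acting on
# a periodic Bloch amplitude, and an FFT solution of the 1D Poisson equation.
# Grid parameters and the test fields are illustrative only.
import numpy as np

a, dz = 30.0, 0.5                        # slab period and grid spacing (fm)
Nz = int(a / dz)
z = np.arange(Nz) * dz
kz_grid = 2.0 * np.pi * np.fft.fftfreq(Nz, d=dz)   # spectral wave numbers

def bloch_derivative(u, k_bloch):
    """Return (d/dz + i*k_bloch) u for a periodic amplitude u(z)."""
    du = np.fft.ifft(1j * kz_grid * np.fft.fft(u))
    return du + 1j * k_bloch * u

def solve_poisson(rho_ch, e2_over_eps0=1.0):
    """Solve d^2 V/dz^2 = -(e^2/eps0) * rho_ch with periodic boundary conditions.
    Setting the k=0 component to zero enforces a vanishing average of V,
    i.e. the charge-neutrality convention used in the text."""
    rho_k = np.fft.fft(rho_ch)
    V_k = np.zeros_like(rho_k)
    V_k[1:] = e2_over_eps0 * rho_k[1:] / kz_grid[1:]**2
    return np.fft.ifft(V_k).real

# quick consistency check with an analytic periodic test function
u = np.exp(1j * 2*np.pi*z/a)             # one Fourier mode of the Bloch amplitude
k_bloch = np.pi / a                      # a point in the first Brillouin zone
exact = 1j * (2*np.pi/a + k_bloch) * u
print(np.allclose(bloch_derivative(u, k_bloch), exact))
```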
For time evolution, we use the eighth-order Taylor expansion method with a single predictor-corrector step with Δ t = 0.1 fm/c. With the computational parameters given above, we have confirmed that the total number of nucleons and the total energy per nucleon are conserved with 10^-6 and 10^-10 accuracy, respectively, within the simulation time of 4,000 fm/c. For the extraction of the collective mass of a slab, we dynamically introduce an external potential for protons, E_z(t) = S(α,w,t)E_z, as a function of time t, where S(α,w,t) = 1/2 + 1/2tanh[αtan(π t/w - π/2)] is a switching function which varies smoothly from 0 to 1 within an interval t=[0,w]. In this way we can avoid unnecessary excitations of the system. In the present paper, α=1 and w=2000 fm/c were used and calculations were continued up to t=4000 fm/c. Roughly speaking, the slab period a under study is about a≈30 fm and, thus, the number of grid points along the z coordinate is N_z≈60. In such a case, the total number of quasiparticle wave functions that need to be solved in the current computational setup is estimated to be N_z× N_k_z× N_k_∥×2 (isospin)×2 (u and v) = 60× 80× 150× 4 = 2,880,000. That is, although 1D equations look easily tractable, one has to deal with millions of complex, non-linear, partial differential equations for the quasiparticle wave functions. Note that if the single-particle Hamiltonian involves a term that mixes spins, the number increases by a factor of two, because of an explicit treatment of the spin degree of freedom. Our code is parallelized with respect to the Bloch wave number k_z using the message passing interface (MPI). § RESULTS AND DISCUSSION §.§ On the convergence of self-consistent calculations To solve the static SLDA equation, we performed iterative diagonalizations of the Hamiltonian matrix. At every step m, the chemical potentials for neutrons and protons λ_q were adjusted to provide correct particle numbers as λ^(m+1)_q = λ_q^(m) - α_λ (N_q^(m+1) - N^(0)_q) with α_λ = 50, where N_q^(0) denotes the initial, preset nucleon number. From the calculations, we realized that use of the modified Broyden's method (see Ref. <cit.> and references therein) is crucial to obtain a convergent result within a reasonable computational time. The modified Broyden mixing was applied to the mean field potentials and the chemical potentials so that the unitarity of the Hamiltonian matrix is always preserved <cit.>. The Broyden vector thus reads V={M_n(𝐫),M_p(𝐫),U_n(𝐫), U_p(𝐫), Re[Δ_n], Im[Δ_n],Re[Δ_p], Im[Δ_p],μ_n,μ_p}, i.e., with dimension 8N_z+2. The modified Broyden's method also contains a parameter α that controls the rate of mixing, which we set to α=0.1. As a typical example, we show in Fig. <ref> the change in total energy per nucleon, Δ E≡|(E_tot^(m)-E_tot^(m-1))/A|, as a function of the number of diagonalizations m for n_b=0.05 fm^-3 under the β equilibrium condition. A result obtained with a simple linear mixing is also presented. In the linear mixing method, potentials were updated simply according to X^(m+1)=(1-α)X^(m) + α X'^(m) with α=0.1, where X'^(m) stands here for a tentative potential obtained after the mth diagonalization of the Hamiltonian matrix. From the figure, one finds that the linear mixing may be good for the first 10–20 iterations, but its convergence is very slow, failing to reach Δ E<10^-6 MeV even after 200 diagonalizations. In contrast, we find good convergent behavior with the modified Broyden's method, where the energy change decreases exponentially down to 10^-14 MeV.
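Schematically, the self-consistency iteration described above, with linear mixing of the potentials and the feedback on the chemical potentials, can be written as in the sketch below. The routines solve_slda and particle_number are placeholders for the diagonalization and density-integration steps (they are not part of our code's actual interface); α=0.1 and α_λ=50 are the values quoted above, and in the production calculations the linear update is replaced by the modified Broyden update of the combined potential/chemical-potential vector.

```python
# Schematic self-consistency loop with linear potential mixing and the
# chemical-potential feedback lambda^(m+1) = lambda^(m) - alpha_lam*(N^(m+1)-N^(0)).
# solve_slda() and particle_number() are placeholders for the actual
# diagonalization and density integration; only the iteration logic is shown.

def scf_loop(solve_slda, particle_number, pot0, lam0, n_target,
             alpha=0.1, alpha_lam=50.0, tol=1e-10, max_iter=200):
    pot, lam = pot0, lam0
    e_prev = None
    for m in range(max_iter):
        pot_new, e_tot = solve_slda(pot, lam)      # one diagonalization of the Hamiltonian
        n_new = particle_number(pot_new, lam)
        # linear mixing of the potentials and feedback on the chemical potential
        pot = (1.0 - alpha) * pot + alpha * pot_new
        lam = lam - alpha_lam * (n_new - n_target)
        if e_prev is not None and abs(e_tot - e_prev) < tol:
            return pot, lam, e_tot, m
        e_prev = e_tot
    return pot, lam, e_tot, max_iter
```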
In the present paper, we set a convergence criterion as Δ E<10^-10 MeV. §.§ On the optimal slab period In the self-consistent band theory calculations, the slab period a is linked to the size of the computational region, N_zΔ z. Therefore, to figure out the optimal slab period, we need to repeat static calculations by changing the number of lattice points, N_z. Figure <ref> shows total energy (i.e., the sum of nuclear and electron parts) per nucleon as a function of the slab period a for n_b=0.04, 0.05, 0.06, and 0.07 fm^-3 under the β equilibrium condition. The results of superfluid band theory calculations are shown by solid circles connected with solid lines, while those of normal band theory (without pairing correlations) are shown by open circles connected with dotted lines. From the figure, we find that the total energy is always lower with superfluidity, gaining energy through the pairing correlations, as expected for the inner crust of neutron stars. In addition, a parabolic behavior is visible, which is associated with the balance between Coulomb and nuclear interactions. While too small a slab period is energetically unfavorable due to the Coulomb repulsion, too large a value results in a loss of nuclear attraction. In the figure, the optimum slab period a that minimizes the total energy is indicated by an arrow. It is visible from Figs. <ref>(c) and (d) that inclusion of superfluidity could slightly affect the period a. §.§ Static properties and band structure The results of fully self-consistent superfluid band calculations are summarized in Table <ref>. The proton fraction, Y_p, is determined by the β equilibrium condition and it takes values of around 0.033–0.035. The optimal slab period a decreases as the baryon number density increases. In Fig. <ref>, we show a typical density distribution (a) and the absolute value of the pairing field (b) for n_b=0.05 fm^-3 in the β equilibrium. Because of the small proton fraction Y_p=0.033, there is a substantial portion of dripped neutrons outside of the slab, whose density distribution looks quite diffusive (or "melting," close to the uniform nuclear matter). This characteristic is more pronounced in the case of the distribution of the pairing field. The absolute value of the pairing field is much larger for neutrons, but one can see that protons are also superconducting although Δ_p is small. Since a substantial amount of neutrons are dripped outside of the slab, we can expect the formation of band structure. Now, the question is: how does superfluidity affect the band structure? Figure <ref> shows the obtained quasiparticle energies E_νk^(q) for n_b=0.05 fm^-3 in the β equilibrium, as an illustrative example. In Fig. <ref>(a), the results are plotted as a function of the Bloch wave number k_z within the first Brillouin zone, -π/a≤ k_z≤π/a, with k_∥=0, while those are plotted as a function of k_∥ in Fig. <ref>(b). One should keep in mind that quasiparticle energies are related to single-particle ones in the canonical basis as E_μ = ±√((ε_μ-λ)^2 + Δ^2). From Fig. <ref>(a), deep-hole states which are within the potential well, ε_μ<U_n^0, can be seen as horizontal lines above 30 MeV. In the figure, one can see that there are many other lines showing k_z dependence, which are associated with dripped neutrons that extend spatially. In Fig. <ref>(b), on the other hand, we find that there are parabolic curves, some of which are convex upward while the rest are the opposite.
Those curves that are convex upward originate from states below the chemical potential. Clearly, quasiparticle energies exhibit a complex dependence on the Bloch wave vector k, which we call the band structure. §.§ Quasiparticle Resonances An intriguing phenomenon characteristic of superfluid systems is the occurrence of so-called quasiparticle resonances. Usually, quasiparticle resonances are studied in the context of neutron scattering in vacuum. It is a resonance associated with pairing correlations, where an incoming neutron deposits part of its kinetic energy into the target nucleus, inducing a particle-hole excitation of a bound neutron, and the excited neutron and the incoming one form a Cooper pair which behaves as a resonance. Here we show, within the fully self-consistent superfluid band theory calculations, that quasiparticle resonances are present even in the inner crust of neutron stars, where superfluid neutrons permeate crystalline nuclear matter. An intuitive way to distinguish quasiparticle resonances from the other states is to analyze the occupation probabilities as a function of single-particle energy. However, what we obtain as a solution of the SLDA equation is a set of quasiparticle energies, not the single-particle ones, and one should evaluate the latter in an appropriate manner. In the present work, instead of introducing the canonical basis, we use a simpler method. As an effective way to link quasiparticle energies with single-particle ones, we adopt the following procedure: * We solve the SLDA equation self-consistently and obtain quasiparticle energies E_νk^(q) as well as densities, say, ρ_g.s., and the corresponding single-particle Hamiltonian h_g.s.^(q)≡ h^(q)[ρ_g.s.]+h_k^(q)[ρ_g.s.]. * Next, we diagonalize the single-particle Hamiltonian h_g.s.^(q) and obtain single-particle energies, say, ε̆_ν'k^(q). Note that we put a breve accent to indicate that they are not exactly equal to the true single-particle energies, ε_νk^(q). The indices {ν'k} label the single-particle energies ε̆_ν'k^(q) in ascending order. * We then evaluate the corresponding quasiparticle energies using the relation Ĕ_ν'k^(q) = √((ε̆_ν'k^(q)-λ_q)^2+Δ_q^2), where Δ_q=1/N_q∫_0^a |Δ^(q)(z)|n_q(z) dz represents the average absolute value of the pairing field. * We reorder the obtained quasiparticle energies, Ĕ_ν'k^(q) (<ref>), in ascending order, Ĕ_ν'k^(q)→Ĕ_νk^(q), and store the correspondence between the indices, {ν'k}⇔{νk}. * Based on the correspondences ε_ν'k^(q) ≃ ε̆_ν'k^(q) ⇔ Ĕ_νk^(q) ≃ E_νk^(q), we regard the single-particle energies ε_ν'k^(q) as associated with the states having occupation probabilities n_νk^(q) = 1/a∫_0^a |v_νk^(q)(z)|^2 dz. The estimated quasiparticle energies, Ĕ_νk^(q) (<ref>), may not exactly be equal to the true ones, E_νk^(q), but in most cases we have found a good correspondence, Ĕ_νk^(q)≃ E_νk^(q), with a correct ordering of the quasiparticle energies. In Fig. <ref>, we show neutron occupation probabilities n_νk^(n) as a function of the single-particle energies ε_ν'k^(n), calculated for n_b=0.05 fm^-3 in the β equilibrium, as a typical example. Since single-particle energies are the same for ± k_z, results are plotted for half of the first Brillouin zone, 0≤ k_z≤π/a. The plot is restricted to k_∥=0 in Fig. <ref>(a) for better visibility, while occupation probabilities for all states within a plotting range of -20 MeV≤ε_ν'k^(n)≤40 MeV are presented in Fig. <ref>(b).
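The mapping used in the procedure above can be illustrated by a minimal sketch. Everything below is schematic: the single-particle energies, chemical potential, and average pairing gap are made-up numbers, and the BCS-like occupations merely stand in for the true occupations obtained from the |v_νk^(q)|^2 integrals of the self-consistent calculation.

import numpy as np

def map_sp_to_qp(eps_sp, lam, delta_avg):
    # Steps 3 and 4 of the procedure: estimate quasiparticle energies
    # E ~ sqrt((eps - lambda)^2 + Delta^2) and return them in ascending order
    # together with the corresponding index permutation.
    e_qp = np.sqrt((eps_sp - lam) ** 2 + delta_avg ** 2)
    order = np.argsort(e_qp)
    return e_qp[order], order

# Illustrative single-particle energies (MeV) around a chemical potential of
# 10.8 MeV with an average neutron pairing gap of about 1 MeV.
eps_sp = np.array([-25.0, -5.0, 9.5, 10.5, 11.2, 14.0, 25.0])
lam, delta_avg = 10.8, 1.0

# BCS-like occupations n = (1 - (eps - lambda)/E)/2, a stand-in for the true
# occupation probabilities n_{nu k}.
occ = 0.5 * (1.0 - (eps_sp - lam) / np.sqrt((eps_sp - lam) ** 2 + delta_avg ** 2))

e_qp_sorted, order = map_sp_to_qp(eps_sp, lam, delta_avg)
for e_qp, i in zip(e_qp_sorted, order):
    print(f"eps = {eps_sp[i]:7.2f} MeV   E_qp = {e_qp:6.2f} MeV   occupation = {occ[i]:.3f}")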
First, let us focus on the k_∥=0 case, shown in Fig. <ref>(a). From the figure, we see that occupation probabilities globally follow a Fermi-Dirac-type distribution, indicating that the above mentioned procedure works quite well. In addition to that, we find that there appear several irregular dips and peaks below and above the chemical potential λ_n≃10.8 MeV, respectively. Clearly, the peaks embedded in the continuum, paired up with dips in bound states, manifest the expected characteristic of quasiparticle resonances, that is, they form Cooper pairs to gain pairing energy. To serve another look at its behavior, we show in Fig. <ref>(a) single-particle energies of neutrons ε_ν'k^(n) as a function of the Bloch wave number k_z in the first Brillouin zone, -π/a ≤ k_z≤π/a, with k_∥=0. In the figure, occupation probabilities are indicated by the opacity of the data symbols. From the figure, we find that the single-particle energies nicely reproduce the band structure. On top of that, we can clearly see that there are four states above the chemical potential, which sustain noticeable occupation probabilities. A closer look at the results reveals that hole-like states with relatively small occupation probabilities are present at the same k_z below the chemical potential, meaning that Cooper pairs are formed between states with the same Bloch wave vector k. We note that these quasiparticle resonances could be observed, thanks to the band theory calculations, i.e., they are absent for the k_z=0 case, in this example. We consider that the essence is a better treatment of the continuum states—the resonances would not be resolved if there were no pairs of states with the same |ε_αk^(n)-λ_n|. In addition, we plot in Fig. <ref>(b) the single-particle energies as a function of k_∥ as well. From these figures, we find that pairing is simply associated with single-particle states around the chemical potential. If there were a large band gap and the chemical potential were located in between the gap, pairing would have been suppressed. In the slab phase under study, however, band gaps are only a few to tens of keV, which are much smaller than the average absolute value of the pairing field, Δ_n≈1 MeV. Thus, pairing properties are insensitive to the band structure, in the present case. In Fig. <ref>(b), we show occupation probabilities for all states (k_∥≥0) as a function of single particle energies. Intriguingly, we find that there are a number of quasiparticle resonances for a range of single-particle energies, as can be seen from the figure. Thus, on top of the usual entrainment effects, where part of dripped neutrons are effectively immobilized by the periodic potential, there could be an additional contribution of quasiparticle resonances which are detectable only in microscopic superfluid calculations. To estimate the impact of quasiparticle resonances on the entrainment phenomenon, we calculate the total number of resonating neutrons per unit area, N_n^res. N_n^res is calculated by integrating densities of states above the chemical potential, but with relatively large occupation probabilities greater than 0.1, that may contribute to the quasiparticle resonances. To exclude the states having n_νk^(n)>0.1 in the tail of the global Fermi-Dirac-type distribution, only those exhibiting sudden changes of occupation probabilities in neighboring energies are regarded as quasiparticle resonances. The results for various densities, n_b=0.04, 0.05, 0.06, and 0.07 fm^-3, are summarized in Table. <ref>. 
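The counting criterion just described can be sketched as follows. This is only an illustration with hypothetical input arrays: the occupation threshold of 0.1 is taken from the text, the "sudden change" is implemented here as a simple jump with respect to both energetic neighbours, and the k-space weights needed to convert the count into N_n^res per unit area are omitted.

import numpy as np

def resonance_indices(eps, occ, lam, occ_min=0.1, jump=0.05):
    # Flag states above the chemical potential with occupation > occ_min whose
    # occupation jumps relative to both energetic neighbours.
    order = np.argsort(eps)
    eps_s, occ_s = eps[order], occ[order]
    flags = np.zeros(eps_s.size, dtype=bool)
    for i in range(1, eps_s.size - 1):
        flags[i] = (eps_s[i] > lam and occ_s[i] > occ_min
                    and occ_s[i] - occ_s[i - 1] > jump
                    and occ_s[i] - occ_s[i + 1] > jump)
    return order[flags]

# Hypothetical occupations versus single-particle energy (MeV): a smooth
# Fermi-Dirac-like background with one artificial resonance near 14 MeV.
eps = np.linspace(-20.0, 40.0, 121)
lam = 10.8
occ = 1.0 / (1.0 + np.exp((eps - lam) / 1.0))
occ[np.argmin(np.abs(eps - 14.0))] += 0.3   # inject a resonance-like peak
print("resonance candidates at eps ~", eps[resonance_indices(eps, occ, lam)])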
In the 5th and 6th columns of the table, we show the total number of resonating neutrons per unit area within a single slab period a, N_n^res, and its ratio to the number of energetically-bound neutrons, N_n^res/N_n^e.b., respectively. From the results, we find that the number of resonating neutrons is only about 1% or less of the number of energetically-bound neutrons in the system. We also find that the number of resonating neutrons decreases as the baryon number density increases. The latter observation can be explained by looking at the change of the mean-field potential. In the 2nd, 3rd, and 4th columns of the table, the minimum and the maximum values of the mean-field potential, U_n^min and U_n^max, respectively, and the chemical potential, λ_n, are presented. Because more and more neutrons are dripped out of the slabs as the baryon number density increases, the maximum value of the mean-field potential decreases substantially, while the minimum value is not so much affected. This means that the potential well becomes shallower for higher baryon number densities and, as a result, there are fewer bound orbitals that can contribute to quasiparticle resonances. §.§ Anti-entrainment effects In this section, we present results of time-dependent simulations to quantify the entrainment effects in the slab phase of neutron-star matter in the presence of superfluidity. We employ the real-time method proposed in Ref. <cit.>, which enables us to extract the collective mass of a slab from the dynamic response of the system to an external force. Here, we briefly recall the method. In the inner crust of neutron stars, where dripped superfluid neutrons permeate a crystalline structure of nuclear clusters, it is not at all obvious how to distinguish bound and unbound (free) neutrons. One may naively subtract a uniform background neutron density to define the neutrons that belong to each cluster, or compare the depth of the mean-field potential with the single-particle energies to count the number of neutrons within the potential well <cit.>. In real situations, however, due to the self-organizing character of neutron-star matter, i.e., the absence of a clear-cut separation between the clusters and the surrounding neutrons, such prescriptions serve only as naive estimates. In addition, part of the dripped neutrons is immobilized, or trapped, by the periodic structure through Bragg scattering. This is the so-called entrainment effect, and the amount of the “effectively bound” neutrons is a matter of debate. We note, moreover, that superfluidity may also affect the collective mass of nuclear clusters <cit.>. It is thus a quite complicated problem and only fully self-consistent superfluid band calculations can provide conclusive results. The real-time method, proposed in Ref. <cit.>, offers an intuitive approach to extract the collective mass of a nuclear cluster immersed in neutron superfluid. Here we briefly summarize the essence of the approach, referring readers to Ref. <cit.> for details. In this approach, we exert an external force F_ext on the protons, which are well localized inside the cluster. The protons, together with the effectively bound (bound plus entrained) neutrons, start moving in the direction of the external force. If the force is constant and there are no excitations other than the collective translational motion, the cluster exhibits a constant-acceleration motion, F_ext=Ṗ_tot = M_cluster a_p.
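The relation F_ext=Ṗ_tot=M_cluster a_p already indicates how the collective mass is extracted in practice: one records the proton center-of-mass position and the linear momenta during the time evolution and reads the mass off their late-time behavior (the precise definitions used in the actual calculation are given in the next paragraph). The following sketch demonstrates the idea on synthetic time series with an assumed "true" mass in arbitrary units; it is not output of the TDSLDA solver.

import numpy as np

def collective_mass(time, z_p, p_tot):
    # a_p = d^2 Z_p/dt^2 and M = dP_tot/dt / a_p, averaged over the late-time
    # window where the external force (and hence the acceleration) is constant.
    a_p = np.gradient(np.gradient(z_p, time), time)
    pdot = np.gradient(p_tot, time)
    mask = time > 0.75 * time[-1]
    return np.mean(pdot[mask]) / np.mean(a_p[mask])

# Synthetic demonstration: a slab of assumed "true" mass M = 50 undergoing
# constant acceleration after the ramp of the external potential.
t = np.linspace(0.0, 4000.0, 2001)          # fm/c
a_true, M_true = 2.0e-6, 50.0               # arbitrary units
z_p = 14.5 + 0.5 * a_true * np.clip(t - 2000.0, 0.0, None) ** 2
p_tot = M_true * a_true * np.clip(t - 2000.0, 0.0, None)
print(f"recovered M = {collective_mass(t, z_p, p_tot):.1f} (true {M_true})")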
Since the protons are localized in space, their center-of-mass position, Z_p(t)=1/N_p∫_0^a z n_p(z,t) dz, is a well-defined quantity. Thus, we can numerically compute the acceleration of the cluster, a_p(t)= d^2Z_p/dt^2. Adopting a classical relation, we can evaluate the collective mass of the cluster as well as that of the protons as follows: M_cluster = Ṗ_tot/a_p, M_p = Ṗ_p/a_p, where P_tot=P_p+P_n is the total linear momentum and P_p is the linear momentum of the protons, P_p(t) = ħ∫_0^a j_z^(p)(z,t) dz. Since we have both the collective mass of the cluster and that of the protons, we can define the collective mass of the effectively bound neutrons as M_n^eff.bound = M_cluster-M_p. The number density of conduction neutrons, i.e., of those neutrons that move freely, can be quantified as follows: n_n^c = (N_n-N_n^eff.bound)/a, where N_n^eff.bound=M_n^eff.bound/m_n,bg^⊕ is the average number of the effectively bound neutrons. In this way, we can extract the conduction neutron number density from the dynamic response of the system to the external potential. In Fig. <ref>, we show the obtained acceleration of the protons, a_p(t), in panel (a), the time derivative of the total linear momentum, Ṗ_tot(t), in panel (b), and that of the proton linear momentum, Ṗ_p(t), in panel (c). Results obtained with superfluidity are shown by solid lines, while those obtained without superfluidity are shown by dotted lines. From t=0 to 2000 fm/c, we smoothly switch on the external potential. For t>2000 fm/c, the external potential is kept constant and, as a result, we observe that all these quantities also become almost constant, as expected. As shown in Fig. <ref>(b), the classical relation F_ext=Ṗ_tot holds nicely. From Figs. <ref>(a) and (c), we find that both a_p and Ṗ_p tend to be larger when we include superfluidity. This means that the collective mass of the slab is smaller when superfluidity is included, which indicates that fewer neutrons are entrained via band-structure effects. Using the quantities presented in Fig. <ref>, the collective mass of the slab (<ref>) and that of the protons (<ref>) can be deduced. The results are shown in Fig. <ref>, where the ratio between M_slab and the mass of energetically-bound neutrons M_e.b. is shown in panel (a), and the ratio between M_p and the total mass of protons m_pN_p is shown in panel (b). From Fig. <ref>(b), we see that the collective mass of the protons M_p is slightly reduced, by about 5–9%. This reduction was already found in the previous work without superfluidity <cit.> and can be explained by the density-dependent microscopic effective mass. Namely, the protons behave as if they had a mass of m_p,b.g.^⊕≡ m_p^⊕[n_n,b.g.], where n_n,b.g. is the background neutron density. In stark contrast, in Fig. <ref>(a), we find that the collective mass of the slab is significantly reduced, by about 57.5–82.5%, relative to the naive estimate based on the neutrons bound within the potential well. Apparently, this significant reduction cannot be explained solely by the density-dependent microscopic effective mass of neutrons. Note that if dripped neutrons were actually entrained by the slabs via band-structure effects, the collective mass of the slab would increase. Therefore, this counter-intuitive phenomenon is called the “anti-entrainment” effect <cit.>. As discussed in the previous work <cit.>, the cause of the reduction can be found in the time evolution of the current densities. In Fig.
<ref>(a), we show the neutron current densities obtained from the time-dependent superfluid band calculations as a function of the z coordinate for the n_b=0.05 fm^-3 case. The results are shown for four representative times, t=1000 fm/c (solid line), 2000 fm/c (dashed line), 3000 fm/c (dotted line), and 4000 fm/c (dash-dotted line). From the figure, it is visible that the neutrons around the slab (initially located at z=a/2=14.5 fm) move in the direction of the external force (the +z direction), because the current density there is positive. On the other hand, the neutron current density outside the slab becomes negative, meaning that those dripped neutrons move in the direction opposite to the external force. Since the presence of the “counterflow” reduces not only the magnitude of the total linear momentum but also its rate of increase, Ṗ_tot, it results in a reduction of the collective mass of the slab. One could attribute the emergence of the counterflow to band-structure effects <cit.>. Namely, since the macroscopic effective mass depends on the second derivative of the single-particle energy ε_νk^(n) with respect to the Bloch wave number k_z, it can be either positive or negative depending on the curvature of the band. Dripped neutrons in a band that is convex upward have a negative macroscopic effective mass, and they respond in the direction opposite to the external force. Based on the fully self-consistent superfluid time-dependent band calculations, we have thus shown that the anti-entrainment effects are present even with the inclusion of neutron superfluidity. From the extracted collective effective masses shown in Fig. <ref>, we can evaluate the conduction neutron number density, n_n^c (<ref>), and, thus, the macroscopic effective mass defined by m_n^⋆/m_n^⊕ = n_n^f/n_n^c. The results are summarized in Table <ref>. In the 2nd to 4th columns, results obtained with superfluidity are shown, while those without superfluidity are also given in the 5th to 7th columns, for comparison. In both cases, with and without superfluidity, the conduction neutron number density n_n^c is larger than the `free' neutron number density n_n^f, for all densities examined, n_b=0.04–0.07 fm^-3. This means that the band structure actually works like a lubricant that mobilizes the dripped neutrons. As a result, the macroscopic effective mass m_n^⋆ becomes smaller than the bare neutron mass. By comparing the 4th and 7th columns, we find that the inclusion of superfluidity slightly enhances the reduction, by a few percent. Namely, the anti-entrainment effects are present irrespective of superfluidity in the slab phase. § CONCLUSION AND PROSPECT In this work, we have developed a fully self-consistent time-dependent superfluid band theory for the inner crust of neutron stars. By applying the real-time method, where we measure the collective mass of a nuclear cluster immersed in neutron superfluid through the response of the system to an external force, we have successfully extracted the conduction neutron number density and the macroscopic effective mass of dripped neutrons. From the results, we have found that the dripped neutrons are actually mobilized by the band structure, that is, the conduction neutron number density is enhanced and the neutron effective mass is reduced. These results are consistent with recent self-consistent band calculations without superfluidity <cit.>. We have demonstrated that the neutron effective mass is reduced by up to about 42% in the slab phase and that superfluidity slightly enhances this anti-entrainment effect.
As a next step, we have extended the present formalism to include finite temperature and magnetic field effects. It enables us to quantify, e.g., the melting temperature of the slabs, taking into account the band structure effects. We expect that such fully-microscopic finite-temperature calculations of nuclear pasta phases in a hot environment will be useful, e.g., for studying supernova matter or cooling of proto-neutron stars. Interesting and useful information about neutrino-pasta scattering, elastic properties, as well as neutron-star cooling will be obtained in our forthcoming works. Needless to say, it is highly desired to extend the present formalism to 2D and 3D geometries. We believe that the formalism itself is unchanged and can be extended to higher dimensions in a straightforward way. Thus, the major obstacle is the computational cost. As we mentioned in Sec. <ref>, we have already dealt with millions of quasiparticle wave functions for the 1D geometry. The extensions to 2D and 3D geometries would require tens to hundreds times larger number of quasiparticle orbitals and the number of numerical operations for each lattice point would also be increased. To avoid matrix diagonalizations, we may take advantage of shifted conjugate-orthogonal conjugate-gradient (COCG) <cit.> or shifted conjugate-orthogonal conjugate-residual (COCR) <cit.> methods. It has been shown that these methods are particularly suitable to GPU parallelization. In addition, TDSLDA has also been shown remarkable successes with the use of top-tier supercomputers working with GPUs <cit.>. The use of GPUs would enable us to realize fully self-consistent superfluid band calculations for 2D and 3D geometries and to resolve the controversial situation concerning the entrainment effects in the inner crust of neutron stars. We are grateful to Takashi Nakatsukasa (University of Tsukuba) and Kenichi Yoshida (RCNP, Osaka University) for valuable discussions. This work mainly used computational resources of the Yukawa-21 supercomputer at Yukawa Institute for Theoretical Physics (YITP), Kyoto University. This work also used (in part) computational resources of the HPCI system (Grand Chariot) provided by Information Initiative Center (IIC), Hokkaido University, through the HPCI System Project (Project ID: hp230180). This work is supported by JSPS Grant-in-Aid for Scientific Research, Grant Nos. 23K03410 and 23H01167. 99 DFT1 P. Hohenberg and W. Kohn, Inhomogeneous Electron Gas, Phys. Rev. 136, B864 (1964). DFT2 W. Kohn and L.J. Sham, Self-Consistent Equations Including Exchange and Correlation Effects, Phys. Rev. 140, A1133 (1965). DFT3 E. Runge and E.K.U. Gross, Density-Functional Theory for Time-Dependent Systems, Phys. Rev. Lett. 52 (1984) 997. DFT4 W. Kohn, Nobel Lecture: Electronic structure of matter—wave functions and density functionals, Rev. Mod. Phys. 71, 1253 (1999). Yabana(1996) K. Yabana and G.F. Bertsch, Time-dependent local-density approximation in real time, Phys. Rev. B 54, 4484 (1996). Bertsch(2000) G.F. Bertsch, J.-I. Iwata, A. Rubio, and K. Yabana, Real-space, real-time method for the dielectric function, Phys. Rev. B 62, 7998 (2000). Yabana(2012) K. Yabana, T. Sugiyama, Y. Shinohara, T. Otobe, and G.F. Bertsch, Time-dependent density functional theory for strong electromagnetic fields in crystalline solids, Phys. Rev. B 85, 045134 (2012). Yamada(2019) A. Yamada and K. 
Yabana, Multiscale time-dependent density functional theory for a unified description of ultrafast dynamics: Pulsed light, electron, and lattice motions in crystalline solids, Phys. Rev. B 99, 245103 (2019). Octopus X. Andrade, D. Strubbe, U. De Giovannini, A. Hjorth Larsen, M.J.T. Oliveira, J. Alberdi-Rodriguez, A. Varas, I. Theophilou, N. Helbig, M.J. Verstraete, L. Stella, F. Nogueira, A. Aspuru-Guzik, A. Castro, M.A.L. Marques, and A. Rubio, Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems, Phys. Chem. Chem. Phys. 17, 31371 (2015); <https://octopus-code.org/wiki/Main_Page>. SALMON M. Noda, S.A. Sato, Y. Hirokawa, M. Uemoto, T. Takeuchi, S. Yamada, A. Yamada, Y. Shinohara, M. Yamaguchi, K. Iida, I. Floss, T. Otobe, K.-M. Lee, K. Ishimura, T. Boku, G.F. Bertsch, K. Nobusada, and K. Yabana, SALMON: Scalable Ab-initio Light-Matter simulator for Optics and Nanoscience, Compt. Phys. Commun. 235, 356 (2019); <https://salmon-tddft.jp/>. Carter(2005) B. Carter, N. Chamel, and P. Haensel, Entrainment coefficient and effective mass for conduction neutrons in neutron star crust: simple microscopic models, Nucl. Phys. A748, 675 (2005). Chamel(2005) N. Chamel, Band structure effects for dripped neutrons in neutron star crust, Nucl. Phys. A747, 109 (2005). Chamel(2006) N. Chamel, Effective mass of free neutrons in neutron star crust, Nucl. Phys. A773, 263 (2006). Chamel(2007) N. Chamel, S. Naimi, E. Khan, and J. Margueron, Validity of the Wigner-Seitz approximation in neutron star crust, Phys. Rev. C 75, 055806 (2007). Chamel(2012) N. Chamel, Neutron conduction in the inner crust of a neutron star in the framework of the band theory of solids, Phys. Rev. C 85, 035801 (2012). Kashiwaba(2019) Yu Kashiwaba and T. Nakatsukasa, Self-consistent band calculations of the slab phase in the neutron-star crust, Phys. Rev. C 100, 035804 (2019). Sekizawa(2022) K. Sekizawa, S. Kobayashi, and M. Matsuo, Time-dependent extension of the self-consistent band theory for neutron star matter: Anti-entrainment effects in the slab phase, Phys. Rev. C 105, 045807 (2022). Oyamatsu(1994) K. Oyamatsu and Y. Yamada, Shell energies of non-spherical nuclei in the inner crust of a neutron star, Nucl. Phys. A578, 184 (1994). Onsi(2008) M. Onsi, A.K. Dutta, H. Chatri, S. Goriely, N. Chamel, and J.M. Pearson, Semi-classical equation of state and specific-heat expressions with proton shell corrections for the inner crust of a neutron star, Phys. Rev. C 77, 065805 (2008). Chamel(2017) N. Chamel, Entrainment in Superfluid Neutron-Star Crusts: Hydrodynamic Description and Microscopic Origin, J. Low Temp. Phys. 189, 328 (2017). Andersson(2012) N. Andersson, K. Glampedakis, W.C.G. Ho, and C.M. Espinoza, Pulsar Glitches: The Crust is not Enough, Phys. Rev. Lett. 109, 241103 (2012). Chamel(2013)glitch N. Chamel, Crustal Entrainment and Pulsar Glitches, Phys. Rev. Lett. 110, 011101 (2013). Haskell(2015) B. Haskell and A. Melatos, Models of pulsar glitches, Int. J. Mod. Phys. D 24, 153008 (2015). Chamel(2009) N. Chamel, J. Margueron, and E. Khan, Neutron specific heat in the crust of neutron stars from the nuclear band theory, Phys. Rev. C 79, 012801(R) (2009). Chamel(2013) N. Chamel, D. Page, and S. Reddy, Low-energy collective excitations in the neutron star inner crust, Phys. Rev. C 87, 035803 (2013). Kobyakov(2013) K. Kobyakov and C.J. Pethick, Dynamics of the inner crust of neutron stars: Hydrodynamics, elasticity, and collective modes, Phys. Rev. C 87, 055803 (2013). 
Kobyakov(2016) K. Kobyakov and C.J. Pethick, Nucleus-nucleus interactions in the inner crust of neutron stars, Phys. Rev. C 94, 055806 (2016). Durel(2018) D. Durel and M. Urban, Long-wavelength phonons in the crystalline and pasta phases of neutron-star crusts, Phys. Rev. C 97, 065805 (2018). Sauls(2020) J.A. Sauls, N. Chamel, and M.A. Alpar, Superfluidity in Disordered Neutron Star Crust, arXiv:2001.09959 [astro-ph.HE]. Carter(2005)BCS B. Carter, N. Chamel, and P. Haensel, Effects of BCS pairing on entrainment in neutron superfluid current in neutron star crust, Nucl. Phys. A759, 441 (2005). Watanabe(2017) G. Watanabe and C.J. Pethick, Superfluid Density of Neutrons in the Inner Crust of Neutron Stars: New Life for Pulsar Glitch Models, Phys. Rev. Lett. 119, 062701 (2017). Minami(2022) Y. Minami and G. Watanabe, Effects of pairing gap and band gap on superfluid density in the inner crust of neutron stars, Phys. Rev. Research 4, 033141 (2022). Bulgac(2012) A. Bulgac, P. Magierski, and M.M. Forbes, The Unitary Fermi Gas: From Monte Carlo to Density Functionals, in: BCS-BEC Crossover and the Unitary Fermi Gas (ed.) W. Zwerger; Lecture Notes in Physics, Vol. 836, pp. 305–373 (Springer, Heidelberg, 2012). Bulgac(2019) A. Bulgac, Time-Dependent Density Functional Theory for Fermionic Superfluids: From Cold Atomic Gases–To Nuclei and Neutron Stars Crust, Phys. Status Solidi B 256, 1800592 (2019). Oliveira(1988) L.N. Oliveira, E.K.U. Gross, and W. Kohn, Density-Functional Theory for Superconductors, Phys. Rev. Lett. 60, 2430 (1988). Wacker(1994) O.-J. Wacker, R. Kümmel, and E.K.U. Gross, Time-Dependent Density-Functional Theory for Superconductors, Phys. Rev. Lett. 73, 2915 (1994). Kurth(1999) S. Kurth, M. Marques, M. Lüders, and E.K.U. Gross, Local Density Approximation for Superconductors, Phys. Rev. Lett. 83, 2628 (1999). Bulgac(2002)1 A. Bulgac and Y. Yu, Renormalization of the Hartree-Fock-Bogoliubov Equations in the Case of a Zero Range Pairing Interaction, Phys. Rev. Lett. 88, 042504 (2002). Bulgac(2002)2 A. Bulgac, Local density approximation for systems with pairing correlations, Phys. Rev. C 65, 051305(R) (2002). Bulgac(2009) A. Bulgac and S. Yoon, Large Amplitude Dynamics of the Pairing Correlations in a Unitary Fermi Gas, Phys. Rev. Lett. 102, 085302 (2009). Bulgac(2011) A. Bulgac, Y.-L. Luo, P. Magierski, K.J. Roche, and Y. Yu, Real-Time Dynamics of Quantized Vortices in a Unitary Fermi Superfluid, Science 332, 1288 (2011). Ashcroft-Mermin N.W. Ashcroft, N.D. Mermin, and D. Wei, Solid State Physics Revised Edition (Cengage Learning Asia Pte Ltd, 2016). Martin(2016) N. Martin and M. Urban, Superfluid hydrodynamics in the inner crust of neutron stars, Phys. Rev. C 94, 065801 (2016). Ravenhall(1983) D.G. Ravenhall, C.J. Pethick, and J.R. Wilson, Structure of Matter below Nuclear Saturation Density, Phys. Rev. Lett. 50, 2066 (1983). Hashimoto(1984) M. Hashimoto, H. Seki, and M. Yamada, Shape of Nuclei in the Crust of Neutron Star, Prog. Theor. Phys. 71, 320 (1984). Anderson(1975) P.W. Anderson and N. Itoh, Pulsar glitches and restlessness as a hard superfluidity phenomenon, Nature 256, 25 (1975). Negele(review) J.W. Negele, The mean-field theory of nuclear structure and dynamics, Rev. Mod. Phys. 54, 913 (1982). BKN(1976) P. Bonche, S. Koonin, and J.W. Negele, One-dimensional nuclear dynamics in the time-dependent Hartree-Fock approximation, Phys. Rev. C 13, 1226 (1976). Nakatsukasa(PTEP) T.
Nakatsukasa, Density functional approaches to collective phenomena in nuclei: Time-dependent density functional theory for perturbative and non-perturbative nuclear dynamics, Prog. Theor. Exp. Phys. 2012, 01A207 (2012). Simenel(review) C. Simenel, Nuclear quantum many-body dynamics, Eur. Phys. J. A 48, 152 (2012). Nakatsukasa(review) T. Nakatsukasa, K. Matsuyanagi, M. Matsuo, and K. Yabana, Time-dependent density-functional description of nuclear dynamics, Rev. Mod. Phys. 88, 045004 (2016). TDHF-review(2018) C. Simenel and A.S. Umar, Heavy-ion collisions and fission dynamics with the time-dependent Hartree-Fock theory and its extensions, Prog. Part. Nucl. Phys. 103, 19 (2018). Stevenson(2019) P.D. Stevenson and M.C. Barton, Low-energy heavy-ion reactions and the Skyrme effective interaction, Prog. Part. Nucl. Phys. 104, 142 (2019). Sekizawa(2019) K. Sekizawa, TDHF Theory and Its Extensions for Multinucleon Transfer Reactions: A Mini Review, Front. Phys. 7, 20 (2019). Anderson(1967) P.W. Anderson, Infrared Catastrophe in Fermi Gases with Local Scattering Potentials, Phys. Rev. Lett. 18, 1049 (1967). Lesinski(2007) T. Lesinski, M. Bender, K. Bennaceur, T. Duguet, and J. Meyer, Tensor part of the Skyrme energy density functional: Spherical nuclei, Phys. Rev. C 76, 014312 (2007). Kortelainen(2010) M. Kortelainen, R.J. Furnstahl, W. Nazarewicz, and M.V. Stoitsov, Natural units for nuclear energy density functional theory, Phys. Rev. C 82, 011304(R) (2010). Chabanat(1998) E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, A Skyrme parametrization from subnuclear to neutron star densities Part II. Nuclei far from stabilities, Nucl. Phys. A635, 231 (1998); A643, 441 (1998). Baran(2008) A. Baran, A. Bulgac, M.M. Forbes, G. Hagen, W. Nazarewicz, N. Schunck, and M.V. Stoitsov, Broyden's method in nuclear structure calculations, Phys. Rev. C 78, 014318 (2008). Papakonstantinou(2013) P. Papakonstantinou, J. Margueron, F. Gulminelli, and Ad.R. Raduta, Densities and energies of nuclei in dilute matter at zero temperature, Phys. Rev. C 88, 045805 (2013). Jin(2017) S. Jin, A. Bulgac, K. Roche, and G. Wlazłowski, Coordinate-space solver for superfluid many-fermion systems with the shifted conjugate-orthogonal conjugate-gradient method, Phys. Rev. C 95, 044302 (2017). Kashiwaba(2020) Y. Kashiwaba and T. Nakatsukasa, Coordinate-space solver for finite-temperature Hartree-Fock-Bogoliubov calculations using the shifted Krylov method, Phys. Rev. C 101, 045804 (2020). Jin(2021) S. Jin, K.J. Roche, I. Stetcu, I. Abdurrahman, and A. Bulgac, The LISE package: Solvers for static and time-dependent superfluid local density approximation equations in three dimensions, Comput. Phys. Commun. 269, 108130 (2021). 0 Bender(2003) M. Bender, P.-H. Heenen, and P.-G. Reinhard, Self-consistent mean-field models for nuclear structure, Rev. Mod. Phys. 75, 121 (2003). Daniel(2021) D. Pęcak, N. Chamel, P. Magierski, and G. Wlazłowski, Properties of a quantum vortex in neutron matter at finite temperatures, Phys. Rev. C 104, 055801 (2021). Schuetrumpf(2013) B. Schuetrumpf, M.A. Klatt, K. Iida, J.A. Maruhn, K. Mecke, and P.-G. Reinhard, Time-dependent Hartree-Fock approach to nuclear “pasta” at finite temperature, Phys. Rev. C 87, 055805 (2013). Schuetrumpf(2014) B. Schuetrumpf, K. Iida, J.A. Maruhn, and P.-G. Reinhard, Nuclear “pasta matter” for different proton fractions, Phys. Rev. C 90, 055802 (2014). Schuetrumpf(2015)1 B. Schuetrumpf, M.A. Klatt, K. Iida, G.E. Schröder-Turk, J.A. Maruhn, K. Mecke, and P.-G. 
Reinhard, Appearance of the single gyroid network phase in “nuclear pasta” matter, Phys. Rev. C 91, 025801 (2015). Schuetrumpf(2015)2 B. Schuetrumpf and W. Nazarewicz, Twist-averaged boundary conditions for nuclear pasta Hartree-Fock calculations, Phys. Rev. C 92, 045806 (2015). VNdyn(2016) G. Wlazłowski, K. Sekizawa, P. Magierski, A. Bulgac, and M.M. Forbes, Vortex Pinning and Dynamics in the Neutron Star Crust, Phys. Rev. Lett. 117, 232701 (2016). Miller(2019) M.C. Miller, C. Chirenti, and T.E. Strohmayer, On the Persistence of QPOs during the SGR 1806-20 Giant Flare, Astrophys. J. 871, 95 (2019). Sotani(2019) H. Sotani, K. Iida, and K. Oyamatsu, Astrophysical implications of double-layer torsional oscillations in a neutron star crust as a lasagna sandwich, MNRAS 489, 3022 (2019).
http://arxiv.org/abs/2306.10983v2
20230619145024
Effect-Invariant Mechanisms for Policy Generalization
[ "Sorawit Saengkyongam", "Niklas Pfister", "Predrag Klasnja", "Susan Murphy", "Jonas Peters" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Effect-Invariant Mechanisms for Policy Generalization Sorawit Saengkyongam Niklas Pfister Predrag Klasnja Susan Murphy Jonas Peters ================================================================================ Policy learning is an important component of many real-world learning systems. A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks. Recently, it has been suggested to exploit invariant conditional distributions to learn models that generalize better to unseen environments. However, assuming invariance of entire conditional distributions (which we call full invariance) may be too strong of an assumption in practice. In this paper, we introduce a relaxation of full invariance called effect-invariance (e-invariance for short) and prove that it is sufficient, under suitable assumptions, for zero-shot policy generalization. We also discuss an extension that exploits e-invariance when we have a small sample from the test environment, enabling few-shot policy generalization. Our work does not assume an underlying causal graph or that the data are generated by a structural causal model; instead, we develop testing procedures to test e-invariance directly from data. We present empirical results using simulated data and a mobile health intervention dataset to demonstrate the effectiveness of our approach. § INTRODUCTION When learning models from data, we often use these models in scenarios that are assumed to have similar or the same characteristics as the ones generating the training data. This holds for prediction tasks such as regression and classification but also for settings such as contextual bandits or dynamic treatment regimes. When we observe different regimes under training, we can hope to exploit this information to construct models that adapt better to an unseen environment (or task). Such problems are usually referred to as multi-task learning, domain adaptation, or domain generalization <cit.>; the nomenclature sometimes differs depending on whether one observes labeled and/or unlabeled data in the test domain. For prediction tasks, it has been suggested to learn invariant models by exploiting invariance of the conditional distributions. Under suitable assumptions, such models generalize better to unseen environments if the changes between the environments can be modeled by interventions <cit.>. A similar approach has been applied in policy learning <cit.>, where one searches for policies that yield an invariant reward distribution. We refer to the invariance of conditional distributions as `full invariance'. More precisely, given covariates X_e and outcome Y_e from different environments e ∈, the full invariance assumption posits the existence of a set of covariates X_e^S such that for all e, f ∈: Y_e|X_e^S and Y_f | X_f^S are identical.
Full invariance, however, may be too strong of an assumption in practice. In prediction tasks, it has been suggested to relax the requirement of full invariance to weaker conditions, such as vanishing empirical covariance, and to instead use invariance as a form of regularization <cit.>. This approach comes with theoretical guarantees regarding generalization to bounded interventions, for example, but these results are often limited to restricted classes of models and interventions. In this paper, we relax the full invariance assumption in a different direction and show how it can be applied to inferring optimal conditional treatments in policy learning. We illustrate our proposed relaxation based on an example. Consider the following class of structural causal models <cit.> indexed by environments e ∈{1, -1}, with the corresponding graph shown in Figure <ref>, 𝒮(e): U := ϵ_U, X := e U + ϵ_X, T := 𝟙(1 + X + ϵ_T > 0), Y := 2e + X + U + T(1 + X) + ϵ_Y, where (ϵ_U, ϵ_X, ϵ_T, ϵ_Y) are independent standard normal random variables. Here, Y ∈ represents the outcome or reward, T ∈{0,1} corresponds to the treatment or action, and X ∈ and U ∈ are observed and unobserved covariates, respectively. The mechanism for T can be considered as a fixed policy. Since the environment has a direct effect on the outcome, there is no subset satisfying the full invariance condition (<ref>): regardless of whether we condition on ∅ or {X}, the outcome distribution is not independent of the environment. Consequently, methods that rely on the full invariance assumption, such as the one proposed by <cit.>, would lead to a vacuous result. However, the criterion of full invariance is not necessary when the goal is to learn an optimal policy. Instead, it may suffice to find models that are partially invariant: In the above example, see (<ref>), the outcome Y can be additively decomposed into two components: one being a function of U, e, and ϵ_Y, and another being a function of T and X. In this case, although the outcome mechanism is not entirely invariant, it contains an invariant component. When conditioning on X, the effect of the treatment is the same in all environments. More specifically, the conditional average treatment effect does not depend on e, that is, ∀ x ∈: ^e[Y | X = x, T = 1] - ^e[Y | X = x, T = 0] = 1 + x. We say that {X} satisfies effect-invariance (). This condition suffices to ensure that, for an unseen test environment, we can still infer the optimal treatment among policies that depend only on X, without having access to outcome information from the test environment. In addition, if the environments are heterogeneous enough, such a policy is worst-case optimal. We refer to this setup as zero-shot generalization. We state the class of data generating processes and provide formal results in Section <ref> and Section <ref> below. Moreover, if we can acquire a small sample – including observations of the outcome – from the test environment, we would want to optimize the policy using the data from the test environment. Ideally, this optimization also leverages information from training data from other environments to improve the finite sample performance of the learnt policy. We discuss how e-invariant information can be beneficial in such settings. We refer to this scenario as few-shot generalization and present it as an extension of the zero-shot methodology in Section <ref>.
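The claims made for this example are easy to check numerically. The following self-contained simulation of the introductory SCM is an illustrative sketch of our own: the per-environment ordinary least-squares fit with a treatment-covariate interaction is an ad hoc estimator chosen for brevity (not the testing procedure developed later in the paper), and the sample size is arbitrary. It shows that the estimated conditional average treatment effect is approximately 1 + x in both environments, even though the mean outcome differs across environments, so full invariance fails while effect-invariance holds.

import numpy as np

rng = np.random.default_rng(0)

def sample_env(e, n):
    # Draw n observations from the SCM S(e) of the introductory example.
    eps_u, eps_x, eps_t, eps_y = rng.standard_normal((4, n))
    u = eps_u
    x = e * u + eps_x
    t = (1.0 + x + eps_t > 0).astype(float)
    y = 2.0 * e + x + u + t * (1.0 + x) + eps_y
    return x, t, y

def fit_cate_linear(x, t, y):
    # OLS fit of E[Y | X, T] = b0 + b1*X + b2*T + b3*T*X;
    # the CATE at X = x is then b2 + b3*x.
    design = np.column_stack([np.ones_like(x), x, t, t * x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[2], beta[3]

for e in (-1, 1):
    x, t, y = sample_env(e, n=200_000)
    b2, b3 = fit_cate_linear(x, t, y)
    print(f"environment e={e:+d}:  estimated CATE = {b2:.3f} + {b3:.3f}*x "
          f"(population value 1 + x);  mean outcome = {y.mean():+.2f}")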
While SCMs provide a class of examples satisfying the assumptions of this work, we do not assume an underlying causal graph or SCM (but instead only require a sequential sampling procedure that ensures that the covariates X causally precede the outcome). In particular, e-invariance is not read off from a known graph but instead tested from data. Figure <ref> illustrates the testing result obtained by applying one of the proposed e-invariance tests to a sample from (<ref>), where we also include a comparison with the full-invariance test as proposed in <cit.>. The main contributions of this paper are four fold: (1) Introducing e-invariance: In Section <ref>, we introduce the concept of e-invariance, which offers a relaxation of the full invariance assumption. An e-invariant set ensures that the conditional treatment effect function remains the same across different environments. (2) Utilizing e-invariance for generalization: Section <ref> discusses the use of e-invariance in learning policies that provably generalize well to unseen environments. We prove two generalization guarantees: The proposed method (i) outperforms an optimal context-free policy on new environments and (ii) outperforms any other policy in terms of worst-case performance. (3) Methods for testing e-invariance: We propose hypothesis testing procedures, presented in Section <ref>, to test for e-invariance from data within both linear and nonlinear model classes. (4) Semi-real-world case study: In Section <ref>, we demonstrate the effectiveness of our proposed policy learning methods in the semi-real-world case study of mobile health interventions. An optimal policy based on an e-invariance set is shown to generalize better to new environments than the policy that uses all the context information. §.§ Further Related Work Our work builds upon the existing research that leverages the invariance of conditional distributions (full invariance) for generalization to unseen environments <cit.>. Several relaxations of the full invariance have been suggested for the prediction tasks <cit.>. In reinforcement learning, previous studies have suggested the use of invariance to achieve generalizable policies <cit.>, however, they lack theoretical guarantees for generalization. Closely related to our work, <cit.> has established the worst-case optimality of invariant policy learning based on the full invariance assumption, which may be too restrictive in practice. Transportability in causal inference <cit.> addresses the task of identifying invariant distributions based on a known causal graph and structural differences between environments, which can be used to generalize causal findings. However, our approach differs in that we do not assume prior knowledge of the causal graph or structural differences between environments. Furthermore, our methods are applicable even if the data generating process does not allow for a graphical representation. Instead, we develop testing procedures to obtain invariant information from data. Additionally, methods based on causal graphs typically only capture full invariance information (through the Markov property), whereas our work relaxes the requirement of full invariance for policy learning. § EFFECT-INVARIANCE §.§ Multi-environment policy learning In this work, we consider the problem of multi-environment policy learning (or multi-environment contextual bandit) <cit.>. 
Given a fixed set of environments , we assume that for each environment e∈, there is a policy learning setup, where the distributions of covariates and outcome may differ between environments. Each of the setups is modelled by a three-step sequential sampling scheme: First, covariates (X, U) are sampled according to a fixed distribution depending on the environment, then X is revealed to an agent that uses it to select a treatment T (from a finite set 𝒯) according to a policy π and, finally, an outcome Y is sampled conditionally on X, U and T. Formally, we assume the following setting throughout the paper. [Multi-environment policy learning] Let ⊂ be a collection of environments, Y∈ an outcome variable, X ∈𝒳⊆^d observed covariates, U ∈𝒰⊆^p unobserved covariates and T∈ = {1,…,k} a treatment. Let Δ() denote the probability simplex over the set of treatments and let Π{π|π:𝒳→Δ()} denote the set of all policies. Moreover, for all e ∈ let ^e_X,U be a distribution on 𝒳×𝒰 and for all e∈, x∈𝒳, u∈𝒰 and t∈ let ^e_Y | X = x, U = u, T = t be a distribution on . Given e∈ and π∈Π, this defines a random vector (Y, X, U, T) by (X,U)∼^e_X,U, T∼π(X), and Y ∼^e_Y | X = X, U = U, T = T, see Figure <ref>(a) for an example. Correspondingly, n observations (Y_i, X_i, T_i, e_i, π_i)_i=1^n from this model are generated by the following steps. (1) Select an environment e_i∈ and a policy π_i∈Π. (2) Sample covariates (X_i,U_i)∼^e_i_X,U. (3) Sample the treatment T_i∼π_i(X_i). (4) Sample the outcome Y_i ∼^e_i_Y | X = X_i, U = U_i, T = T_i.[Consequently, the distribution in (4) is indeed the conditional distribution of Y, given X, U, and T, justifying the notation.] The sampling in (2)–(4) is done independently for different i. In particular, we assume that e_i and π_i do not depend on other observables and should be considered fixed. (Our results in Section <ref> remain valid even if π_i depends on previous observations {j: j ≤ i}, see Remark <ref>.) Further denote by ⊆ the set of observed environments within the n training observations and for each e∈ we denote by n_e the number of observations from environment e. We assume that there exists a product measure ν such that for all e∈ the joint distribution of (Y, X, U, T) in environment e, under policy π has density p^e,π with respect to ν and that ^e_X has full support on 𝒳. Next, we define t_0 ∈ as a baseline treatment, which serves as the reference point for defining the conditional average treatment effect in (<ref>). However, and importantly, our results hold for any choice of t_0. Finally, we assume that the policies generating the training observations are bounded, i.e., for all i ∈{1,…,n}, t ∈, and x ∈ it holds that π_i(x)(t) > 0. Notation When writing probabilities and expectations of the random variables Y, X, U and T or the corresponding observations, we use superscripts to make explicit any possible dependence on the environment and policy, e.g., ^e,π and 𝔼^e,π. Moreover, by a slight abuse of notation, for a policy π∈Π with a density, we let π(x) denote the density rather than the distribution; we also use the commonly employed convention π(t|x) π(x)(t). Finally, for all t∈, we denote by π_t∈Π the policy that always selects treatment t, that is, π_t(·|x)=(t=·). Our results in Section <ref> remain valid even if π_i in Setting <ref> depends on previous observations. In this case, the sampling step (3) is replaced by T_i ∼π_i(X_i,H_i) with H_i:={(X_j,Y_j,T_j): j < i}. 
Furthermore, in the <ref> setting in Section <ref>, we consider D^ (y^_i, x^_i, t^_i, π^_i(·|·,h_i), e^_i)_i=1^n, where (y^_1, x^_1, t^_1), …, (y^_n, x^_n, t^_n) are (jointly independent) realizations from Q^_1 ^e_1,π_1(·|·)_X, Y, T, Q^_2 ^e_2,π_2(·|·, h_2)_X, Y, T, …, Q^_n ^e_n,π_n(·|·, h_n)_X, Y, T respectively, with h_i := {(y^_j, x^_j, t^_j): j < i} for i ≥ 2; in Appendix <ref>, we replace π_i by π_i(·|·, h_i). §.§ Invariant treatment effects The concept of invariance has been connected to causality <cit.> and it has been suggested to use it for causal discovery <cit.> or distribution generalization <cit.>. In our setting, the standard notion of invariance would correspond to the invariance in the outcome mechanism <cit.>. In practice, this notion may be too strong. E.g., it does not hold if the environment directly influences the outcome (see Figure <ref> for an example). In what follows, we introduce the notion of (treatment) effect-invariance, which relaxes the standard invariance condition. To this end, we recall the notion of the conditional average treatment effect (CATE) under different environments e ∈. The CATE in environment e∈ for a subset of covariates S⊆{1,…,d} is defined for all x∈^S and t∈ as τ^S_e(x, t) ^e, π_t[Y | X^S=x] - ^e, π_t_0[Y | X^S=x]. When S = {1,…,d}, we simply denote τ^S_e by τ_e. In Setting <ref>, the CATE functions, as defined in (<ref>), may differ substantially from one environment to another. But even then, there may exist a subset S ⊆{1,…,d} such that the CATE functions do not change across environments. In this work, we exploit the existence of such sets, which we call e-invariant (for effect-invariant).[As an alternative to e-invariance, one could define argmax-invariance by requiring that ∀ e_1,e_2 ∈: _t∈τ^S_e_1(·, t) ≡_t∈τ^S_e_2(·, t). A similar notion called `invariant action prediction' has been introduced by <cit.>. This condition would ensure that the optimal treatment is robust with respect to changes in the environment (even though the treatment effect may not be). E-invariance implies argmax-invariance but the latter condition is not sufficient to show generalization properties that we develop in Section <ref>.] Assume Setting <ref>. A subset S ⊆{1,…,d} is said to be effect-invariant with respect to a set of environments ⊆ ( for short) if the following holds ∀ e_1,e_2 ∈: τ^S_e_1≡τ^S_e_2. For any ⊆, we denote by ^_ the collection of all e-invariant sets . The above definition does not depend on the choice of t_0 in Setting <ref>: if condition (<ref>) holds for one t_0, it holds for all t_0 ∈. In this work, we focus on discrete treatments but, in principle, one could consider the continuous case by defining the CATE function as (x, t) ↦t^e,π^t[Y | X^s =x] and define the effect-invariance analogously to (<ref>). We now provide a characterization for e-invariance based on the outcome mechanism. Assume Setting <ref>. A subset S ⊆{1,…,d} is e-invariant if and only if there exists a pair of functions ψ_S: ^S ×→ and ν_S: ^S ×→ such that ∀ e ∈, ∀ x ∈^S, ∀ t ∈: ^e, π_t[Y | X^S = x] = ψ_S(x, t) + ν_S(x, e), and ψ_S(· ,t_0) ≡ 0. In particular, we have for all e ∈ that ψ_S ≡τ^S_e. See Appendix <ref>. The two equivalent conditions (<ref>) and (<ref>) provide two different viewpoints on e-invariant sets. The former shows that, when conditioning on an e-invariant set S, the CATE functions are invariant across environments, while the latter ensures that part of the conditional expected outcome ^e, π_t[Y | X^S] remains invariant across environments. 
In particular, the conditional expected outcome ^e, π_t[Y | X^S] can be additively decomposed into a fixed effect-modification term (ψ_S) that depends on the treatment and an environment-varying main-effect term (ν_S) that does not depend on the treatment. Here, the additivity stems from the definition of the CATE; different causal contrasts correspond to other forms of decomposition. Most of the results in the remaining sections of our work rely on the existence of an e-invariant set. We therefore make this assumption explicit. In Setting <ref>, there exists a subset S ⊆{1,…, d} such that S is e-invariant . The subsequent section connects Assumption <ref> to a class of structural causal models <cit.>. For such models, proposition <ref> below shows that Assumption <ref> is satisfied if the outcome mechanism is of a specific form and an independence assumption holds. Furthermore, using a test for e-invariance, see Section <ref>, Assumption <ref> is testable from data for the observed environments . §.§ Effect-invariance in structural causal models Assumption <ref> is satisfied in a restricted class of structural causal models (SCMs). Formally, we consider the following class of SCMs inducing the sequential sampling steps (2)–(4) in Setting <ref>. 𝒮(e, π): U s_e(X, U, ϵ_U) X h_e(X, U, ϵ_X) Tℓ_π(X, ϵ_T) Y f(X^_f,X, U^_f,U, T) + g_e(X, U, ϵ_Y), where (U, X, T, Y)∈𝒳×××ℝ, (ϵ_U, ϵ_X,ϵ_T,ϵ_Y) are jointly independent noise variables, (s_e, h_e, g_e)_e ∈, f and ℓ_π are measurable functions such that, for all x∈, ℓ_π(x, ϵ_T) is a random variable on with distribution π(x), and _f,X⊆{1,…,d} and _f,U⊆{1,…,p}. We call _f,X and _f,U the observed and unobserved policy-relevant parents, respectively. To determine whether e-invariance holds, it is helpful to distinguish between the parents of Y that enter f (these are relevant to determine optimal policies) and those parents of Y that enter into g_e. For building intuition, we therefore define a graphical representation, which splits Y into two nodes (visually, the graphical representation is similar to SWIGs <cit.>, and we use `tikz-swigs' LaTeX package for drawing the graph; the interpretation, however, is different). We represent a class of SCMs of the form (<ref>) by an e-invariance graph. This graph contains, as usually done when representing SCMs graphically, a directed edge from variables on the right-hand side of assignments to variables on the left-hand side, but with the exception that e is represented by a square node and the node Y is split into a part for Y_f and a part for Y_g; see Example <ref> and also Figure <ref>. 
Consider the following SCMs 𝒮(e, π): U^1 := ϵ_U^1, U^2 := ϵ_U^2, X^3 := γ^3_e U^1 + ϵ_X^3, X^2 := γ^2_e U^2 + ϵ_X^2, X^1 := X^2 + γ^1_e U^1 + ϵ_X^1, T := ℓ_π(X^1, X^2, X^3, ϵ_T), Y := T (1 + 0.5X^2 + 0.5 U^1)_f + (μ_e + U^1 + U^2 + X^2 + X^3 + ϵ_Y)_g_e, where the treatment set is {0,1}, ϵ_U^1, ϵ_U^2, ϵ_X^1, ϵ_X^2, ϵ_X^3, ϵ_T, ϵ_Y are jointly independent noise variables with mean zero, and γ^1_e, γ^2_e, γ^3_e, μ_e are environment-specific parameters. Here, _f,X = {2} and _f,U = {1} are the policy-relevant parents; in the corresponding e-invariance graph, T, X^2, and U^1 point into Y_f, while e, X^2, X^3, U^1, and U^2 point into Y_g, and e also enters X^1, X^2, and X^3 through the coefficients. While in this example the environment changes the coefficients γ^1_e, γ^2_e, γ^3_e and μ_e, the generality of (<ref>) also allows for a change in the noise distributions. Under the class of SCMs (<ref>), the following proposition shows that an e-invariant set exists if the unobserved U^_f,U and the observed policy-relevant parents X^_f,X are independent, and the environments do not influence U^_f,U. Assume Setting <ref> and that the sequential sampling steps (2)–(4) are induced by the SCMs in (<ref>). If (i) for all e ∈, U^_f,U X^_f,X in ^e_X,U and (ii) ^e_U^_f,U are identical across e ∈, we have that _f,X is e-invariant . See Appendix <ref>. [Example <ref> continued] Let e ∈. In this example, it holds that U^_f,U X^_f,X in ^e_X,U. Therefore, _f,X = {2} satisfies the e-invariance condition (<ref>) by Proposition <ref>. To illustrate this, consider the expected outcome conditioned on X^2, ^e, π_t[Y | X^2] = ^e, π_t[T (1 + 0.5X^2 + 0.5 U^1) + μ_e + U^1 + U^2 + X^2 + X^3 | X^2] = 𝟙(t ≠ t_0)(1 + 0.5 X^2 + 0.5 ^e[U^1 | X^2]) + μ_e + X^2 + ^e[U^1 + U^2 + X^3 | X^2] = 𝟙(t ≠ t_0)(1 + 0.5 X^2)_ψ_{2}(X^2, t) + μ_e + X^2 + (^e[U^2 + X^3 | X^2])_ν_{2}(X^2, e) since U^1 X^2. Thus, by Proposition <ref>, {2} is e-invariant . § ZERO-SHOT POLICY GENERALIZATION THROUGH E-INVARIANCE In this section, we consider zero-shot generalization (sometimes called unsupervised domain adaptation). We aim to find a policy that performs well (in terms of the expected outcome or reward) in a new test environment in which we have access to observations of the covariates but not the outcome. We formally lay out the setup and objective of zero-shot policy generalization and show that a policy that optimally uses information from e-invariant sets achieves desirable generalization properties. Zero-shot Assume Setting <ref> and that we are given n∈ℕ training observations D^ (Y^_i, X^_i, T^_i, π^_i, e^_i)_i=1^n from the observed environments e^_i ∈.
During test time, we are given m ∈ℕ observations D^_X (X^_i)_i=1^m from a single test environment e^∈. We denote by Q^ Q^_1 ⊗…⊗ Q^_n, where Q^_i ^e_i,π_i_X, Y, T and Q^_X ^e^_X the distributions of D^ and D^_X, respectively. We seek to find a policy that generalizes well to the test environment e^. As we only have access to the observed covariate distribution ^e^_X and since there may be multiple potential test environments e∈ℰ with ^e_X=^e^_X, we propose to evaluate the performance of a policy π based on its expected outcome (relative to a fixed baseline policy π_t_0 that always chooses t_0) in the worst-case scenario across all environments with covariate distribution equal to ^e^_X. Formally, let [e^] {e ∈|^e_X = Q^_X} be an equivalence class of environments under which the covariate distribution ^e_X is the same as Q^_X. We then consider the following worst-case objective V^[e^](π) inf_e ∈ [e^]( ^e, π[Y] - ^e, π_t_0[Y] ). The goal of (population) zero-shot generalization applied to our setting is then to find a policy that (i) is identifiable from Q^_i (for an arbitrary 1≤ i ≤ n) and Q^_X and (ii) maximizes the worst-case performance defined in (<ref>). We now introduce a policy π^ that optimally uses information from e-invariant sets and show that π^ achieves the aforementioned goal under suitable assumptions. To this end, for all S ∈^_ (see Definition <ref>), we denote the set of all policies that depend only on X^S by Π^S {π∈Π|∃π̅: ^S →Δ() s.t. ∀ x ∈ , π(· | x) = π̅(· | x^S) }⊆Π. Next, for all S ∈^_, we define Π^S_⊆Π^S to be a set of policies such that each π^S ∈Π^S_ satisfies for all x∈ and t ∈ that π^S(t | x) > 0 t ∈_t^'∈1∑_e ∈τ^S_e(x^S, t^'). That is, all the mass of π^S(·|x) is distributed on treatments that maximize the treatment effect conditioned on X^S. Since ^_ contains only e-invariant sets , we also have that 1∑_e ∈τ^S_e ≡τ^S_f for any fixed f ∈ (but for finite samples, we approximate the former). Finally, we denote by Π^_{π∈Π|∃ S ∈^_ s.t. π∈Π^S_} the collection of all such policies. We now propose to use a policy from the collection of policies that are optimal among Π^_, i.e., _π∈Π^_^e^, π[Y]. Although the set (<ref>) depends on the expected value of Y in the test environment, in Proposition <ref> we show that we can construct a policy, denoted by π^, that satisfies the argmax property (<ref>) and is identifiable from the data available during training (i.e., i.i.d. observations from Q^ and Q^_X). In Theorem <ref>, we then prove generalization properties of an optimal e-invariant policy π^. This generalization result requires the following two assumptions. [Generalizing environments] It holds for all S ⊆{1,…,d} that S is e-invariant S is e-invariant ∪ [e^]. Assumption <ref> imposes some commonalities between environments which allows a transfer of e-invariance from the observed to the test environments. Similar assumptions are used when proving guarantees of other invariance-based learning methods (e.g., <cit.>). [Adversarial environment] There exist e∈[e^] and S ∈^_ such that for all x ∈ it holds that max_t ∈τ_e(x, t) = max_t ∈τ_e^S(x^S, t). Assumption <ref> ensures that there exists at least one environment that does not benefit from non-e-invariant covariates and facilitates the worst-case optimality result of our proposed optimal e-invariant policy π^. Without Assumption <ref>, relying only on e-invariant covariates can become suboptimal if other (non-e-invariant) covariates are beneficial across all environments. 
For example, consider Example <ref> and assume that the coefficients γ^1_e and γ^3_e in different environments are relatively close, e.g., ∀ e ∈: γ^1_e, γ^3_e ∈ (0.9, 1). In this scenario, {X^1, X^3} is not e-invariant. Still, it is preferable to use these variables for policy learning as they provide valuable information for predicting U^1, which modifies the treatment effect. In the above setting, Assumption <ref> does not hold; it would be satisfied if there is at least one additional environment e ∈ [e^] where γ^1_e=γ^3_e=0. The reason is that in such an environment the variables X^1 and X^3 do not offer any relevant information for predicting U^1. A similar assumption, known as confounding-removing interventions, is introduced in <cit.> in the prediction setting. Assume Setting <ref> and Assumptions <ref> and <ref>. Let e ∈ be an arbitrary training environment, for all S ∈^_ let π^S ∈Π^S_, that is, a policy that satisfies (<ref>), and let S^* be a subset such that S^* ∈ A _S ∈^_^e^[ ∑_t∈τ^S_e(X^S,t) π^S(t | X) ]. Define π^π^S^*. Then, the following holds: (i) the set A is identifiable from the distributions Q^_i (for an arbitrary 1≤ i ≤ n) and Q^_X (which makes it possible to choose S^* and π^S^* during test time) and (ii) π^ is an element in (<ref>). See Appendix <ref>. Assume Setting <ref> and Assumptions <ref> and <ref>. Let π^ be as defined in Proposition <ref>. Then, the two following statements hold. * Let π_t, as defined in Section <ref>, be the policy that always chooses treatment t ∈. We have that max_t ∈^e^, π_t[Y] ≤^e^,π^[Y]. * Given Assumption <ref>, we have that ∀π∈Π: V^[e^](π^) ≥ V^[e^](π). See Appendix <ref>. Theorem <ref> provides two generalization properties of the policy π^. First, Theorem <ref> shows that π^ guarantees to outperform, in any (unseen) test environment, an optimal policy that does not use covariates X. In other words, it is always beneficial to utilize the information from e-invariant sets when generalizing treatment regimes, compared to ignoring the covariates. Second, Theorem <ref> shows that π^ maximizes the worst-case performance defined in (<ref>), that is, it outperforms all other policies when evaluating each policy in the respective worst case environment if Assumption <ref> holds true. §.§ Estimation of π^ As shown in Proposition <ref>, the policy π^ is identifiable from Q^ and Q^_X. We now turn to the problem of estimating π^ given data D^ and D^_X of Q^ and Q^_X, respectively. For now, assume we are given the collection ^_ of all e-invariant sets . We discuss how to estimate ^_ in Section <ref>. Proposition <ref> suggests a plug-in estimator of π^ based on (<ref>). Specifically, the estimate can be obtained as follows. * For all S ∈^_, compute an estimate τ̂^S for τ^S_e, e∈, by pooling the data from the training environments (as the τ^S_e's are equal across environments by effect-invariance). There is a rich literature on estimating CATE from observational data (see <cit.> for a survey), one can choose an estimator that is appropriate to a given dataset. Finally, once an estimate τ̂^S is obtained, we then plug τ̂^S into (<ref>) to construct an estimate π̂^S for π^S, that is, π̂^S satisfies for all x∈ and t ∈ that π̂^S(t | x) > 0 t ∈_t^'∈τ̂^S(x^S, t^'). We distribute the probabilities equally if there are more than one t satisfying (<ref>). * Find an optimal subset S^* among ^_, see (<ref>): Ŝ^* ∈_S ∈^_1/m∑_i=1^m[∑_t∈τ̂^S((X^_i)^S,t) π̂^S(t | X^_i)]. If there are multiple S satisfying (<ref>), we randomly choose Ŝ^* among one of them. 
* Return π̂^Ŝ^*– which was already computed in step (1) – as the estimate of π^. § INFERRING E-INVARIANT SETS We now turn to the problem of testing the e-invariance condition (<ref>) based on training observations D^ (Y_i, X_i, T_i, π_i, e_i)_i=1^n from the observed environments e_i ∈. Throughout this section, we assume a fixed initial (or training) policy π^, i.e., ∀ i ∈{1,…,n}: π^ = π_i. The initial policy π^ can either be given or estimated from the available data (see, e.g., Algorithm <ref>). Our proposed testing methods remain valid even if the initial policies (π_i)_i=1^n are different as long as they are both known and independent of all observed quantities[Specifically, we do not allow for data collected with adaptive algorithms, which we leave for future work, see Section <ref>.]. Furthermore, we consider discrete environments, = {1,…,ℓ}, and consider a binary treatment variable, = {0, 1}. One can generalize to a multi-level treatment variable by repeating the proposed procedures for each level 1, …, k with the baseline treatment t_0 = 0 and combining the test results with a multiple testing correction method. To begin with, we define for all S ⊆{1,…,d} the e-invariance null hypothesis H^_0, S : S is e-invariant , see Definition <ref>. In Section <ref>, we propose a testing procedure under the assumption that, for all S ∈^_, the functions (τ^S_e)_e∈ can be modelled by linear functions and provide its statistical guarantees. In Section <ref>, we relax the linearity assumption by using a doubly robust pseudo-outcome learner <cit.>. §.§ Linear CATE functions One way of creating e-invariance tests is to assume a parametric form of the CATEs. In this section, we rely on the following linearity assumption. [Linear CATEs] For all S ∈^_, there exist coefficients (γ^S_t )_t ∈∈^k ×S and intercepts (μ^S_t)_t ∈∈^k such that ∀ e ∈, ∀ t ∈, ∀ x ∈^S: τ^S_e(x, t) = μ^S_t + γ^S_t x. Under Assumption <ref>, we now present a testing method for the e-invariance hypothesis H^_0, S for a fixed set S ⊆{1,…,d}. Let u_e ∈{0, 1}^1×ℓ and v_t ∈{0, 1} be the one-hot encodings of the environment e ∈ and the treatment t ∈, respectively, and let α∈^1 × (1 + d), A ∈^ℓ× (1 + d), β∈^1 × (1 + S) and B ∈^ℓ× (1 + S) be model parameters. For notational convenience, we define X̃_i 1 X_i^⊤∈^(1+d)×1 and X̃^S_i 1 X^S_i^⊤∈^(1+S)×1. We consider the following (potentially misspecified) response model under treatment t ∈ and environment e ∈ αX̃ + (u_e A) X̃_main effect + (v_t β) X̃^S_treatment effect + (v_t u_e B) X̃^S_environment × treatment effect. In this model, we have that the CATE functions τ^S_e are identical across environments e ∈ if and only if B = 0. Thus, testing (<ref>) is equivalent to testing the null hypothesis H_0: B = 0. The model proposed in (<ref>) is more restrictive than Assumption <ref> as it additionally requires the main effect to be linear. To avoid this requirement, we propose using a testing methodology that explicitly allows for the misspecification in the main effect, where we employ the centered and weighted estimation method proposed by <cit.>, which uses a Neyman orthogonal score <cit.>. (A standard approach of weighted least-squares using weights 1/π^(T_i|X_i) may not yield a test with the correct asymptotic level for the null hypothesis H_0.) More precisely, we consider the following steps: * Treatment centering: We center the treatment indicators v_T_i by an arbitrary fixed policy π̃ that depends only on X^S (i.e., π̃∈Π^S). More precisely, we replace v_T_i with v_T_i - π̃(1|X_i^S). 
As an example, one could consider a fixed random policy π̃(t|x) q^t(1-q)^(1-t) for some q ∈ [0, 1]. * Weighted least squares: We estimate the model parameters via a weighted least-squares approach. The weights are defined by W_i π̃(T_i|X_i^S)/π^(T_i| X_i), where π̃ is the policy chosen in step (i) and π^ is the initial policy. The use of the above steps ensures that the estimator for treatment effects remains consistent even if the main effect is misspecified <cit.> and allows us to obtain a test with pointwise asymptotic level, see Proposition <ref>. Formally, we employ a generalized method of moments estimator. Define ζ_i(α, A, β, B) αX̃_i + (u_e_i A) X̃_i + (v_T_i - π̃(1|X_i^S))( βX̃^S_i + (u_e_i B) X̃^S_i) and ∇ζ_i ζ_iα ζ_iA ζ_iβ ζ_iB^⊤. We then estimate α̂, Â, β̂, B̂ as the solutions to the estimating equations ∑_i=1^n G_i(α, A, β, B)=0, where G_i(α, A, β, B) W_i[Y_i - ζ_i(α, A, β, B)]∇ζ_i. Under additional regularity conditions (see Appendix <ref>), we have, for a vectorized B, that √(n)(B̂ - B) 𝒩(0, [B]). This allows us to construct a hypothesis test for H_0: B=0. To this end, we estimate [B] as follows. First, for all i∈{1,…,n} define G_i G_i(α̂, Â, β̂, B̂)∈^s+q and J_i J_i(α̂, Â, β̂, B̂)∈^(s+q)× (s+q), where J_i is the Jacobian of G_i and s (1+d+S)+ℓ (1 + d) and q ℓ (1 + S). Then, the covariance matrix [B] can be consistently estimated by the lower block diagonal q × q entry of the matrix (1/n∑_i=1^n J_i)^-1(1/n∑_i=1^n G_iG_i^⊤) (1/n∑_i=1^n J_i)^-⊤, <cit.>. Let us denote the covariance estimator as . We can then use the Wald test to test the null hypothesis H_0: B = 0 using the consistent estimator of [B] (see, e.g., <cit.>). When both π̃ and π^ are given, the covariance estimate can be obtained using standard implementations (e.g., Huber-White covariance estimator <cit.>). However, when either π̃ or π^ are estimated, one needs to adjust the covariance estimator to incorporate the additional estimation error (see Supplement C in <cit.>). The full testing procedure is given in Algorithm <ref>. [Wald e-invariance test] Given a training sample D^ of size n, a subset S ⊆{1,…,d} and a significance level α∈ (0, 1). * Solve the estimating equation (<ref>), and compute the test statistic T_n nB̂^⊤B̂. * Return ψ^Wd_n(D^, S, α) := (T_n > q_α), where q_α is the (1 - α)-quantile of a chi-squared distribution with ℓ(1+S)-degrees of freedom. Proposition <ref> shows that the above results carry over to our setting in that the proposed procedure achieves pointwise asymptotic level for testing the e-invariance hypothesis H^_0, S. Assume Setting <ref> and Assumption <ref>. Let S ⊆{1,…, d} be a subset of interest, α∈ (0,1) be a significance level, and ψ^Wd_n(D^, S, α) be the Wald invariance test detailed in Algorithm <ref>. Under some regularity conditions (see Appendix <ref>), it holds that ψ^Wd_n(D^, S, α) has pointwise asymptotic level for testing the e-invariance hypothesis H^_0, S, that is, sup_∈ H^_0, Slim sup_n →∞(ψ^Wd_n(D^, S, α) = 1) ≤α. The proof follows directly from <cit.>, see Appendix <ref>. §.§ Non-linear CATE functions This section relaxes the assumption of linear CATEs (Assumption <ref>) and proposes a non-parametric approach for testing the e-invariance hypothesis H^_0, S. The key idea is to employ a pseudo-outcome approach to estimate non-linear CATE functions (see (<ref>)) and apply a conditional mean independence test based on the pseudo-outcome. In particular, we consider the Doubly Robust (DR) learner due to <cit.>. 
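Before turning to the DR-learner construction, a minimal numerical sketch of the linear procedure in Algorithm <ref> may help to fix ideas. The snippet below is only illustrative and makes several choices of our own: a binary treatment, a constant centering policy π̃(1|x) = q, a known initial policy, and a reference coding of the environment dummies (a re-parameterization of (<ref>) that keeps the design matrix full rank; the null hypothesis that all environment-by-treatment interactions vanish is unchanged). All function and variable names are ours.

import numpy as np
from scipy.stats import chi2

def wald_einvariance_test(Y, X, T, E, S, pi_init, q=0.5):
    """Sketch of the Wald e-invariance test under linear CATEs.

    Y: (n,) outcomes; X: (n, d) covariates; T: (n,) binary treatments;
    E: (n,) environment labels; S: list of covariate indices; pi_init:
    (n,) initial-policy probabilities pi(1 | X_i); q: centering probability
    of the fixed policy tilde-pi.  Returns the Wald p-value.
    """
    n = len(Y)
    Xt = np.column_stack([np.ones(n), X])           # (1, X)
    XtS = np.column_stack([np.ones(n), X[:, S]])    # (1, X^S)
    v = T - q                                       # centered treatment
    envs = np.unique(E)
    Edum = np.column_stack([(E == e).astype(float) for e in envs[1:]])
    main = np.column_stack([Xt] + [Edum[:, [j]] * Xt for j in range(Edum.shape[1])])
    treat = v[:, None] * XtS
    inter = np.column_stack([Edum[:, [j]] * treat for j in range(Edum.shape[1])])
    Z = np.column_stack([main, treat, inter])
    # weights tilde-pi(T_i | X_i^S) / pi^init(T_i | X_i)
    W = np.where(T == 1, q, 1 - q) / np.where(T == 1, pi_init, 1 - pi_init)
    ZtW = Z.T * W                                   # Z^T diag(W)
    theta = np.linalg.solve(ZtW @ Z, ZtW @ Y)       # weighted least squares
    resid = Y - Z @ theta
    # sandwich (Huber-White type) covariance of the estimated coefficients
    bread = np.linalg.inv(ZtW @ Z)
    G = Z * (W * resid)[:, None]
    cov = bread @ (G.T @ G) @ bread.T
    # Wald test on the environment x treatment block ((l-1)(1+|S|) columns here)
    k = inter.shape[1]
    b, cov_b = theta[-k:], cov[-k:, -k:]
    stat = float(b @ np.linalg.solve(cov_b, b))
    return chi2.sf(stat, df=k)

The returned p-value is then compared with the significance level α exactly as in Algorithm <ref>; when π̃ or π^init are estimated rather than known, the covariance estimate would have to be adjusted as discussed above.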
For all e ∈, let μ̅^e: ×→ denote a model of the conditional expected outcome ^e[Y | X = ·, T=·] and π̅ denote a model of the initial policy π^. Assume t_0 = 0. We consider, for all e ∈, x ∈, t ∈ and y ∈, the function O^e(x, t, y) = μ̅^e(x, 1) - μ̅^e(x, 0) + (t = 1)(y - μ̅^e(x, 1))/π̅(1 | x) - (t = 0)(y - μ̅^e(x, 0))/1 - π̅(1 | x), and generate pseudo-outcomes by plugging in the observed data. The motivation for constructing the above pseudo-outcome is that, under Setting <ref>, the conditional mean of O^e(X, T , Y) given X^S is equal to the CATE function τ^S_e if at least one of the models μ̅^e or π̅ is correct. Formally, we have the following result. Assume Setting <ref>. Let S ⊆{1,…, d}, e ∈, π∈Π and O^e(·) be the pseudo-outcome defined in (<ref>). Assume t_0 = 0. If for all x ∈ and t ∈ μ̅^e(x, t) = ^e, π^[Y | X=x, T=t] or for all x ∈ and t ∈ π̅(t | x) = π^(t | x), we have for all x ∈^S that ^e, π^[O^e(X, T, Y) | X^S=x] = τ^S_e(x,1). See Appendix <ref>. Under the assumptions of Proposition <ref>, it holds for all S⊆{1,…,p} that the null hypothesis H^_0, S is equal to ∀ e_1, e_2 ∈, ∀ x ∈^S: ^e_1, π^[O^e_1(X, T, Y) | X^S = x] = ^e_2, π^[O^e_2(X, T, Y) | X^S = x]. We can thus test for e-invariance by using an appropriate conditional mean independence test that has a correct level under the null hypothesis (<ref>). For example, one can use the generalised covariance measure[The generalised covariance measure (GCM) does not directly test for the conditional mean independence. However, it preserves the level guarantees under the conditional mean independence null hypothesis. Specifically, consider a random vector (A,B,C). It holds that [A | B, C] = [A | B] (A, B | C) = 0 [(A, B | C)] = 0, where the first equality is the conditional mean independence hypothesis and the last equality is the null hypothesis of the GCM test. ] <cit.> or the projected covariance measure <cit.>. We therefore propose the following steps to construct a non-parametric test for the e-invariance hypothesis H^_0, S. [DR-learner e-invariance test] Given a training sample D^ of size n, subset of interest S, significance level α and conditional mean independence test ϕ. Let D_1 ⊂ D^ denote a random sample of D^, and D_2 D^∖ D_1. * Fit models μ̅^e and π̅ from the data D_1. * Construct the pseudo-outcomes O_i^e_i(X_i, T_i, Y_i) = μ̅^e_i_1(X_i) - μ̅^e_i_0(X_i) + T_i Y_i - μ̅^e_1_1(X_i)/π̅(1 | X_i) - (1 - T_i) Y_i - μ̅^e_i_0(X_i)/1 - π̅(1 | X_i). for each observation (X_i,T_i,Y_i)∈ D_2 in D_2. * Apply the test ϕ on O_i^e_i(X_i, T_i, Y_i) and observations in D_2 with a significance level α and return the test result. § EXTENSION: FEW-SHOT POLICY GENERALIZATION THROUGH E-INVARIANCE In the <ref> setting, the outcome is not observed in the test environment and, as shown in Theorem <ref>, relying on e-invariant covariates is optimal under certain assumptions. This is no longer true if, in the test environment, we have access to observations not only of the covariates but also of the corresponding outcomes obtained after using a test policy in the test environment. We may then want to adapt to the test environment while exploiting the e-invariance information gathered in the training environments. In this section, we illustrate how our method could be extended to such a setup (called few-shot generalization), where we observe a large number of training observations from the training environments and a small number of test observations (including the outcome) from the test environment. 
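Before formalizing the few-shot setting, we give a brief sketch of step (2) of Algorithm <ref> above, i.e., the construction of the pseudo-outcomes. The sketch fits the outcome models μ̅^e with a random forest (as in the experiments reported later), fits one model per treatment arm, and assumes that the initial policy is known; these choices and all names are ours.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dr_pseudo_outcomes(Y, X, T, E, pi_init, fit_idx, eval_idx):
    """Pseudo-outcomes O_i for the DR-learner e-invariance test (sketch).

    The models mu-bar^e are fit per environment (and per treatment arm) on
    one half of the data (fit_idx) and the pseudo-outcomes are formed on the
    other half (eval_idx); pi_init contains pi^init(1 | X_i).
    """
    O = np.full(len(Y), np.nan)
    for e in np.unique(E):
        tr = fit_idx[E[fit_idx] == e]      # fitting data of environment e
        te = eval_idx[E[eval_idx] == e]    # evaluation data of environment e
        mu1 = RandomForestRegressor().fit(X[tr][T[tr] == 1], Y[tr][T[tr] == 1])
        mu0 = RandomForestRegressor().fit(X[tr][T[tr] == 0], Y[tr][T[tr] == 0])
        m1, m0 = mu1.predict(X[te]), mu0.predict(X[te])
        p1 = pi_init[te]
        O[te] = (m1 - m0
                 + T[te] * (Y[te] - m1) / p1
                 - (1 - T[te]) * (Y[te] - m0) / (1 - p1))
    return O

The resulting pseudo-outcomes, together with the environment labels and X^S of the evaluation half, are then passed to a conditional mean independence test with valid level, for instance the (weighted) generalised covariance measure.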
Few-shot Assume Setting <ref> and that we are given n∈ℕ training observations D^ (Y^_i, X^_i, T^_i, π^_i, e^_i)_i=1^n from the observed environments e^_i ∈ and m∈ℕ test observations D^ (Y^_i, X^_i, T^_i, π^_i)_i=1^m from a test environment e^∈ and assume that m ≪ n. The goal of few-shot policy generalization is to find a policy π∈Π that maximizes the expected outcome in the test environment e^ by exploiting the common information shared between the training and test environments. We consider using Assumption <ref> as the commonalities shared between the environments. In what follows, we propose a constrained optimization approach to learn a policy that aims to maximize the expected outcome in the test environment while exploiting the e-invariance condition. An optimal policy π_^* in the test environment e^ distributes all its mass on treatments which maximize the CATE in the test environment – conditioned on the covariates X. That is, an optimal policy π_^* satisfies for all x∈ and t ∈ that π_^*(t|x) > 0 t ∈_t^'∈τ_e^(x, t^'). Therefore, learning an optimal policy π_^* can be reduced to learning the CATE function τ_e^ in the test environment. As mentioned in Section <ref>, the problem of learning τ_e^ from observational data is a well-studied problem. Here, we abstract away from a specific method and assume that we are given a function class ℋ⊆{τ|τ:×→} and a loss function ℓ: ×××Π×→ such that τ̂∈_τ∈∑_i=1^m ℓ(Y^_i, X^_i, T^_i, π^_i, τ) is a consistent estimator of τ_e^ as m →∞. Now, we propose to leverage Assumption <ref> when estimating τ_e^ in the test environment. In particular, by Assumption <ref> we have for all S ∈^_ and for any fixed e ∈ that ∀ x ∈^S, ∀ t ∈: τ^S_e(x, t) = ^e^[τ_e^(X,t) | X^S = x]. Let S ∈^_ and for all x ∈^S, t ∈ and τ∈, we define h^S(τ, x,t) ^e^[τ(X,t) | X^S =x] and τ^S_(x, t) τ_e^S(x,t) (for an arbitrary e ∈). We then consider the following constrained optimization τ̂^S ∈ _τ∑_i=1^m ℓ(Y^_i, X^_i, T^_i, π^_i, τ) s.t. τ∈ and τ^S_(·, ·) ≡ h^S(τ, ·, ·). If there are multiple S ∈^_ satisfying e-invariance, that is, |^_|>1, one may choose an optimal set S^* as in (<ref>). We now impose the following separability assumption on the CATE function τ_e^, which allows us to find a solution to the optimization problem (<ref>). [Separability of CATEs] Let e^∈ be a test environment, S ∈^_ and N{1,…,d}∖ S. There exist function classes ⊆{^S ×→} and ⊆{^N ×→} and a pair of functions f ∈ and g ∈ such that ∀ x ∈, ∀ t ∈: τ_e^(x, t) = f(x^S,t) + g(x^N, t). Under Assumptions <ref> and <ref>, there exists f ∈ and g ∈ such that for all x ∈^S and t ∈ τ^S_(x, t) = ^e^[τ_e^(X,t)| X^S=x] = ^e^[f(X^S,t) + g(X^N, t) | X^S = x] =f(x,t) + ^e^[g(X^N, t) | X^S =x], which is equivalent to f(x,t) = τ^S_(x, t) - ^e^[g(X^N, t) | X^S =x]. Combining (<ref>) and (<ref>), we then have for all x ∈ and t ∈ that τ_e^(x, t) = τ^S_(x^S, t) - ^e^[g(X^N, t) | X^S =x^S] + g(x^N, t). Instead of optimizing over the function class , we now optimize over the function class by replacing τ in (<ref>) with τ_g^S: (x, t) ↦τ^S_(x^S, t) - ^e^[g(X^N, t) | X^S =x^S] + g(x^N, t). More specifically, we consider the unconstrained optimization ĝ∈_g ∈∑_i=1^m ℓ(Y^_i, X^_i, T^_i, π^_i, τ_g^S). Then, τ^S_ĝ is a solution to the constraint optimization (<ref>). In practice, we estimate the conditional expectation ^e^[g(X^N, t) | X^S = ·] by an estimator q̂_g,t. 
Intuitively, if the function class (see Assumption <ref>) has a lower complexity compared to , and q̂_g,t has good finite-sample properties, one may expect an improvement (e.g., τ^S_ĝ has a lower variance) using this approach over an estimator that does not take into account the training sample. Without additional assumptions on , the optimization problem (<ref>) requires the computation of q̂_g,t at each iteration (since q̂_g,t depends on g). In Appendix <ref>, we present an example to demonstrate that the optimization can simplify when imposing an additional assumption, such as linearity. § EXPERIMENTS This section presents the empirical experiments conducted on both simulated and real-world datasets. Firstly, we demonstrate through simulations that the testing methods introduced in Section <ref> provide level guarantees that hold empirically in finite samples. Secondly, we demonstrate the effectiveness of our e-invariance approach in a semi-real-world case study of mobile health interventions, where it outperforms the baselines in terms of generalization to a new environment. §.§ Testing for e-invariance (simulated data) We now conduct simulated experiments to validate the e-invariance tests proposed in Section <ref>. We generate datasets of size n ∈{1000, 2000, 4000, 8000} according to the SCM in Example <ref> with two training environments = {0,1}. Each of the noise variables (ϵ_U^1, ϵ_U^2, ϵ_X^1, ϵ_X^2, ϵ_X^3, ϵ_T, ϵ_Y) is independently drawn from a standard Gaussian distribution. The environment-specific parameters (γ^1_e, γ^2_e, γ^3_e, μ_e) are drawn independently from a uniform distribution on [-3, 3]. As for the initial policy, we consider a policy that depends on the full covariate set {X_1, X_2, X_3}. More precisely, for all x ∈, the initial policy π^ selects a treatment according to π^(T = 1 | X = x) = 1/(1 + e^-(0.5 + x^1 - 0.5 x^2 + 0.3 x^3)). Moreover, we explore a scenario where the assumption of linear main effects in Equation (<ref>) is violated. Specifically, we modify the structural assignment of Y in Example <ref> as Y μ_e + U + X^2 - 0.5 X^2 X^3 + X^3 + T (1 + 0.5X^2 + 0.5 U) + ϵ_Y. Lastly, we also consider a setting where the treatment effect itself is nonlinear. In this case, the structural assignment for Y is defined as Y μ_e + U + X^2 - 0.5 X^2 X^3 + X^3 + T (1 + 0.5(X^2)^2 + 0.5(X^2)^3 + 0.5 U) + ϵ_Y. We then conduct the Wald and DR-learner e-invariance tests (Wald test and DR test for short, respectively) for all candidate subsets according to Algorithm <ref> and Algorithm <ref>, where we assume that the initial policy π^ is given. For the DR test, we estimate the conditional mean function (μ̅^e) with a random forest <cit.> and use the weighted generalised covariance measure <cit.> as the final test ϕ in Algorithm <ref>. Figure <ref> reports the rejection rates at the 5% significance level for each candidate set under various settings. Recall that in Example <ref>, {X^2} is the only e-invariant set. The results indicate that, for finite sample sizes, both of the proposed methods hold the correct level at 5% in all settings (the rejection rates for the e-invariant set {X^2} are approximately 5% in all settings) except in the bottom left setting: here, the linear CATEs assumption (Assumption <ref>) is violated and the Wald test fails to maintain the correct level. 
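(For completeness, the data-generating process behind these rejection rates can be reproduced with a short script such as the one below; it follows the description above, the nonlinear variants are obtained by replacing the structural assignment of Y accordingly, and the seed is arbitrary.)

import numpy as np

rng = np.random.default_rng(0)

def sample_environment(params, n, rng):
    """Draw n observations from the SCM of Example <ref> for one environment.

    params = (gamma1, gamma2, gamma3, mu) are the environment-specific
    coefficients; all noise variables are standard Gaussian.
    """
    g1, g2, g3, mu = params
    U1, U2 = rng.normal(size=n), rng.normal(size=n)
    X3 = g3 * U1 + rng.normal(size=n)
    X2 = g2 * U2 + rng.normal(size=n)
    X1 = X2 + g1 * U1 + rng.normal(size=n)
    # initial policy used in the simulations
    p1 = 1.0 / (1.0 + np.exp(-(0.5 + X1 - 0.5 * X2 + 0.3 * X3)))
    T = rng.binomial(1, p1)
    Y = T * (1 + 0.5 * X2 + 0.5 * U1) + mu + U1 + U2 + X2 + X3 + rng.normal(size=n)
    return np.column_stack([X1, X2, X3]), T, Y, p1

# two training environments, coefficients drawn independently from U[-3, 3]
data = {e: sample_environment(rng.uniform(-3, 3, size=4), n=2000, rng=rng)
        for e in (0, 1)}

The two e-invariance tests are then applied to each candidate subset of {X^1, X^2, X^3}, as described above.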
When the linear main effect and treatment effect assumptions in (<ref>) are specified correctly (top row), the Wald test shows superior performance compared to the DR test (that is, the Wald test rejects the non-e-invariant sets more often). When the linear main effect assumption is violated (middle row), the Wald test remains valid but the power of the test drops significantly. The Wald test, nonetheless, slightly performs better than the DR test in terms of test power in this setting. §.§ A case study using HeartSteps V1 dataset We apply our proposed approach to the study of a mobile health intervention for promoting physical activity called HeartSteps V1 <cit.>. HeartSteps V1 was a 42-day micro-randomized trial with 37 adults that aimed to optimize the effectiveness of two intervention components for promoting physical activity. One of the interventions was contextual-aware activity suggestions, delivered as push notifications, which aimed to encourage short bouts of walking throughout the day. Each participant was equipped with a wearable tracker that linked to the mobile application, which gathered sensor data and contextual information about the user. This information was used to tailor the content of activity suggestions that users received and to determine whether the user was available to receive an activity suggestion (e.g., if the sensor data indicated that the user was currently walking, they would not be sent a suggestion). The application randomized the delivery of activity suggestion up to five times a day at user-selected times spaced approximately 2.5 hours apart. If the contextual information indicated that the person was unavailable for the intervention, no suggestion was sent. In this paper, we consider users as environments. We filter out users that had zero interactions with the application, resulting in a total of 27 users. For each user u ∈{1,…,27}, we have the user's trajectory (X_u, 1, T_u, 1, Y_u, 1),…,(X_u, ℓ_u, T_u, ℓ_u, Y_u, ℓ_u) of size ℓ_u (on average ℓ_u is 160), where the covariates X_u,i are the contextual information about the user at time step i, the treatment T_u,i∈{0,1} is whether to deliver an activity suggestion, and the outcome Y_u,i is the log transformation of the 30-minute step count after the decision time. In this analysis, we make Assumption <ref> and consider the following approximation for the conditional mean of Y_u,i: α_u^⊤ g(X_u,i) + β_u^⊤ f(X_u,i) T_u,i, where g(X_u,i) is a (known) baseline feature vector and f(X_u,i) is a (known) feature vector for the treatment effect. We allow the main effect to be misspecified. As for the feature vectors, we consider the same features (with minor modifications[We replace the dosage variable with the bucketized decision time variable to account for potential non-linear time dependency.]) as in <cit.>; the vector f(X_u,i) contains Decision Bucket (DB) (bucketized decision time), Application Engagement (AE) (indicating how frequently users interact with the application), Location (LC) (indicating whether users are at home, at work or somewhere else) and Variation Indicator (VI) (the variation level of step count 60 minutes around the current time slot in past 7 days). The baseline vector g(X_u,i) contains f(X_u,i) along with the prior 30-minute step count, the previous day's total step count and the current temperature. 
Since, for a given user, the outcome model (<ref>) does not change over time, we can combine all users' trajectories and obtain the combined observations under multiple environments; that is, we have the dataset D (X_i, T_i, Y_i, e_i)_i = 1^n, with n = ∑_u ∈{1,…,27}ℓ_u, collected from multiple environments (users) , where e_i ∈ for all i ∈{1,…,n}. In particular, we do not account for potential temporal dependencies that are not captured by the bucketized decision times. In practice, one may allow for dependence across time in the observations (X_u, i, T_u, i, Y_u, i)_i=1^ℓ_u within each user, which we leave for future work, see Section <ref>. §.§ Inferring e-invariant sets (HeartSteps V1) We begin our analysis on the HeartSteps V1 data by conducting the Wald e-invariance test detailed in Algorithm <ref> to find subsets of the treatment effect feature vector f(X) that satisfy the e-invariant condition (<ref>). As a comparison, we also apply the invariance test proposed in <cit.>, which tests for a full-invariance instead of our proposed e-invariance (see Figure <ref>). Figure <ref>(Left) reports the p-values of all subsets for the full-invariance and e-invariance tests. The p-values for the full-invariance are all below the 5% level and hence there is no subset that satisfies the full-invariance hypothesis. However, we find several subsets that satisfy the e-invariance condition (those with p-values of the e-invariance hypothesis greater than the 5% level). Interestingly, all subsets that contain Application Engagement (AE) have p-values close to zero, suggesting that AE is a variable that renders the conditional treatment effect unstable between environments if included in the model. We report all the subsets for which we accept the e-invariance hypothesis at the 5% significance level in Figure <ref>(Right). The above finding demonstrates that the relaxed notion of invariance that we propose can be beneficial in practice. The full-invariance condition may be too strict in that there is no full-invariant set. But if our goal is to learn a generalizable policy, it may suffice to test for the weaker notion of e-invariance, which the following section investigates using semi-real data. §.§ Zero-shot generalization (augmented HeartSteps V1) As the HeartSteps V1 study has been completed, it is not possible to implement and test a proposed policy on a new subject. In this section, we instead conduct a simulation study using HeartSteps V1 data to illustrate the use of e-invariance for zero-shot generalization, see Section <ref>. To evaluate the performance of a policy, we consider `leave-one-environment-out' cross validation. Specifically, we first choose e ∈ as a test environment (user) and split the dataset D into the test set D^{(X^_i, T^_i, Y^_i, e^_i)}_i=1^n_ and the training set D^{(X^_i, T^_i, Y^_i, e^_i)}_i=1^n_, where e^_i = e and e^_i ∈∖{e} for all i. We then conduct the training and testing procedure as follows. Training phase: Using the training data D^, we find all sets that are not rejected by the Wald e-invariance test detailed in Algorithm <ref>. Using the inferred e-invariant sets, we then compute an estimate of π^ as discussed in Section <ref>, where we use the R-learner due to <cit.> as the CATE estimator – based on the implementation of the Python package <cit.>. As a baseline, we include an optimal policy which utilizes all variables in f(X) (denoted as `full-set'). 
This baseline is computed by pooling all data from the training environments and fitting the R-learner CATE estimator on the complete covariate set. Additionally, we include a uniformly random policy denoted as `random' as another baseline for comparison. To illustrate this procedure for e=1, i.e., consider the set of training users = {2,…,27}. Using the observations from , we apply Algorithm <ref> to obtain the inferred e-invariant sets ^_ = {{DB}, {DB, VI}, {VI}, {DB, LC, VI}}. For each S ∈^_, we then train a policy π̂^S as in (<ref>) using the R-learner as the CATE estimator and choose an optimal Ŝ^* as in (<ref>). We then use π^Ŝ^* as the final estimate of π^. Testing phase: To perform policy evaluation, we create a semi-real test environment. To do so, we follow <cit.>. Given a test dataset D^, the value V(π) of a policy π∈Π is computed by the following procedure. (1) Fit a regression model (<ref>) on D^ Y^_i = α_^⊤ g(X^_i) + β_^⊤ f(X^_i) T^_i + ϵ_i, and obtain pairs of covariates and residuals {(X^_i, ϵ̂_i)}_i=1^n_ and parameters α̂_ and β̂_. (2) Generate more pairs to obtain a total of 1000 observations {(X̅^_i, ϵ̅_i)}_i=1^1000 by uniformly sampling with replacement from the orignal pairs. (3) For each i, the treatment T̅^_i is selected based on the covariates X̅^_i according to π. (4) For each i, the reward Y̅^_i is defined by Y̅^_i = α̂_^⊤ g(X̅_i^) + β̂_^⊤ f(X̅_i^) T̅_i + ϵ̅_i, where the coefficients α̂_ and β̂_ are obtained from the regression model fitted in step (1). The value is then given as the average reward: V̂(π) = 1/1000∑_i=1^1000Y̅^_i. The performance of a policy π is then computed as V̂(π) - V̂(π_0), where π_0 is the policy that always selects to not deliver a suggestion. This corresponds to an empirical version of the expected relative reward as in (<ref>). Figure <ref>(Left) shows the performance of different policies trained on the data available during training. Our proposed approach (e-inv) shows a slight improvement over the baseline approaches in terms of the mean and median performances over all users. Furthermore, as presented in Figure <ref>(Right), the e-invariance policy π^ yields higher relative reward comparing to the policy that uses all the variables in f(X) in the majority of users (17 out of 27 users). We use the Wilcoxon signed-rank test <cit.> to compare the performance of the proposed e-inv policy with that of the full-set policy. It shows a p-value of 0.008, indicating that the improvement is statistically significant. § CONCLUSION AND FUTURE WORK This work addresses the challenge of adjusting for distribution shifts between environments in the context of policy learning. We propose an approach that leverages e-invariance, which is a relaxation of the full invariance assumption commonly used in causal inference literature. We show that despite being a weaker assumption, e-invariance is sufficient for building policies that generalize better to unseen environments compared to other policies. That is, under suitable assumptions, an optimal e-invariance policy is worst-case optimal. Additionally, we present a method for leveraging e-invariance information in the few-shot generalization setting, when a sample from the test environment is available. To enable the practical use of e-invariance, we propose two testing procedures; one to test for e-invariance in linear and one in nonlinear model classes. Moreover, we validate the effectiveness of our policy learning methods through a semi-real-world case study in the domain of mobile health interventions. 
Our experiments show that an optimal policy based on an e-invariant set outperforms policies that rely on the complete context information when it comes to generalizing to new environments. There are several promising directions for future research. It might be worthwhile to develop e-invariance testing procedures that can handle more complex temporal dependencies, especially when the data is collected from adaptive algorithms such as contextual bandit algorithms. Existing works have proposed inference methods to handle such scenarios <cit.>, but how to incorporate these methods effectively into our framework remains an open question. Another interesting area of future work is how best to use the e-invariant set S^* (see (<ref>)) in order to warm-start a contextual bandit algorithm. In the digital health field, one frequently conducts a series of optimization trials (each on a set of different users) in the process of optimizing a full digital health intervention. The data from each trial is used to inform the design of the subsequent trial. In the case of HeartSteps, 3 trials (V1, V2 and V3) were conducted beginning with HeartSteps V1. HeartSteps V2 & V3 deployed a Bayesian Thompson-Sampling algorithm <cit.> which uses a prior distribution on the parameters to warm-start the algorithm. Clearly the knowledge of an optimal e-invariant set S^* should guide the formation of the prior. Determining the most effective approach to achieve this is still an open question. Lastly, our work also contributes to the field of causal inference by introducing a relaxation of the full invariance assumption. We believe that there are other scenarios where the full invariance assumption is too restrictive, and a relaxation of the assumption may be sufficient to address the task at hand. Further investigating the potential for relaxation in different causal inference settings would be a promising future research direction. § ACKNOWLEDGMENTS We thank Eura Shin for providing the code used to preprocess the HeartSteps V1 dataset. During part of this project SS and JP were supported by a research grant (18968) from VILLUM FONDEN. NP is supported by a research grant (0069071) from Novo Nordisk Fonden. SM's research is supported by the National Institutes of Health grants P50DA054039 and P41EB028242. PK is supported by the National Institutes of Health grants R01HL125440, U01CA229445 and R01LM013107. § PROOFS §.§ Proof of Proposition <ref> We split the proof into three parts. First (Part 1), we show that the expected outcome function can be decomposed into an effect-modification term that depends on the treatment and a main-effect term that does not depend on the treatment. We then proceed and prove the `only if' part of the main result in Part 2 and the `if' part in Part 3. Part 1: We show the following lemma. Assume Setting <ref>. Let S ⊆{1,…,d} be an arbitrary subset and t_0∈ be the baseline treatment. Then, there exists a pair of functions κ_S: ^S ××→ and ν_S: ^S ×→ such that ∀ e ∈, ∀ x ∈^S, ∀ t ∈: ^e, π_t[Y | X^S = x] = (t ≠ t_0)κ_S(x, t, e) + ν_S(x, e). Fix e ∈ and t ∈, and define δ_S(·, t, e) ^e,π_t[Y| X^S=·] and ν_S(·, e) ^e,π_t_0[Y| X^S=·]. It then holds for all x ∈^S that ^e,π_t[Y| X^S=x] = (t ≠ t_0)(δ_S(x, t, e) - ν_S(x, e)) + ν_S(x, e). We then define κ_S(·, t, e) δ_S(·, t, e) - ν_S(·, e), which concludes the proof. Part 2: Assume a subset S ⊆{1,…,d} is e-invariant . Fix e_0∈ as a reference environment. 
By Lemma <ref>, there exists a pair of functions κ_S: ^S ××→ and ν_S: ^S ×→ such that for all e∈, x∈^S and t∈ τ_e^S(x,t) =((t ≠ t_0)κ_S(x, t, e) + ν_S(x, e)) - ν_S(x, e) =(t ≠ t_0)κ_S(x, t, e). Next, we define the function ψ̃_S: ^S ×→ for all x∈^S and t∈ by ψ̃_S(x,t)κ_S(x, t, e_0). Now, since S is e-invariant it holds for all ∀ e ∈, x∈^S and t∈ that (t ≠ t_0)ψ̃_S(x,t) = (t ≠ t_0)κ_S(x, t, e). Then, combining (<ref>) and (<ref>) implies that (<ref>) is true. Part 3: Assume (<ref>) holds for a subset S ⊆{1,…,d}. It then holds for all e,h∈, x∈^S and t∈ that τ^S_e(x,t)-τ^S_h(x,t) =(ψ_S(x,t) + ν_S(x,e) - ψ_S(x,t_0) - ν_S(x,e))- (ψ_S(x,t) + ν_S(x,h) - ψ_S(x,t_0)- ν_S(x,h)) =0, which proves that S is e-invariant . §.§ Proof of Proposition <ref> From the SCM (<ref>), we have for all x ∈^_f,X that ^e, π_t[Y | X^_f,X=x] = ^e[f(X^_f,X, U^_f,U, t) | X^_f,X=x] + ^e[g_e(X, U, ϵ_Y) | X^_f,X=x] Using the assumption (i) that U^_f,U X^_f,X in ^e_X,U for all e ∈, we have^e, π_t[Y | X^_f,X=x] =^e[f(x, U^_f,U, t)] + ^e[g_e(X, U, ϵ_Y) | X^_f,X=x], where a formal proof for the equality (<ref>) is given, for example, in <cit.>. Next, using the assumption (ii) that ^e_U^_f,U are identical across e ∈, we can drop the dependency on e from the component ^e[f(x, U^_f,U, t)] in (<ref>) and have that ^e, π_t[Y | X^_f,X=x] =[f(x, U^_f,U, t)] + ^e[g_e(X, U, ϵ_Y) | X^_f,X=x]. Thus, by Proposition <ref>, _f,X is e-invariant . §.§ Proof of Proposition <ref> We begin with the proof of the first statement (i) of Proposition <ref>. First, we show that the collections of policies (Π^S_)_S ∈^_ are identifiable from Q^_i (for an arbitrary 1≤ i ≤ n). Fix (an arbitrary) f ∈ and S ∈^_. Let π^S ∈Π^S_. Then, π^S satisfies π^S(t | x) > 0 t ∈_t^'∈1∑_e ∈τ^S_e(x^S, t^') = _t^'∈τ^S_f(x^S, t^'). Thus, the identifiability of Π^S_ depends on the identifiability of τ^S_f. Fix i ∈{1,…,n}. Recall that in Setting <ref> we assume ∀ x ∈, t ∈: π_i(t | x) > 0. It then holds for all x ∈^S and t ∈ that ^e_i, π_t[Y_i | X^S_i = x] = ^e_i[^e_i, π_t[Y_i | X_i] | X^S_i = x] (*)=^e_i[^e_i, π_i[(T_i=t)π_i(T_i | X_i) Y_i | X_i ] | X^S_i=x] =^e_i, π_i[(T_i=t)π_i(t | X_i) Y_i | X^S_i =x ], where the equality (*) holds by definition of π_t. Since the right-hand side of (<ref>) is the expectation Q^_i, the quantity ^e_i, π_t[Y_i | X^S_i = x] is identifiable from Q^_i. Next, we have ^e_i, π_t[Y_i | X^S_i = x] - ^e_i, π_0[Y_i | X^S_i = x] = ^e_i, π_t[Y | X^S = x] - ^e_i, π_0[Y | X^S = x] = τ^S_e_i(x, t) = τ^S_f(x, t) since S ∈^_. From (<ref>), we then have that τ^S_f is identifiable from Q^_i and therefore Π^S_ is identifiable from Q^_i. Consequently, the collection A _S ∈^_^e^[ ∑_t∈τ^S_e(X^S,t) π^S(t | X) ] is identifiable from Q^_i and Q^. Next, we show the proof of the second statement (ii) of Proposition <ref>. Fix e^∈, (an arbitrary) f ∈ and let S^* ⊆{1,…,d} be a subset that satisfies S^* ∈_S ∈^_^e^[∑_t∈τ^S_f(X^S,t) π^S(t | X^S)]. Next, we recall the defnition Π^_ = {π∈Π|∃ S ∈^_ s.t. π∈Π^S_} and let π^♢_e^∈_π∈Π^_^e^, π[Y]. Then, using that π^♢_e^∈Π^_, choose S^♢∈^_ such that π^♢_e^∈Π^S^♢_. 
We have ^e^, π^♢_e^[Y] - ^e^, π_t_0[Y] = ^e^[∑_t∈(^e^,π_t[Y | X^S^♢] - ^e^,π_t_0[Y | X^S^♢])π^♢_e^(t | X^S^♢)] = ^e^[∑_t∈τ^S^♢_e^(X^S^♢,t) π^♢_e^(t | X^S^♢)] = ^e^[∑_t∈τ^S^♢_f(X^S^♢,t) π^♢_e^(t | X^S^♢)] by Assumption <ref> = ^e^[∑_t∈τ^S^♢_f(X^S^♢,t) π^S^♢(t | X^S^♢)] since π^♢_e^∈Π^S^♢_ ≤max_S ∈^_^e^[∑_t∈τ^S_f(X^S,t) π^S(t | X^S)] = ^e^[∑_t∈τ^S^*_f(X^S^*,t) π^S^*(t | X^S^*)] = ^e^[∑_t∈τ^S^*_e^(X^S^*,t) π^S^*(t | X^S^*)] by Assumption <ref> = ^e^[∑_t∈(^e^,π_t[Y | X^S^*] - ^e^,π_t_0[Y | X^S^*])π^S^*(t | X^S^*)] = ^e^, π^S^*[Y] - ^e^, π_t_0[Y]. We therefore have that ^e^, π^♢_e^[Y] ≤^e^, π^S^*[Y], which concludes the proof of the second statement (ii) of Proposition <ref>. §.§ Proof of Theorem <ref> Let e^∈ be a test environment, π^ be a policy satisfying (<ref>) and, for all S ⊆{1,…,d}, Π^S_ be the set policies satisfying (<ref>). We now prove the first statement, see Theorem <ref>. By definition, there exists S^* ∈^_ such that π^∈Π^S^*_. It then holds that ^e^, π^[Y] - ^e^, π_t_0[Y] = ^e^[∑_t∈(^e^,π_t[Y | X^S^*] - ^e^,π_t_0[Y | X^S^*])π^(t | X^S^*)] = ^e^[∑_t∈τ^S^*_e^(X^S^*, t)π^(t | X^S^*)]. Fix e^∈. We have ^e^, π^[Y] - ^e^, π_t_0[Y] = ^e^[∑_t∈τ^S^*_e^(X^S^*, t)π^(t | X^S^*)] = ^e^[∑_t∈τ^S^*_e^(x^S^*, t) π^(t | X^S^*)] by Assumption <ref> = ^e^[max_t∈τ^S^*_e^(x^S^*, t)] by the definition of Π^S^*_ = ^e^[max_t∈τ^S^*_e^(X^S^*, t)] by Assumption <ref> ≥max_t ∈(^e^[τ^S^*_e^(X^S^*, t)]) = max_t ∈(^e^, π_t[Y] - ^e^,π_t_0[Y]) by the tower property This implies, ^e^, π^[Y] ≥max_t ∈^e^, π_t[Y], which concludes the proof of Theorem <ref>. Next, we prove the second statement, see Theorem <ref>. Recall that [e^] { e∈|^e_X = Q^_X }. From Assumption <ref>, there exists an environment f ∈ [e^] and ∈^_ such that ∀ x ∈: max_t ∈τ_f(x, t) = max_t ∈τ^_f(x^, t). We have for all S ∈^_ that ^f[max_t ∈τ^_f(X^, t)] = ^f[max_t ∈τ_f(X, t)] from (<ref>) = ^f[^f[max_t ∈τ_f(X ,t)| X^S]] ≥^f[max_t ∈^f[τ_f(X ,t) | X^S]] = ^f[max_t ∈τ^S_f(X^S ,t)]. Now, we have for all e ∈ [e^] and for all S ∈^_ ^e[max_t ∈τ^_e(X^, t)] = ^f[max_t ∈τ^_f(X^, t)] by Assumption <ref> and f ∈ [e^] ≥^f[max_t ∈τ^S_f(X^S ,t)] from (<ref>) = ^e[max_t ∈τ^S_e(X^S ,t)] by Assumption <ref> and f ∈ [e^]. Next, we recall the definition Π^_ = {π∈Π|∃ S ∈^_ s.t. π∈Π^S_} and that π^∈_π∈Π^_^e^, π[Y]. Then there exists S^* ∈^_ such that π^∈Π^S^*_. We therefore have for all e ∈ [e^] that ^e, π^[Y] - ^e, π_t_0[Y] = ^e[∑_t∈(^e,π_t[Y | X^S^*] - ^e,π_t_0[Y | X^S^*])π^(t | X^S^*)] = ^e[∑_t∈τ^S^*_e(X^S^*, t)π^(t | X^S^*)]. Fix e^∈, we have for all e ∈ [e^] ^e, π^[Y] - ^e, π_t_0[Y] = ^e[∑_t∈τ^S^*_e^(X^S^*, t)π^(t | X^S^*)] by (<ref>) and Assumption <ref> = ^e[max_t ∈τ^S^*_e^(X^S^*, t)] by the definition of Π^S^*_ = ^e[max_t ∈τ^S^*_e(X^S^*, t)] by Assumption <ref>. Let π^S^♢∈Π^S^♢_, we then have that ^e^[max_t ∈τ^S^*_e^(X^S^*, t)] = ^e^, π^[Y] - ^e^, π_t_0[Y] from (<ref>) ≥^e^, π^S^♢[Y] - ^e^, π_t_0[Y] = ^e^[∑_t∈(^e^,π_t[Y | X^S^♢] - ^e^,π_t_0[Y | X^S^♢])π^S^♢(t | X^S^♢)] = ^e^[∑_t∈τ^S^♢_e^(X^S^♢, t)π^S^♢(t | X^S^♢)] = ^e^[∑_t∈τ^S^♢_e^(X^S^♢, t)π^S^♢(t | X^S^♢)] by Assumption <ref> = ^e^[max_t ∈τ^S^♢_e^(X^S^♢, t)] by the definition of Π^S^♢_ = ^e^[max_t ∈τ^S^♢_e^(X^S^♢, t)], by Assumption <ref> where the above inequality holds because π^∈_π∈Π^_^e^, π[Y] = _π∈Π^_^e^, π[Y] - ^e^, π_t_0[Y]. Combining the two inequalities (<ref>) and (<ref>), we then have that ^e^[max_t ∈τ^S^*_e^(X^S^*, t)] = ^e^[max_t ∈τ^S^♢_e^(X^S^♢, t)]. 
Then, from (<ref>) and since e^∈ [e^] and S^♢ is e-invariant ∪ [e^] (by Assumption <ref>), we have for all e ∈ [e^] ^e[max_t ∈τ^S^*_e(X^S^*, t)] = ^e[max_t ∈τ^S^♢_e(X^S^♢, t)]. We are now ready to prove the main statement of Theorem <ref>. V^[e^](π^) = inf_e ∈ [e^](^e, π^[Y] - ^e, π_t_0[Y]) = inf_e ∈ [e^]^e[max_t∈τ^S^*_e(X^S^*, t)] from (<ref>).By the definition of [e^] and Assumption <ref>, we then have V^[e^](π^) = ^f[max_t∈τ^S^*_f(X^S^*, t)] = ^e^f[max_t ∈τ^S^♢_e^f(X^S^♢, t)] from (<ref>). Finally, we show that for all π∈Π, V^[e^](π) is bounded above by V^[e^](π^). V^[e^](π) = inf_e ∈ [e^] (^e^, π[Y] - ^e^, π_t_0[Y]) ≤^f, π[Y] - ^f, π_t_0[Y] since f ∈ [e^] = ^f[∑_t∈(^f,π_t[Y | X] - ^f,π_t_0[Y | X])π(t | X)] = ^f[∑_t∈τ_f(X, t)π(t | X)] ≤^f[max_t ∈τ_f(X, t)] = ^f[max_t ∈τ^S^♢_f(X^, t)] from (<ref>) = V^[e^](π^) from (<ref>), which concludes the proof. §.§ Proof of Proposition <ref> Let S ⊆{1,…,d}, α∈ (0,1), X̃1 X^⊤ and X̃^S 1 X^S^⊤ and B̂ be the estimator solving the equation ∑_i^n G_i(α, A, β,B) = 0. Assume Setting <ref> and assume the following regularity conditions (these are similar to the ones required by <cit.> with the difference that we require them to hold for all e ∈). [Regularity conditions] For all e ∈ it holds that (i) ^e, π^[Y^4]<∞ and max_j∈{1,…,d+1}^e[(X̃^j)^4]<∞, (ii) the matrices ^e[X̃^S(X̃^S)^⊤] and ^e[∑_t ∈π̃(t |X^S)[ X̃; u_eX̃; v_t - π̃(1|X^S); (v_t - π̃(1|X^S))u_e ][ X̃; u_eX̃; v_t - π̃(1|X^S); (v_t - π̃(1|X^S))u_e ]^⊤] are invertible. From Proposition 3.1 in <cit.>, we have that √(n)(B̂ - B) 𝒩(0, [B]), where defined in (<ref>) is a consistent estimator of [B]. Next, from Theorem 8.3 in <cit.>, we have that T_n nB̂B̂χ^2_(⃗B). Let q_α be the (1 - α)-quantile of χ^2_(⃗B) and ψ^Wd_n(D^, S, α) := (T_n > q_α). We can then conclude that sup_∈ H^_0, Slim sup_n →∞(ψ^Wd_n(D^, S, α) = 1) ≤α. §.§ Proof of Proposition <ref> For all x ∈, t ∈, w ∈ and e ∈, let Z^e_w(x, t, y) μ̅^e(x, w) + (t = w)(y - μ̅^e(x, w))/π̅(w | x). We now show that for all S ⊆{1,…,d}, ^e, π^[Z^e_w(X, T, Y) | X^S] = ^e,π_w[Y | X^S] if one of the models is correct. * Assume that μ̅^e is correct, i.e., μ̅^e(x, t) = ^e[Y | X=x, T=t] for all x ∈, t ∈ and e ∈. We then have ^e,π^[(T = w)(Y - μ̅^e(X, w))/π̅(w | X)| X^S] = ^e[^e,π^[(T = w)(Y - μ̅^e(X, w))/π̅(w | X)| X ] | X^S ] =^e[π^(w | X)/π̅(w | X)^e[(Y - μ̅^e(X, w)) | X, T = w ] | X^S ] = 0. Next, we have ^e, π^[μ̅^e(X,w) | X^S] = ^e[^e[Y | X, T = w ]| X^S] = ^e[^e, π_w[Y | X ]| X^S] = ^e, π_w[Y | X^S]. Then, from (<ref>) and (<ref>), we have that ^e, π^[Z^e_w(X, T, Y) | X^S] = ^e,π_w[Y | X^S] and it thus holds for all x ∈^S that ^e,π^[O^e(X,T,Y) | X^S = x] = ^e, π^[Z^e_1(X, T, Y) - Z^e_0(X, T, Y) | X^S = x] = τ^S_e(x, 1). * Assume that π̅ is correct, i.e., π̅(t | x) = π^(t | x) for all x ∈, t ∈. We then have ^e,π^[(T = w)(Y - μ̅^e(X, w))/π̅(w | X)| X^S] = ^e[^e,π^[(T = w)(Y - μ̅^e(X, w))/π^(w | X)| X] | X^S ] = ^e[^e[(Y - μ̅^e(X, w)) | X, T = w] | X^S ] = ^e[^e,π_w[Y - μ̅^e(X, w) | X] | X^S ] = ^e,π_w[Y - μ̅^e(X, w) | X^S]. Next, we have ^e, π^[μ̅^e(X,w) | X^S] = ^e, π_w[μ̅^e(X,w) | X^S], since the expectation is only over X and does not depend on the treatment T. Then, from (<ref>) and (<ref>), we have that ^e, π^[Z^e_w(X, T, Y) | X^S] = ^e,π_w[Y | X^S] and it thus holds for all x ∈^S that ^e,π^[O^e(X,T,Y) | X^S = x] = ^e, π^[Z^e_1(X, T, Y) - Z^e_0(X, T, Y) | X^S = x] = τ^S_e(x, 1). § FEW-SHOT POLICY GENERALIZATION IN LINEAR MODELS [Few-shot policy generalization for linear CATE functions] Let S ∈^_ and N{1,…,d}∖ S and recall that = {1,…,k}. 
We assume that is a class of linear functions parameterized by Θ⊆^k × d, i.e., {τ|∃θ∈Θ s.t. ∀ t ∈: τ(·, t) ≡θ_t (·)}. By the linearity of , Assumption <ref> is satisfied, that is, there exists θ^S ∈Θ^S ⊆^k ×S and θ^N ∈Θ^N ⊆^k ×N such that ∀ x ∈, t ∈: τ_e^(x, t) = θ_t^S x^S + θ_t^N x^N. Under Assumption <ref>, we then have ∀ x ∈^S, t ∈: θ_t^S x = τ^S_(x, t) - θ_t^N ^e^[X^N | X^S = x], and hence ∀ x ∈, t ∈: τ_e^(x, t) = τ^S_(x^S, t) + θ_t^N (x^N - ^e^[X^N | X^S = x^S]). Next, let q̂ be an estimator of ^e^[X^N | X^S = ·] and, for all θ∈Θ^N, define τ_θ^S: (x ,t) ↦τ^S_(x^S, t) + θ_t(x^N - q̂(x^S)). Importantly, the estimand we are estimating by q̂ does not change with θ^N. We then consider the unconstrained optimization θ̂∈_θ∈Θ^N∑_i=1^m ℓ(Y^_i, X^_i, T^_i, π^_i, τ_θ^S). Here, by utilizing the e-invariance information τ^S_tr along with Assumption <ref>, we now optimize over the restricted function class Θ^N ⊂Θ.
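A small numerical sketch of this linear construction may be helpful. It assumes a binary treatment with t_0 = 0 (so that only θ_1 has to be estimated), uses an inverse-propensity pseudo-outcome as one concrete choice of the loss ℓ, and a linear regression for q̂; these choices, and all names below, are ours rather than prescribed by the text.

import numpy as np
from sklearn.linear_model import LinearRegression

def few_shot_linear_cate(Xte, Tte, Yte, pte, S, N, tau_S_tr):
    """Few-shot CATE estimate in the linear example (sketch).

    tau_S_tr(xS) is the pooled training estimate of tau^S(xS, 1); pte are the
    (known) test-policy propensities pi(1 | x).
    """
    XS, XN = Xte[:, S], Xte[:, N]
    # q-hat: estimate of E[X^N | X^S] in the test environment
    qhat = LinearRegression().fit(XS, XN)
    R = XN - qhat.predict(XS)                        # X^N - q-hat(X^S)
    # inverse-propensity pseudo-outcome whose conditional mean is tau(x, 1)
    O = Tte * Yte / pte - (1 - Tte) * Yte / (1 - pte)
    # theta_1 is fit on the residualized non-invariant covariates only
    theta = LinearRegression(fit_intercept=False).fit(R, O - tau_S_tr(XS))
    def tau_test(x):
        xS, xN = x[:, S], x[:, N]
        return tau_S_tr(xS) + theta.predict(xN - qhat.predict(xS))
    return tau_test

The returned function estimates τ_{e^test}(·, 1); a policy for the test environment then treats whenever this estimate is positive.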
http://arxiv.org/abs/2306.04728v1
20230607184456
Linear Holographic Hard Wall Model for Glueballs and the Pomeron
[ "Rafael A. Costa Silva", "Henrique Boschi-Filho" ]
hep-ph
[ "hep-ph", "hep-th" ]
[email protected] [email protected] [email protected] [email protected] Instituto de Física, Universidade Federal do Rio de Janeiro, 21.941-909 - Rio de Janeiro - RJ - Brazil In this work we propose a linear holographic hard wall model, modifying the original model introducing anomalous dimensions for the glueball operators so that we obtain a linear Regge trajectory. We match this Regge trajectory with that of the pomeron with precision better than 1%. The glueball masses obtained in this way are in good agreement with lattice data. Linear Holographic Hard Wall Model for Glueballs and the Pomeron Henrique Boschi-Filho July 31, 2023 ================================================================ § INTRODUCTION The rise of the proton-proton cross section with energy is related to the pomeron, a particle with no charges and quantum numbers of the vacuum, whose experimental Regge trajectory is the following <cit.> α(t=M^2) = 1.08 + 0.25 M^2 , with masses M expressed in GeV. The BFKL pomeron <cit.> is a perturbative treatment of the problem, although non-perturbative aspects are also present (see e. g. <cit.>). QCD describes strong interactions and at high energies its coupling is small so that it can be treated perturbatively. At low energies, the QCD coupling is large and non-perturbative methods are needed to tackle phenomena like confinement, phase transitions and hadronic spectra. This non-perturbative behavior usually requires heavy and involved numerical calculations known as lattice gauge theories. Alternatively, low energy QCD may be approached by other methods, as the solution of Schwinger-Dyson equations, QCD sum rules and effective models (for a review see, e. g., <cit.>). In particular, models inspired by the AdS/CFT correspondence <cit.> proved useful to describe mass spectra of different hadrons with various spins, as glueballs <cit.>, as well as for mesons and baryons, as for instance, in <cit.>. The holographic hard wall model (HW) <cit.> introduces a hard cutoff in the AdS space, described by Poincaré coordinates ds^2=R^2/z^2(dz^2+(dx⃗)^2-dt^2) such that the size of the AdS slice, 0≤ z≤ z_ max, can be related to a QCD scale, Λ_ QCD, as z_ max=1/Λ_ QCD. Then, the five dimensional AdS bulk fields are described by Bessel functions of order ν, J_ν(z). The discreteness of the hadronic spectra is assured by imposing boundary conditions at z=z_ max. This way, hadronic masses M are proportional to the zeros of Bessel functions, J_ν(z). This model was inspired by holographic descriptions of hard scattering of glueballs <cit.> and deep inelastic scattering of hadrons <cit.>, and it is very useful to obtain hadronic form factors, structure functions, parton distribution functions, etc (see, e. g., <cit.>). A well known drawback of the hard wall model is that it leads to non-linear Regge trajectories, L × m^2. It is important to mention that in the HW, the order of the Bessel function ν is related to the conformal dimension of the dual operator. For instance, for even spin L glueballs, it reads ν= 2 + L <cit.>. Actually, in Ref. <cit.>, an approximate linear Regge trajectory for light even glueballs was obtained and compared to a good approximation to that of the pomeron. A similar analysis within the HW was done for odd spin glueballs comparing their Regge trajectories with the odderon <cit.>, and also for other hadrons with different spins <cit.>. 
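To make this last point concrete, the hard wall spectrum can be evaluated directly from the first zeros of the Bessel functions. The short numerical sketch below is ours and only illustrative; it fixes the scale Λ_QCD with the lattice-motivated value M_{0^{++}} = 1.59 GeV adopted later in this work.

import numpy as np
from scipy.special import jn_zeros

# Original hard wall model with Dirichlet boundary conditions:
# M_L = chi_{2+L,1} * Lambda_QCD, with chi_{nu,1} the first zero of J_nu.
M0 = 1.59                                   # 0++ mass input, in GeV
Lambda_QCD = M0 / jn_zeros(2, 1)[0]
for L in range(0, 12, 2):
    chi = jn_zeros(2 + L, 1)[0]
    M = chi * Lambda_QCD
    print(f"L = {L:2d}   nu = {2 + L:2d}   M = {M:4.2f} GeV   M^2 = {M * M:5.2f} GeV^2")
# Since chi_{nu,1} grows asymptotically linearly with nu = 2 + L, the
# resulting L x M^2 plot is curved, i.e., the Regge trajectory is non-linear.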
In this work, we consider the inclusion of anomalous dimensions in the conformal dimension of boundary operators in the holographic hard wall model. As is well known <cit.>, anomalous dimensions appear in QCD loop corrections, as in the BFKL pomeron <cit.>, as well as in string theory and the AdS/CFT correspondence (see, e. g. <cit.>). As we show here, the introduction of anomalous dimensions in the HW allows the linearization of the corresponding Regge trajectory associated with even glueballs and compare it with the pomeron. The results obtained for the glueball masses are also compatible with other approaches, as lattice QCD and the original HW model. This work is organized as follows. In section II, we briefly review the original HW model applied to even glueballs. In section III, we introduce anomalous dimensions in the HW and obtain a linear Regge trajectory for even glueball masses and compare it with the pomeron. The glueball masses are compared with the original HW and with lattice data. In section IV, we present our conclusions. § ORIGINAL HARD WALL MODEL The original hard wall model <cit.> consists of fields defined in the bulk of an AdS slice, 0≤ z≤ z_ max of AdS_5 space with metric, Eq. (<ref>), and action ∫ dz d^4x √(-g) L, where L is the Lagrangian of the fields. The solutions for scalar fields with mass μ satisfy [ z^3 ∂_z 1/z^3∂_z + η^μν∂_μ∂_ν - (μ R )^2/ z^2 ] ϕ = 0 . According to the AdS/CFT correspondence <cit.>, these fields are dual to boundary operators O with dimension Δ such that Δ = 2 + √( 4 + (μ R )^2 ) . Considering plane wave solutions in the four dimensional coordinates x⃗ and t, one can write the solutions as ϕ ( x, z ) = C_ν , k e^-i P.x z^2 J_ν ( u_ν , k z ) , where J_ν(w) are Bessel functions of order ν, and the discrete modes u_ν , k (k =1,2,... ) are determined by the boundary conditions imposed at z=z_ max. For Dirichlet boundary conditions ϕ ( z =z_ max ) = 0, one finds that the discrete modes u_ν , k (k =1,2,... ) are given by u_ν , k = χ__ν , k/z_ max = χ__ν , k Λ_ QCD ; J_ν ( χ__ν , k ) = 0 , such that the glueball masses are identified with these discrete modes discrete modes u_ν , k and porportional to the zeros of the Bessel functions χ__ν , k. The scale Λ_ QCD is usually fixed using some experimental or lattice data. Scalar glueball operators O_4 = F^2 defined on the four dimensional boundary have dimension 4, and then are described by massless (μ=0) scalar fields in AdS_5 space, in accordance with the AdS/CFT relation (<ref>). In this case, the Bessel equation (<ref>), implies that ν=2 in the solution (<ref>). Then, the k indices label the radial excitations of the 0^++ glueball states with masses proportional to the zeros of the Bessel function J_2(w) <cit.>. The hard wall model can also describe spin excitations of the glueball states. In this case, one considers the glueballs operators O_4 + L = F D_{μ_1...D_μ_L } F with spin L which have dimension 4 + L. The de Teramond and Brodsky proposal <cit.> is that these glueball operators with spin L are described by massive scalar fields in the AdS such that ( μ R )^2 = L (L + 4 ) , which combined with the AdS/CFT prescription for the conformal dimension Δ, Eq. (<ref>), gives Δ = 4 + L . Using this relation, we are led to solutions of the equation of motion, Eq. (<ref>), proportional to J_2+L(w), since now ν=2+L. Imposing Dirichlet boundary conditions, one finds the discrete modes u_2+L,k=χ_2+L,k Λ_ QCD, where J_2+L(χ_2+L,k)=0. In Refs. 
<cit.> these prescriptions were used to calculate glueball masses for the states L= 2, 4, 6, 8, 10 and L= 3,5,7,9,11, and the corresponding (non-linear) Regge trajectories were obtained and compared to the pomeron and the odderon, respectively. § LINEAR HARD WALL MODEL In order to obtain linear Regge trajectories from the hard wall model, we need to change the relation between the boundary glueball operator conformal dimension and its angular momentum, Eq. (<ref>). Despite this is a good, and in some sense, intuitive formula, this relation leads to the problem of non-linear Regge trajectory in L× M^2 graph. This is a problem inherent to the original HW model, and we will discuss now the reasons. Assuming the duality between these modes in the AdS_5 slice and the glueball operators, the scalar glueball 0^++ is related to the massless scalar. So its mass is proportional to χ_2,1. Fixing k=1, for example, we can study how L and M are related. We start noticing that (<ref>) relates in a linear way the mass (proportional to u) and the Bessel function zero of index 2+L. However, the first zero of the Bessel function of index L grows in a way asymptotically linear with L. For this reason, the mass (related to u) grows asymptotically linear with L and for this reason, the graph L× M^2 is inherently non-linear. Our proposal here, is to include anomalous dimensions in the conformal dimension of the boundary glueball operators, modifying the relation between Δ and L. In fact, there is a physical reason for this in QCD, the so called anomalous dimension that leads to corrections in the conformal dimensions of the relevant operators. In the present work, we did not attempt to actually calculate, from first principles, the corrections predicted by QCD for the conformal dimension of such operators. Nevertheless, based on the fact that such corrections exist, we will propose a phenomenological formula that replaces relation (<ref>). Based on the above arguments about the zeros of the Bessel functions we expect that a relation such that Δ≈√(L) would, asymptotically lead to a linear L× M^2 graph. So, our proposal is that an effective conformal dimension of the glueball operators with spin L in the boundary satisfies Δ_ eff=aL^α+b, where a, b and α are constants to be determined numerically, in a way to better linearize the Regge trajectory comparable with the pomeron and obtain masses for glueballs compatible with lattice QCD. As proposed by Landshoff, we consider that the scalar glueball 0^++ does not belong to the pomeron trajectory, and for that reason, we assume that the conformal dimension of such operator has no corrections and is Δ_0^++=4. Furthermore, we will use its mass (obtained by lattice QCD) as an input, so that we can obtain the masses of the even spin excitations. So, in order to obtain the linear hard wall model, we start replacing equation (<ref>) in the AdS/CFT conformal dimension for massive scalar fields (<ref>). The solutions still follow from Eq. (<ref>) and are given by (<ref>). To obtain the constants a, b and α we use numerical analysis to fit the pomeron trajectory (<ref>). We work with 50 points, from L=2 to L=100 and use as an input the mass of 0^++. Based on results of lattice QCD we assume that M_0^++=1.59. With Dirichlet boundary condition we use equation (<ref>) in order to obtain the masses of even spin excitations. 
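The fitting procedure just described can be set up numerically as sketched below: for a given (a, b, α) the masses follow from the first zeros of J_{Δ_eff − 2}, and the Regge trajectory is read off by a linear fit of L against M^2. The root-finding and the particular fitting strategy are our own choices; the snippet is shown with the best-fit values reported next.

import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def first_zero(nu):
    """First positive zero of J_nu (real order), via a coarse scan plus brentq."""
    x = nu + 0.05
    while jv(nu, x) > 0:
        x += 0.05
    return brentq(lambda z: jv(nu, z), x - 0.05, x)

def glueball_masses(a, b, alpha, L_values, M0=1.59):
    """Linear HW masses: nu_eff = Delta_eff - 2 and M_L = chi_{nu_eff,1} * Lambda_QCD."""
    Lambda_QCD = M0 / first_zero(2.0)          # scale fixed by the 0++ input mass
    nu_eff = a * np.asarray(L_values, float) ** alpha + b - 2.0
    return np.array([first_zero(nu) for nu in nu_eff]) * Lambda_QCD

# 50 even-spin points L = 2, ..., 100; (a, b, alpha) would be adjusted (e.g.
# with scipy.optimize) until the fitted trajectory matches the pomeron,
# L = 1.08 + 0.25 M^2.
L_values = np.arange(2, 101, 2)
M = glueball_masses(6.20, -3.35, 0.5, L_values)
slope, intercept = np.polyfit(M**2, L_values, 1)
print(f"L = {intercept:.2f} + {slope:.2f} M^2")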
The best fit for the pomeron trajectory is obtained for a=6.20, b=-3.35 and α=1/2 such that the effective conformal dimension in the linear HW model is given by Δ_ eff=6.20√(L)-3.35. The masses obtained in the linear HW are presented in Table (<ref>) together with the results from the original HW as well as lattice QCD data <cit.>. In this table we also show the effective conformal dimension of the glueball operators L^++ in the linear HW model, according to Eq. (<ref>). The conformal dimension of 0^++ is assumed to be four. In Figure (<ref>), we plot the Regge trajectory from L=2 to L=100, corresponding to the best fit represented by Eq. (<ref>). As expected, this plot is, in good approximation, asymptotically linear. The Regge trajectory of such graph is L=0.25M^2 + 1.08, in perfect agreement with the experimental pomeron trajectory (<ref>). As a double check of the linearization procedure discussed above, one can also use the effective conformal dimension, Eq. (<ref>), and consider a different number of points to build the Regge trajectory. For example, if we use 5 points starting from L=2 to L=10 rather than the 50 points in Figure <ref>, the calculated masses within the linear HW model, as expected, do not change. However, although we can yet obtain a linear Regge trajectory it no longer matches exactly the pomeron trajectory. Figure (<ref>) shows the graph L× M^2 for 5 points from L=2 to L=10. The Regge trajectory of such graph is L=0.26M^2 +0.88. This trajectory is similar to that obtained in Ref. <cit.> within the original hard wall model. § CONCLUSIONS In this work we propose a linear hard wall model leading to linear Regge trajectories. The original HW model is well known for producing non-linear Regge trajectories. The main modification introduced in the linear model is a different relation between the conformal dimension and the spin of the QCD boundary operator. This is motivated by the inclusion of anomalous dimensions as is usual in QCD, strings, as well as in the AdS/CFT correspondence. We propose an effective conformal dimension, Eq. (<ref>) with three parameters that can be fixed from experimental or lattice data. In particular, we used this proposal to fit the pomeron Regge trajectory, Eq. (<ref>), such that even glueball masses are also reproduced. The obtained Regge trajectory fits exactly that of the pomeron. The masses compare well to lattice data. This procedure can be used for other hadrons as mesons and baryons. This is under development and will appear soon in the literature. 99 Landshoff:2001pp P. V. Landshoff, “Pomerons,”, published in “Elastic and Difractive Scattering" Proceedings, Ed. V. Kundrat and P. Zavada, 2002, arXiv:hep-ph/0108156. Meyer:2004jc H. B. Meyer and M. J. Teper, “Glueball Regge trajectories and the pomeron: A Lattice study,” Phys. Lett. B 605, 344-354 (2005) doi:10.1016/j.physletb.2004.11.036 [arXiv:hep-ph/0409183 [hep-ph]]. Fadin:1998py V. S. Fadin and L. N. Lipatov, “BFKL pomeron in the next-to-leading approximation,” Phys. Lett. B 429, 127-134 (1998) doi:10.1016/S0370-2693(98)00473-0 [arXiv:hep-ph/9802290 [hep-ph]]. Lebiedowicz:2023mhe P. Lebiedowicz, O. Nachtmann and A. Szczurek, “Central exclusive diffractive production of a single photon in high-energy proton-proton collisions within the tensor-Pomeron approach,” Phys. Rev. D 107, no.7, 074014 (2023) doi:10.1103/PhysRevD.107.074014 [arXiv:2302.07192 [hep-ph]]. Gross:2022hyw F. Gross, E. Klempt, S. J. Brodsky, A. J. Buras, V. D. Burkert, G. Heinrich, K. Jakobs, C. A. Meyer, K. 
Orginos and M. Strickland, et al. “50 Years of Quantum Chromodynamics,” [arXiv:2212.11107 [hep-ph]]. Aharony:1999ti O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz, “Large N field theories, string theory and gravity,” Phys. Rept. 323, 183-386 (2000) doi:10.1016/S0370-1573(99)00083-6 [arXiv:hep-th/9905111 [hep-th]]. Polchinski:2001tt J. Polchinski and M. J. Strassler, “Hard scattering and gauge / string duality,” Phys. Rev. Lett. 88, 031601 (2002) doi:10.1103/PhysRevLett.88.031601 [arXiv:hep-th/0109174 [hep-th]]. Polchinski:2002jw J. Polchinski and M. J. Strassler, “Deep inelastic scattering and gauge / string duality,” JHEP 05, 012 (2003) doi:10.1088/1126-6708/2003/05/012 [arXiv:hep-th/0209211 [hep-th]]. Gubser:2002tv S. S. Gubser, I. R. Klebanov and A. M. Polyakov, “A Semiclassical limit of the gauge / string correspondence,” Nucl. Phys. B 636, 99-114 (2002) doi:10.1016/S0550-3213(02)00373-5 [arXiv:hep-th/0204051 [hep-th]]. Csaki:1998qr C. Csaki, H. Ooguri, Y. Oz and J. Terning, “Glueball mass spectrum from supergravity,” JHEP 01, 017 (1999) doi:10.1088/1126-6708/1999/01/017 [arXiv:hep-th/9806021 [hep-th]]. deMelloKoch:1998vqw R. de Mello Koch, A. Jevicki, M. Mihailescu and J. P. Nunes, “Evaluation of glueball masses from supergravity,” Phys. Rev. D 58, 105009 (1998) doi:10.1103/PhysRevD.58.105009 [arXiv:hep-th/9806125 [hep-th]]. Hashimoto:1998if A. Hashimoto and Y. Oz, “Aspects of QCD dynamics from string theory,” Nucl. Phys. B 548, 167-179 (1999) doi:10.1016/S0550-3213(99)00120-0 [arXiv:hep-th/9809106 [hep-th]]. Csaki:1998cb C. Csaki, Y. Oz, J. Russo and J. Terning, “Large N QCD from rotating branes,” Phys. Rev. D 59, 065012 (1999) doi:10.1103/PhysRevD.59.065012 [arXiv:hep-th/9810186 [hep-th]]. Minahan:1998tm J. A. Minahan, “Glueball mass spectra and other issues for supergravity duals of QCD models,” JHEP 01, 020 (1999) doi:10.1088/1126-6708/1999/01/020 [arXiv:hep-th/9811156 [hep-th]]. Brower:2000rp R. C. Brower, S. D. Mathur and C. I. Tan, “Glueball spectrum for QCD from AdS supergravity duality,” Nucl. Phys. B 587, 249-276 (2000) doi:10.1016/S0550-3213(00)00435-1 [arXiv:hep-th/0003115 [hep-th]]. Caceres:2000qe E. Caceres and R. Hernandez, “Glueball masses for the deformed conifold theory,” Phys. Lett. B 504, 64-70 (2001) doi:10.1016/S0370-2693(01)00278-7 [arXiv:hep-th/0011204 [hep-th]]. Boschi-Filho:2002xih H. Boschi-Filho and N. R. F. Braga, “Gauge / string duality and scalar glueball mass ratios,” JHEP 05, 009 (2003) doi:10.1088/1126-6708/2003/05/009 [arXiv:hep-th/0212207 [hep-th]]. Boschi-Filho:2002wdj H. Boschi-Filho and N. R. F. Braga, “QCD / string holographic mapping and glueball mass spectrum,” Eur. Phys. J. C 32, 529-533 (2004) doi:10.1140/epjc/s2003-01526-4 [arXiv:hep-th/0209080 [hep-th]]. Apreda:2003sy R. Apreda, D. E. Crooks, N. J. Evans and M. Petrini, “Confinement, glueballs and strings from deformed AdS,” JHEP 05, 065 (2004) doi:10.1088/1126-6708/2004/05/065 [arXiv:hep-th/0308006 [hep-th]]. Amador:2004pz X. Amador and E. Caceres, “Spin two glueball mass and glueball regge trajectory from supergravity,” JHEP 11, 022 (2004) doi:10.1088/1126-6708/2004/11/022 [arXiv:hep-th/0402061 [hep-th]]. Evans:2005ip N. Evans, J. P. Shock and T. Waterson, “Towards a perfect QCD gravity dual,” Phys. Lett. B 622, 165-171 (2005) doi:10.1016/j.physletb.2005.07.014 [arXiv:hep-th/0505250 [hep-th]]. Caceres:2005yx E. Caceres and C. Nunez, “Glueballs of super Yang-Mills from wrapped branes,” JHEP 09, 027 (2005) doi:10.1088/1126-6708/2005/09/027 [arXiv:hep-th/0506051 [hep-th]]. 
Boschi-Filho:2005xct H. Boschi-Filho, N. R. F. Braga and H. L. Carrion, “Glueball Regge trajectories from gauge/string duality and the Pomeron,” Phys. Rev. D 73, 047901 (2006) doi:10.1103/PhysRevD.73.047901 [arXiv:hep-th/0507063 [hep-th]]. FolcoCapossoli:2013eao E. Folco Capossoli and H. Boschi-Filho, “Odd spin glueball masses and the Odderon Regge trajectories from the holographic hardwall model,” Phys. Rev. D 88, no.2, 026010 (2013) doi:10.1103/PhysRevD.88.026010 [arXiv:1301.4457 [hep-th]]. Sakai:2004cn T. Sakai and S. Sugimoto, “Low energy hadron physics in holographic QCD,” Prog. Theor. Phys. 113, 843-882 (2005) doi:10.1143/PTP.113.843 [arXiv:hep-th/0412141 [hep-th]]. Sakai:2005yt T. Sakai and S. Sugimoto, “More on a holographic dual of QCD,” Prog. Theor. Phys. 114, 1083-1118 (2005) doi:10.1143/PTP.114.1083 [arXiv:hep-th/0507073 [hep-th]]. deTeramond:2005su G. F. de Teramond and S. J. Brodsky, “Hadronic spectrum of a holographic dual of QCD,” Phys. Rev. Lett. 94, 201601 (2005) doi:10.1103/PhysRevLett.94.201601 [arXiv:hep-th/0501022 [hep-th]]. Ghoroku:2005vt K. Ghoroku, N. Maru, M. Tachibana and M. Yahiro, “Holographic model for hadrons in deformed AdS(5) background,” Phys. Lett. B 633, 602-606 (2006) doi:10.1016/j.physletb.2005.12.004 [arXiv:hep-ph/0510334 [hep-ph]]. Erlich:2005qh J. Erlich, E. Katz, D. T. Son and M. A. Stephanov, “QCD and a holographic model of hadrons,” Phys. Rev. Lett. 95, 261602 (2005) doi:10.1103/PhysRevLett.95.261602 [arXiv:hep-ph/0501128 [hep-ph]]. DaRold:2005mxj L. Da Rold and A. Pomarol, “Chiral symmetry breaking from five dimensional spaces,” Nucl. Phys. B 721, 79-97 (2005) doi:10.1016/j.nuclphysb.2005.05.009 [arXiv:hep-ph/0501218 [hep-ph]]. Karch:2006pv A. Karch, E. Katz, D. T. Son and M. A. Stephanov, “Linear confinement and AdS/QCD,” Phys. Rev. D 74, 015005 (2006) doi:10.1103/PhysRevD.74.015005 [arXiv:hep-ph/0602229 [hep-ph]]. Brodsky:2006uqa S. J. Brodsky and G. F. de Teramond, “Hadronic spectra and light-front wavefunctions in holographic QCD,” Phys. Rev. Lett. 96, 201601 (2006) doi:10.1103/PhysRevLett.96.201601 [arXiv:hep-ph/0602252 [hep-ph]]. Hata:2007mb H. Hata, T. Sakai, S. Sugimoto and S. Yamato, “Baryons from instantons in holographic QCD,” Prog. Theor. Phys. 117, 1157 (2007) doi:10.1143/PTP.117.1157 [arXiv:hep-th/0701280 [hep-th]]. Forkel:2007cm H. Forkel, M. Beyer and T. Frederico, “Linear square-mass trajectories of radially and orbitally excited hadrons in holographic QCD,” JHEP 07, 077 (2007) doi:10.1088/1126-6708/2007/07/077 [arXiv:0705.1857 [hep-ph]]. Gursoy:2007cb U. Gursoy and E. Kiritsis, “Exploring improved holographic theories for QCD: Part I,” JHEP 02, 032 (2008) doi:10.1088/1126-6708/2008/02/032 [arXiv:0707.1324 [hep-th]]. Gursoy:2007er U. Gursoy, E. Kiritsis and F. Nitti, “Exploring improved holographic theories for QCD: Part II,” JHEP 02, 019 (2008) doi:10.1088/1126-6708/2008/02/019 [arXiv:0707.1349 [hep-th]]. Erdmenger:2007cm J. Erdmenger, N. Evans, I. Kirsch and E. Threlfall, “Mesons in Gauge/Gravity Duals - A Review,” Eur. Phys. J. A 35, 81-133 (2008) doi:10.1140/epja/i2007-10540-1 [arXiv:0711.4467 [hep-th]]. Vega:2008af A. Vega and I. Schmidt, “Scalar hadrons in AdS(5) x S**5,” Phys. Rev. D 78, 017703 (2008) doi:10.1103/PhysRevD.78.017703 [arXiv:0806.2267 [hep-ph]]. dePaula:2008fp W. de Paula, T. Frederico, H. Forkel and M. Beyer, “Dynamical AdS/QCD with area-law confinement and linear Regge trajectories,” Phys. Rev. D 79, 075019 (2009) doi:10.1103/PhysRevD.79.075019 [arXiv:0806.3830 [hep-ph]]. Abidin:2009hr Z. Abidin and C. E. 
Carlson, “Nucleon electromagnetic and gravitational form factors from holography,” Phys. Rev. D 79, 115003 (2009) doi:10.1103/PhysRevD.79.115003 [arXiv:0903.4818 [hep-ph]]. Gutsche:2011vb T. Gutsche, V. E. Lyubovitskij, I. Schmidt and A. Vega, “Dilaton in a soft-wall holographic approach to mesons and baryons,” Phys. Rev. D 85, 076003 (2012) doi:10.1103/PhysRevD.85.076003 [arXiv:1108.0346 [hep-ph]]. Li:2012ay D. Li, M. Huang and Q. S. Yan, “A dynamical soft-wall holographic QCD model for chiral symmetry breaking and linear confinement,” Eur. Phys. J. C 73, 2615 (2013) doi:10.1140/epjc/s10052-013-2615-3 [arXiv:1206.2824 [hep-th]]. Brodsky:2014yha S. J. Brodsky, G. F. de Teramond, H. G. Dosch and J. Erlich, “Light-Front Holographic QCD and Emerging Confinement,” Phys. Rept. 584, 1-105 (2015) doi:10.1016/j.physrep.2015.05.001 [arXiv:1407.8131 [hep-ph]]. Sonnenschein:2016pim J. Sonnenschein, “Holography Inspired Stringy Hadrons,” Prog. Part. Nucl. Phys. 92, 1-49 (2017) doi:10.1016/j.ppnp.2016.06.005 [arXiv:1602.00704 [hep-th]]. FolcoCapossoli:2019imm E. Folco Capossoli, M. A. Martín Contreras, D. Li, A. Vega and H. Boschi-Filho, “Hadronic spectra from deformed AdS backgrounds,” Chin. Phys. C 44, no.6, 064104 (2020) doi:10.1088/1674-1137/44/6/064104 [arXiv:1903.06269 [hep-ph]]. Afonin:2020msa S. S. Afonin, “Towards reconciling the holographic and lattice descriptions of radially excited hadrons,” Eur. Phys. J. C 80, no.8, 723 (2020) doi:10.1140/epjc/s10052-020-8306-y [arXiv:2008.05610 [hep-ph]]. Rinaldi:2021dxh M. Rinaldi and V. Vento, “Meson and glueball spectroscopy within the graviton soft wall model,” Phys. Rev. D 104, no.3, 034016 (2021) doi:10.1103/PhysRevD.104.034016 [arXiv:2101.02616 [hep-ph]]. Grigoryan:2007vg H. R. Grigoryan and A. V. Radyushkin, “Form Factors and Wave Functions of Vector Mesons in Holographic QCD,” Phys. Lett. B 650, 421-427 (2007) doi:10.1016/j.physletb.2007.05.044 [arXiv:hep-ph/0703069 [hep-ph]]. Mamo:2021cle K. A. Mamo and I. Zahed, “Neutrino-nucleon DIS from holographic QCD: PDFs of sea and valence quarks, form factors, and structure functions of the proton,” Phys. Rev. D 104, no.6, 066010 (2021) doi:10.1103/PhysRevD.104.066010 [arXiv:2102.00608 [hep-ph]]. Teper:1997am M. J. Teper, “Physics from the lattice: Glueballs in QCD: Topology: SU(N) for all N,” [arXiv:hep-lat/9711011 [hep-lat]]. Morningstar:1997ff C. J. Morningstar and M. J. Peardon, “Efficient glueball simulations on anisotropic lattices,” Phys. Rev. D 56, 4043-4061 (1997) doi:10.1103/PhysRevD.56.4043 [arXiv:hep-lat/9704011 [hep-lat]]. Morningstar:1999rf C. J. Morningstar and M. J. Peardon, “The Glueball spectrum from an anisotropic lattice study,” Phys. Rev. D 60, 034509 (1999) doi:10.1103/PhysRevD.60.034509 [arXiv:hep-lat/9901004 [hep-lat]].
http://arxiv.org/abs/2306.03527v1
20230606092252
Rec4Ad: A Free Lunch to Mitigate Sample Selection Bias for Ads CTR Prediction in Taobao
[ "Jingyue Gao", "Shuguang Han", "Han Zhu", "Siran Yang", "Yuning Jiang", "Jian Xu", "Bo Zheng" ]
cs.IR
[ "cs.IR", "cs.LG" ]
jingyue.gjy,shuguang.sh,zhuhan.zh,siran.ysr,mengzhu.jyn,xiyu.xj,[email protected] Alibaba Group Beijing China Click-Through Rate (CTR) prediction serves as a fundamental component in online advertising. A common practice is to train a CTR model on advertisement (ad) impressions with user feedback. Since ad impressions are purposely selected by the model itself, their distribution differs from the inference distribution and thus exhibits sample selection bias (SSB) that affects model performance. Existing studies on SSB mainly employ sample re-weighting techniques which suffer from high variance and poor model calibration. Another line of work relies on costly uniform data that is inadequate to train industrial models. Thus mitigating SSB in industrial models with a uniform-data-free framework is worth exploring. Fortunately, many platforms display mixed results of organic items (i.e., recommendations) and sponsored items (i.e., ads) to users, where impressions of ads and recommendations are selected by different systems but share the same user decision rationales. Based on the above characteristics, we propose to leverage recommendations samples as a free lunch to mitigate SSB for ads CTR model (Rec4Ad). After elaborating data augmentation, Rec4Ad learns disentangled representations with alignment and decorrelation modules for enhancement. When deployed in Taobao display advertising system, Rec4Ad achieves substantial gains in key business metrics, with a lift of up to +6.6% CTR and +2.9% RPM. Rec4Ad: A Free Lunch to Mitigate Sample Selection Bias for Ads CTR Prediction in Taobao Jingyue Gao, Shuguang Han, Han Zhu, Siran Yang, Yuning Jiang, Jian Xu, Bo Zheng July 31, 2023 ======================================================================================== § INTRODUCTION For large-scale e-commerce platforms like Taobao, online advertising contributes a large portion of revenue. As advertisers typically pay for user clicks on advertisements (ads), a common practice is to rank them based on expected Cost Per Mille (eCPM) <cit.>: eCPM = 1000 × pctr × bid, where pctr is the predicted Click-Through Rate (CTR), and bid denotes the price for each click. Hence, CTR prediction serves as a fundamental component for online advertising systems. As shown in Fig. <ref>, a production CTR model scores all candidates and selects the top few based on Eq. (<ref>) for display. The displayed ads as well as user feedback (i.e., click/non-click) are then recorded, with which we continuously train new models. Due to its simplicity and robustness, such a training paradigm is widely adopted by many industrial systems <cit.>. However, since the displayed ads are not uniformly sampled from all candidates but purposely selected by the model itself, the training data distribution could be skewed from the inference distribution. This is widely known as the sample selection bias (SSB) problem  <cit.>. It violates the classical assumption of training-inference consistency and may potentially affect the model performance. Recent efforts <cit.> have been devoted to alleviating SSB in ranking systems. Methods based on Inverse Propensity Scoring <cit.> recover the underlying distribution by re-weighting the training samples. Despite theoretical soundness, they require a propensity model that accurately estimates sample occurrence probability, which is difficult to learn in dynamic and complicated environments. Moreover, sample re-weighting may yield un-calibrated predictions that are problematic for ads CTR models <cit.>. 
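For reference, the objective that such re-weighting methods optimize is simply a propensity-weighted log loss; the sketch below is a generic illustration of ours (not any specific prior implementation), and the optional max-capping corresponds to the IPS-C variant evaluated later in this paper.

import numpy as np

def ips_loss(p_click, y, propensity, cap=None):
    # inverse-propensity-scored log loss; `propensity` is the (estimated) probability
    # that each logged impression was selected for display by the production policy
    p = np.clip(p_click, 1e-6, 1 - 1e-6)
    w = 1.0 / np.clip(propensity, 1e-6, None)
    if cap is not None:                       # max-capping, as in the IPS-C baseline
        w = np.minimum(w, cap)
    ll = y * np.log(p) + (1 - y) * np.log(1 - p)
    return -np.mean(w * ll)

The weights amplify rarely displayed samples, which recovers the inference distribution in expectation but inflates variance and distorts the label scale, the two issues noted above.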
Another line of work collects uniform data via random policy, which helps train an unbiased imputation model for non-displayed items <cit.> or guide the CTR model training via knowledge distillation <cit.>. However, even small production traffic (e.g., 1%) of the uniform policy will severely cause degraded user experience and revenue loss, and the obtained uniform data of this magnitude is insufficient for training industrial models with billions of parameters. With these issues, we investigate how to mitigate SSB for industrial CTR models under a uniform-data-free framework. Inspired by causal learning <cit.>, CTR prediction can be framed as the problem of treatment effect estimation. As in Fig. <ref>(B), sample features compose unit 𝐗, whether to display it acts as a binary treatment 𝐓 and click is the outcome 𝐘 to estimate. The root cause of SSB is attributed to existence of confounders Δ (e.g., item popularity) in 𝐗 that affect both 𝐓 and 𝐘. Recent studies <cit.> show that confounders mislead models to capture spurious correlations between the unit features and the outcome, which are non-causal and hurt generalization over the inference distribution. Hence, it is promising to mitigate SSB by disentangling confounders Δ from real user-item interest Γ in sample features, which is non-trivial in absence of randomized controlled trials (i.e., uniform data) <cit.>. As shown in Fig. <ref>, many platforms <cit.> display mixed results of sponsored items and organic items that are independently selected by advertising and recommendation systems. For clarity, we refer to sponsored items as ads. We refer to organic items as recommendations. This scenario has two characteristics: * Shared decision rationales. With a unified interface design, users are unaware of whether items are sponsored or organic, making their click decision determined by real user-item interest Γ rather than sources of displayed items. * Different selection mechanisms. Advertising and recommendation systems serve different business targets (e.g., revenue/clicks/dwell time) <cit.> and have different selection mechanisms as verified in Sec. <ref>. Thus their SSB-related confounders are rarely overlapped, making system-specific confounders Δ_𝐚𝐝/Δ_𝐫𝐞𝐜 capture a substantial portion of Δ. The above characteristics make it possible to disentangle Δ_𝐚𝐝/Δ_𝐫𝐞𝐜 and Γ by jointly considering samples from two sources. Compared with the uniform data, recommendation samples are of a comparable or even larger magnitude than ads samples and persist without revenue loss, making it a free lunch worthy of exploitation. Though few if any confounders common in two systems could still remain with Γ, disentangling system-specific Δ_𝐚𝐝/Δ_𝐫𝐞𝐜 from Γ is already a meaningful step towards mitigating SSB, especially when uniform data is unavailable in industrial advertising systems. To this end, we propose to leverage Recommendation samples to mitigate SSB For Ads CTR prediction (Rec4Ad). Under this framework, recommendation samples are retrieved and mixed with ad samples for training. With raw feature embeddings, we elaborately design the representation disentanglement mechanism to dissect system-specific confounders and system-invariant user-item interest across two systems. Specifically, this mechanism consists of an alignment module and a decorrelation module with various regularizations. Finally we make prediction with disentangled and enhanced representation. 
Rec4Ad has been deployed to serve the main traffic of Taobao display advertising system since July of 2022. Our contributions are summarized as follows: * We analyze the existence of SSB in CTR prediction and point out the potential to leverage recommendation samples to mitigate such bias in absence of uniform data. * We propose a novel framework named Rec4Ad, which jointly considers the recommendation and ads samples in learning disentangled representations that dissect system-specific confounders and system-invariant user-item interest. * We conduct offline and online experiments to validate the effectiveness of Rec4Ad that achieves substantial gains in business metrics (up to +6.6% CTR and +2.9% RPM). § PRELIMINARY §.§ Problem Formulation Input: The input includes a user set 𝒰, an ad set 𝒜, an item set ℐ, user-ad impressions 𝒟^𝐚𝐝, and user-item simpressions 𝒟^𝐫𝐞𝐜 * Each user u ∈𝒰 is represented by a set of features {u_1, ..., u_m} including user profile features (e.g., age and gender) and historical behaviors (e.g., click and purchase). * Each ad a ∈𝒜 is a promotion campaign for a sponsored item i ∈ℐ. Besides item-level features like category and brand, a also has campaign-level features including ID and historical statistics, denoted by {a_1,...,a_n}. * Each impression in D^ad is a tuple (u,a,c,y) describing when the advertising system displayed a to u under context c = {c_1,..,c_k} such as time and device, user clicked it (y=1) or not (y=0). As for D^rec, the tuple changes to (u,i,c,y) logged by the recommendation engine. Output: We aim to learn a model f that predicts the click probability f(u,a,c) if displaying a ∈𝒜 to user u ∈𝒰 under context c. §.§ Analysis of Sample Selection Bias §.§.§ SSB in Ads Impressions SSB happens when each candidate does not share equal opportunities for impression. To examine its existence, we define the metric of impression ratio (IR) to measure the opportunity of each ad in our system: IR(a) = #sessions where a was displayed/#sessions where a was in the candidate set, where a session refers to a user request. We first calculate IR for each ad in D^ad, sort them by IR in descending order, and then divide them equally into 12 groups. Fig. <ref> (Left) shows the average IR for each group (the red line) compared with the ideal uniform data (the blue line). We find that impressions on D^ad are distributed among different ads in an extremely imbalanced way, where the IR of the first group is nearly 200 times that of the last group. §.§.§ Influence of SSB Since base CTR model is trained on D^ad, we analyze its ranking and calibration performance under the imbalanced ad impressions. For ranking performance, we use the metric of AUC, and the calibration performance is measured by the Expected Calibration Error (ECE) <cit.>. Details of two metrics are introduced in Sec. <ref>. Fig. <ref> (Right) shows online model performance on different groups of ads with descending IR. It is observed that model tends to perform worse on ads with lower IR than on those with higher IR. It is consistent with our assumption that model does not generalize well on ads with few impressions and validates the necessity to handle SSB for improvement. §.§.§ Mitigating SSB with Recommendation Samples. As defined before, each ad in 𝒜 corresponds to an organic item in ℐ. Thus we investigate how impressions in D^rec are distributed among organic items with ad counterparts. From Fig. 
<ref> (Left), we find that though some groups of ads have few impressions in D^ad, their corresponding items contribute an important portion of impressions in D^rec. It is attributed to different selection mechanism behind D^ad and D^rec. We conduct a simulated study on D^ad for verification, which changes the ranking function from pctr*bid to pctr (commonly adopted in recommendation systems) and re-displays top-10 ads. Fig. <ref> (Right) illustrates IR of each original rank under two mechanisms. It is clear that impression distribution is changed, where ads with low rank in original list have opportunities to be displayed under another mechanism. Above empirical analysis show that it is promising to leverage recommendation samples in D^rec to mitigate SSB in D^ad. § METHODOLOGY Fig. <ref> shows two stages of Rec4Ad in deployment: data augmentation and disentangled representation learning, which constructs and leverages recommendation (rec) samples, respectively. §.§ Data Augmentation §.§.§ Retrieving Recommendation Samples To ensure user experience, the percentage of ad impressions in all impressions is usually limited to a low threshold. Typically, we have |D^ad| ≪ |D^rec|, making it intractable to consume entire D^rec owing to multiplied training resources. Moreover, not all rec samples are useful for enhancing ads model due to difference between 𝒜 and ℐ. Thus we retrieve rec samples that are closely related to advertising system. Let I(a) ∈ℐ denote the item that a ∈𝒜 is advertised for. We define an item set ℐ^ad containing all items with related ads: ℐ^ad = {i∈𝐈|∃ a ∈𝒜,I(a)=i}. We discard rec samples whose items fail to occur in ℐ^ad, since they fail to provide complementary impressions that relate to any ad of interest. In this way, we retrieve a subset of rec samples: D^r̂êĉ = {(u,i,c,y) ∈ D^rec|i∈ℐ^ad}. §.§.§ Pseudo Sample Mapping The reasons to map retrieved rec samples to pseudo ad samples are two-fold. First, it allows rec samples and ad samples to have uniform input format, which facilitates efficient feature joining and batch processing. Second, pseudo samples scattering in the 𝒰×𝒜 space help learn a CTR prediction model for ads, compared with the 𝒰×ℐ space. In Fig. <ref>, we maintain an item-ads index where key is item i and values are their related ads A(i) = {a∈𝒜|I(a) = i}. To select an ad from A(i), we do not take their impressions into consideration, which avoids introducing selection bias in the advertising system. Instead, we adopt a recent-K-random strategy. We randomly select an ad a' from most recent K ads related to i, where K is a fixed hyper-parameter. After mapping, we obtain the set of pseudo samples: D^r̃ẽc̃ = {(u,a',c,y)|(u,i,c,y) ∈ D^r̂êĉ, a' ∈ A(i)}, §.§ Disentangled Representation Learning §.§.§ Original Representation we embed raw features of sample (u,a,c,y) into low-dimensional vectors [e(u),e(a),e(c)]. Operations like attention mechanism <cit.> are further employed to aggregate embeddings of user behavior sequences. We concatenate these results together to obtain intermediate representation 𝐞. Batch Normalization (BN) <cit.> is commonly used in training of industrial CTR models <cit.> to stabilize convergence. It calculate statistics over training data for normalization during serving. However, when incorporating D^r̃ẽc̃ into training, BN statistics are calculated based on ad and rec samples but only used to normalize ad samples during online serving. The distribution discrepancy between two kinds of samples weakens the effectiveness of BN. 
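Stepping back for a moment, the data-augmentation stage described above (retrieval of rec samples whose item has a related ad, followed by recent-K-random pseudo-sample mapping) is small enough to sketch in full. The class below is an illustration of ours rather than the production pipeline; how the index tracks "recent" ads is an assumption, and K=3 follows the implementation details reported later.

import random
from collections import defaultdict, deque

class PseudoSampleMapper:
    def __init__(self, k=3):
        self.k = k
        # item-ads index A(i): for each item, keep only its K most recently seen ads
        self.item_to_ads = defaultdict(lambda: deque(maxlen=k))

    def register_ad(self, ad_id, item_id):
        # called whenever an ad campaign advertising item_id is observed (our assumption)
        self.item_to_ads[item_id].append(ad_id)

    def map_rec_sample(self, user, item, context, label):
        ads = self.item_to_ads.get(item)
        if not ads:
            return None                          # item not in I^ad: discard the rec sample
        a_prime = random.choice(list(ads))       # recent-K-random selection
        return (user, a_prime, context, label)   # pseudo ad sample (u, a', c, y)

With the augmented training set in place, we return to the normalization issue just described.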
To deal with this problem, we design source-aware BN (SABN), which adaptively normalize samples according to their sources. Let s indicate which kind the sample is, SABN works as follows: SABN(e,s) = γ_se-μ_s/√(σ_s^2+ϵ) + β_s, s ∈{ad, rec}, where γ_s, β_s, μ_s, σ_s^2 are source-specific parameters for normalization. Then we feed the normalized representation 𝐞' into MLP (Multi-Layer Perception) layers for a compact representation 𝐱 that captures feature interactions among user, ad, and context. We add superscripts on representations (e.g., 𝐱^ad/𝐱^rec) to denote its source. §.§.§ Alignment Since users are usually unaware of the difference between ad and rec impressions, their click decisions can be assumed independent of underlying systems, which are commonly determined by their interest. To identify user-item interest Γ behind click decision, we propose to extract invariant representations shared between D^ad and D^r̃ẽc̃. In other words, samples in D^ad and D^r̃ẽc̃ should be indistinguishably distributed in the invariant representation space. To achieve this goal, we first apply projection layers over original representations of ad and rec samples: 𝐱^s_inv = MLP^s_inv(𝐱^s) ∈ℛ^d, s ∈{ad, rec} A direct method to align {𝐱^ad_inv} and {𝐱^rec_inv} is minimizing their Wasserstein or MMD distribution distance <cit.>. However, these metrics are computationally inefficient and hard to estimate accurately over mini-batches. Instead, we train a sample discriminator H to implicitly align them in an adversary way. Particularly, H is a binary classifier that predicts whether the sample is from D^ad or D^r̃ẽc̃ based on 𝐱_inv. Optimized with cross entropy loss, H aims to distinguish two kinds of samples as accurate as possible: ŝ = Sigmoid(MLP_H(𝐱_inv^s)), s∈{ad, rec}, L_A = -∑_D^adlog(ŝ)-∑_D^r̃ẽc̃log(1-ŝ). While H tries to minimize L_A during training, neural layers generating invariant representations aim to make {𝐱^ad_inv} and {𝐱^rec_inv} indistinguishable as much as possible, i.e., maximize L_A. To train these two parts simultaneously, we insert a gradient reverse layer (GRL) <cit.> between 𝐱_inv and the discriminator. In forward propagation, GRL acts as an identity transformation. In backward propagation, it reverses gradients from subsequent layers: Forward: GRL(𝐱_inv) = 𝐱_inv, Backward: ∂ L_A/∂𝐱_inv = -α∂ L_A/∂ GRL(𝐱_inv), where α controls the scale of reversion. In this way, we tightly align {𝐱^ad_inv} and {𝐱^rec_inv} in the invariant representation space. §.§.§ Decorrelation To separate system-specific confounders from original representation, we apply another set of projection layers: 𝐱^s_con = MLP^ad_con(𝐱^s) ∈ℛ^d, s ∈{ad, rec}. If without explicit constraints, 𝐱_con could still contain information shared across systems and prevent us from handling confounders specific to the ad system. To this end, we propose to add regularizations to further disentangle 𝐱_con and 𝐱_inv. Borrowing the idea that disentangled representations avoid encoding variations of each other <cit.>, we penalize the cross-correlation between two sets of representations. Specifically, let 𝐩_i denote the in-batch vector of i-th dimension of 𝐱_inv and 𝐪_j denote that of j-th dimension of 𝐱_con, their Pearson correlation can be calculated as: Cov(𝐩_i, 𝐪_j)) = [𝐩_i-𝐩̅_𝐢]^⊤[𝐪_j-𝐪̅_𝐣], Υ(𝐩_i, 𝐪_j) = Cov(𝐩_i, 𝐪_j)/√(Cov(𝐩_i, 𝐩_i)), where 𝐩̅_𝐢 and 𝐪̅_𝐣 denote in-batch mean of each dimension. 
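Before turning to the decorrelation objective, the two pieces introduced so far, source-aware batch normalization and the gradient reverse layer placed in front of the sample discriminator H, can be sketched as follows. The paper does not specify a deep-learning framework, so the PyTorch formulation, the hidden width of 64, and the module names are our assumptions; α = 0.1 follows the implementation details reported later.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # identity in the forward pass, gradient scaled by -alpha in the backward pass (Eq. for GRL)
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

class SourceAwareBN(nn.Module):
    # one set of (gamma, beta, mu, sigma^2) per sample source, as in the SABN equation
    def __init__(self, dim):
        super().__init__()
        self.bn = nn.ModuleDict({"ad": nn.BatchNorm1d(dim), "rec": nn.BatchNorm1d(dim)})
    def forward(self, e, source):                # source is "ad" or "rec"
        return self.bn[source](e)

class SampleDiscriminator(nn.Module):
    # adversarial head H: predicts whether x_inv comes from an ad or a rec sample
    def __init__(self, dim, alpha=0.1):
        super().__init__()
        self.alpha = alpha
        self.mlp = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x_inv):
        return self.mlp(GradReverse.apply(x_inv, self.alpha)).squeeze(-1)   # logits for L_A

With these pieces in place, we return to the decorrelation objective built on the correlation measure just defined.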
Thus the objective of the decorrelation module are based on correlations of every pair of dimension cross 𝐱_inv and 𝐱_con: L_D = ∑_i=1^d∑_j=1^d [Υ(𝐩^ad_i, 𝐪^ad_j)^2 + Υ(𝐩^rec_i, 𝐪^rec_j)^2]. By optimizing L_D, 𝐱_con are encouraged to capture residual information independent from 𝐱_inv, i.e., system-specific confounders Δ_𝐚𝐝/Δ_𝐫𝐞𝐜 that are discarded by the alignment module. §.§ Prediction We reconstruct final representation based on disentangled representations to predict CTR. Previous studies show that non-causal associations also potentially contribute to prediction accuracy <cit.>, motivating us to consider 𝐱_con in reconstruction instead of directly ignoring it. For simplicity, we use the concatenation operator: 𝐱^s_new = 𝐱^s_inv⊕𝐱^s_con, s ∈{ad, rec}. With 𝐱_new^s, we make predictions for ad samples and pseudo samples with source-aware layers, where cross entropy loss L_C is optimized: ŷ^s = Sigmoid(MLP^s_pred(𝐱^s_new)), s ∈{ad, rec} L_C = ∑_D^ad∪ D^r̃ẽc̃[-ylog(ŷ^s)-(1-y)log(1-ŷ^s)]. Thus the objective function of Rec4Ad consists of the CTR prediction loss, the alignment loss and the decorrelation loss: L = L_C + λ_1L_A + λ_2L_D. § EXPERIMENTS §.§ Experimental Setup Taobao Production Dataset. We construct the dataset based on impression logs in two weeks of 2022/06 from Taobao advertising system and recommendation system. We use data of the first week for training, which contains ad and rec impressions collected under regular policy. The data of the next week are ad impressions collected under random policy of a small traffic following <cit.>, which is used to evaluate model performance against SSB. The training dataset contains 1.9 billion ad samples and 0.6 billion rec samples after retrieval, covering 0.2 billion users. The test dataset contains 18.9 million ad samples and 10.3 million users. Baselines. Rec4Ad is compared with following baselines. * Base. We adopt DIN <cit.> as the vanilla model which does not account for SSB. * DAG. The Data-Augmentation (DAG) method directly merges rec samples and ad samples to train the base model. * IPS <cit.>. It eliminates SSB by re-weighting samples with inverse propensity of ad impression. * IPS-C <cit.> It adds max-capping to IPS weight so that its variance can be reduced. * IV <cit.>. It employs user behaviors outside current system as instrumental variables for model debiasing. Metrics. For ranking ability, we use the standard AUC (Area Under the ROC Curve) metric for evaluation <cit.>. A higher AUC indicates better ranking performance. In practice, absolute improvement of AUC by 0.001 on the production dataset is considered significant, which empirically leads to an online lift of 1% CTR. For calibration, we evaluate models with the ECE <cit.> metric. We first equally partition the pctr range [0,1] into K buckets B_1,...,B_K. ECE can be calculated as follows: ECE = 1/|D|∑_k=1^K|∑_i=1^|D|(y_i-ŷ_i) 1(ŷ_i∈B_k)|, where 1(ŷ_i∈B_k) equals 1 only if ŷ_i∈B_k else 0. K is set to 100. A lower ECE here indicates better calibration performance. Implementation The feature embedding size is 16. We use Adam optimizer <cit.> with initial learning rate 0.001. The batch size is fixed to 6000. In data augmentation, we consider the most recent 3 ads for pseudo sample mapping. The dimensions d of 𝐱_inv and 𝐱_con is 128. The ratio α of gradient reverse layer in Eq. (<ref>) is 0.1. λ_1 and λ_2 for the alignment and the decorrelation loss in Eq. (<ref>) is 0.005 and 0.5. 
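Putting the pieces of the objective together with the hyper-parameters just listed, the decorrelation penalty and the combined training loss can be sketched as below (continuing the PyTorch-style sketch above, which is our illustration rather than the production code). We normalize by both in-batch standard deviations, i.e., the standard Pearson correlation, since the denominator in the displayed formula appears to have lost its second factor; λ_1 = 0.005 and λ_2 = 0.5 follow the implementation details above.

import torch
import torch.nn.functional as F

def decorrelation_loss(x_inv, x_con, eps=1e-8):
    # in-batch Pearson correlation between every pair of dimensions of x_inv and x_con
    p = x_inv - x_inv.mean(dim=0, keepdim=True)          # (B, d)
    q = x_con - x_con.mean(dim=0, keepdim=True)          # (B, d)
    cov = p.t() @ q                                      # (d, d) cross-covariances
    p_std = torch.sqrt((p * p).sum(dim=0))               # sqrt(Cov(p_i, p_i))
    q_std = torch.sqrt((q * q).sum(dim=0))
    corr = cov / (p_std.unsqueeze(1) * q_std.unsqueeze(0) + eps)
    return (corr ** 2).sum()                             # summed over ad and rec batches in practice

def rec4ad_loss(y_logits, y, d_logits, is_ad, x_inv, x_con, lam1=0.005, lam2=0.5):
    # y, is_ad: float tensors of 0/1 labels; d_logits come from the GRL-fronted discriminator
    l_c = F.binary_cross_entropy_with_logits(y_logits, y)      # CTR prediction loss L_C
    l_a = F.binary_cross_entropy_with_logits(d_logits, is_ad)  # alignment loss L_A
    l_d = decorrelation_loss(x_inv, x_con)                     # decorrelation loss L_D
    return l_c + lam1 * l_a + lam2 * l_d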
For tests of significance, each experiment is repeated 5 times by random initialization and we report the average as results. §.§ Experimental Results §.§.§ Overall Performance From Table <ref>, we find that Rec4Ad significantly performs better than all baselines. Specifically, it outperforms Base in terms of AUC by 0.0027 and outperforms the state-of-the-art IV by 0.0015. This demonstrates the effectiveness of our proposed framework in handling SSB. By dissecting confounders and user-item interest for enhanced representations, it works well over the inference space. Moreover, Rec4Ad successfully maintains even slightly better model calibration than Base, which also verifies its suitability for ads CTR prediction. We also observe that DAG performs worse than baseline both in AUC and ECE. The reason is ad samples and rec samples present different feature distributions and label distributions. Naive data augmentation actually amplifies the distribution discrepancy between training and inference. The original IPS yields worst AUC, while IPS-C with max-capping achieves higher AUC than Base. We attribute this phenomenon to high variance in estimation of propensity score. We also notice that ECE of IPS and IPS-C are all larger than 10^-3, which verifies that sample re-weighting could change label distribution and result in calibration issues of ads CTR prediction. §.§.§ Performance on Different Ad Groups In Section <ref>, we show that SSB leads model to perform badly on ads with low impression ratios. To validate whether Rec4Ad mitigates such influence, we compare Rec4Ad and three competitive baselines on specific ad groups. We sort ads in descending IR as defined in Eq. (<ref>), where the top 25% are selected as G_top representing ads with enough impressions and the bottom 25% are selected as G_bottom containing ads that are less represented in the training data. Table <ref> shows that Rec4Ad achieves best ranking and calibration performance on both G_top and G_bottom. The improvements over Base are greater on G_bottom with AUC increased by nearly 0.003 and ECE reduced by 0.001. Thus we conclude that Rec4Ad succeeds in mitigating SSB and boosts model performance on those long-tail ads. We also observe an interesting seesaw phenomenon about IPS-C, which also greatly improves metrics on G_bottom but yields worse performance on G_top compared with Base. It is because IPS-C explicitly imposes higher weights for samples with low-IR ads and lower weights for those with high-IR ads. By contrast, Rec4Ad exhibits its superiority that improvements on G_bottom are achieved without the cost of degraded performance on G_top. §.§.§ Ablation Study We analyze the effect of key components in Rec4Ad by comparing it with variants which remove SABN, the alignment module, and the decorrelation module, respectively. Fig. <ref> shows that after removing SABN, model calibration experiences an obvious degeneration. The reason is that representations of Rec and Ad samples are with different distributions, making it difficult to normalize them with shared BN parameters and leading to mis-scaled network activations as well as badly-calibrated predictions. Furthermore, we find that AUC even drops under 0.679 after removing the alignment module, validating that the alignment regularization is critical for co-training with ad and rec samples. It allows Rec4Ad to extract shared user-item interest behind user clicks and eliminate system-specific confounders from this part. 
The decorrelation module is also shown effective since the variant without this component performs worse than the default version. It is because splitting non-causal correlations alone in enhanced representations also potentially contributes to accurate predictions <cit.>. §.§.§ Study on Disentangled Representations As 𝐱_inv and 𝐱_con are expected to capture system-invariant and system-specific factors respectively, we aim to investigate their distributions over ad and rec samples. We randomly sample a hybrid batch and visualize learned representations using t-SNE <cit.>. As shown in Fig. <ref>, there is no significant difference between 𝐱_inv for ad and rec samples, suggesting the captured invariance. When it comes to 𝐱_con, we observe that ad samples and rec samples are mostly separated in different areas, indicating that this representation extracts system-specific factors from the training data, which we believe stems from the difference in their selection mechanisms. In the alignment module, we employs an adversary sample discriminator H to distinguish 𝐱_inv^ad and 𝐱_inv^rec. Thus the classification performance can be used as an effective proxy to quantitatively evaluate the goodness of 𝐱_inv. Fig. <ref> illustrates the adversary AUC during training. We observe that it increases at the early stage due to optimization of H. Then AUC gradually decreases as the training goes on, indicating Rec4Ad tries to generate representations that confuse H. Near the end of training, AUC converges to 0.5, which means ad and rec samples are indistinguishable on 𝐱_inv. §.§ Online Study We conduct online A/B Test between Rec4Ad and production baseline from July 1 to July 7 of 2022, each with 5% randomly-assigned traffic. Two key business metrics are used in evaluation: Click-Through Rate (CTR) and Revenue Per Mille (RPM), which corresponds to user experience and platform revenue, respectively. As shown in Table <ref>, Rec4Ad achieves substantial gains in two largest scenes of Taobao display advertising business, Homepage and Post-Purchase, demonstrating considerable business value of Rec4Ad. For long-tail ads with few impressions. Rec4Ad achieves up to 12.6% and 3.6% lift of CTR in two scenes, which are larger than the overall lift. Above results verify that Rec4Ad effectively mitigates SSB and brings solid online improvements. It has been successfully deployed in production environment to serve the main traffic of Taobao display advertising system since July of 2022. § CONCLUSION In this paper, we propose a novel framework which leverages Recommendation samples to help mitigate sample selection bias For Ads CTR prediction (Rec4Ad). Recommendation samples are first retrieved and mapped to pseudo samples. Ad samples and pseudo samples are jointly considered in learning disentangled representations that dissect system-specific confounders brought by selection mechanisms and system-invariant user-item interest. Alignment and decorrelation modules are included in above architecture. When deployed in Taobao display advertising system, Rec4Ad achieves substantial gains in key business metrics, with a lift of up to +6.6% CTR and +2.9% RPM. ACM-Reference-Format
http://arxiv.org/abs/2306.11828v1
20230620183725
Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs
[ "Sayan Bhattacharya", "Peter Kiss", "Aaron Sidford", "David Wajc" ]
cs.DS
[ "cs.DS" ]
plain thmTheorem[section] cor[thm]Corollary prop[thm]Proposition conj[thm]Conjecture fact[thm]Fact lem[thm]Lemma lemma[thm]Lemma cla[thm]Claim claim[thm]Claim Def[thm]Definition obs[thm]Observation assum[thm]Assumption rem[thm]Remark inv[thm]Invariant thmTheoremtheorems claClaimclaims lemLemmalemmas factFactfacts #1#1 wrapper[1] [hidealllines=true, backgroundcolor=gray!20, leftmargin=0cm,innerleftmargin=0.3cm,innerrightmargin=0.3cm,innertopmargin=0.375cm,innerbottommargin=0.375cm,roundcorner=10pt] #1 #1←A[⇒ #1 –Aaron] #1←D[⇒ #1 –David] #1←P[⇒ #1 –Peter] #1←S[⇒ #1 –Sayan] 1]Sayan BhattacharyaSupported by Engineering and Physical Sciences Research Council, UK (EPSRC) Grant EP/S03353X/1. 1]Peter KissWork done in part while the author was visiting Max-Planck-Institut für Informatik. 2]Aaron SidfordSupported in part by a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research award, and a Sloan Research Fellowship. 3]David WajcWork done in part while the author was at Stanford University. [1]University of Warwick [2]Stanford University [3]Google Research Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs [ ========================================================================== gobble We study dynamic (1-ϵ)-approximate rounding of fractional matchings—a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time O(ϵ^-1log^2 (^-1· n)), matching an (unconditional) recourse lower bound of Ω(ϵ^-1) up to logarithmic factors. Moreover, this algorithm's update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic partial rounding algorithms to increase this minimum weight, we obtain several algorithms that improve this dependence on n. For example, we give a high-probability randomized algorithm with Õ(ϵ^-1· (loglog n)^2)-update time against adaptive adversaries.[We use Soft-Oh notation, Õ, to suppress polylogarithmic factors in the argument, i.e., Õ(f)=O(f·poly(log f)).] Using our rounding algorithms, we also round known (1-)-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings. arabic § INTRODUCTION Dynamic matching is one of the most central and well-studied dynamic algorithm problems. Here, a graph undergoes edge insertions and deletions, and we wish to quickly update a large matching (vertex-disjoint set of edges) following each such change to the graph. A cornerstone of numerous dynamic matching results is the dynamic relax-and-round approach: the combination of dynamic fractional matching algorithms <cit.> with dynamic rounding algorithms <cit.>. This dynamic fractional matching problem asks to maintain a vector ∈ℝ^E_≥ 0 such that x(v)∑_e∋ v x_e satisfies the fractional degree constraint x(v) ≤ 1 for all vertices v∈ V and x:= ∑_e x_e is large compared to the size of the largest (fractional) matching in the dynamic graph G=(V,E). 
Sayan Bhattacharya (University of Warwick; supported by Engineering and Physical Sciences Research Council, UK (EPSRC) Grant EP/S03353X/1), Peter Kiss (University of Warwick; work done in part while the author was visiting Max-Planck-Institut für Informatik), Aaron Sidford (Stanford University; supported in part by a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research award, and a Sloan Research Fellowship), David Wajc (Google Research; work done in part while the author was at Stanford University). Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs. We study dynamic (1-ϵ)-approximate rounding of fractional matchings—a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time O(ϵ^-1 log^2(ϵ^-1 · n)), matching an (unconditional) recourse lower bound of Ω(ϵ^-1) up to logarithmic factors. Moreover, this algorithm's update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic partial rounding algorithms to increase this minimum weight, we obtain several algorithms that improve this dependence on n. For example, we give a high-probability randomized algorithm with Õ(ϵ^-1· (loglog n)^2)-update time against adaptive adversaries.[We use Soft-Oh notation, Õ, to suppress polylogarithmic factors in the argument, i.e., Õ(f)=O(f·poly(log f)).] Using our rounding algorithms, we also round known (1-ϵ)-approximate decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings. § INTRODUCTION Dynamic matching is one of the most central and well-studied dynamic algorithm problems. Here, a graph undergoes edge insertions and deletions, and we wish to quickly update a large matching (vertex-disjoint set of edges) following each such change to the graph. A cornerstone of numerous dynamic matching results is the dynamic relax-and-round approach: the combination of dynamic fractional matching algorithms <cit.> with dynamic rounding algorithms <cit.>. This dynamic fractional matching problem asks to maintain a vector x ∈ ℝ^E_≥ 0 such that x(v) := ∑_e∋ v x_e satisfies the fractional degree constraint x(v) ≤ 1 for all vertices v∈ V and x:= ∑_e x_e is large compared to the size of the largest (fractional) matching in the dynamic graph G=(V,E). 
The goal typically is to solve this problem while minimizing the amortized or worst-case time per edge update in G.[An algorithm has amortized update time f(n) if every sequence of t updates takes at most t· f(n) update time. If each operation takes at most f(n) time, it has worst-case update time f(n). Throughout, we focus on amortized update times, and so we omit this distinction.] For the rounding problem (the focus of this work), an abstract interface can be defined as follows. A dynamic rounding algorithm (for fractional matchings) is a data structure supporting the following operations: * (G = (V,E), ∈^E_≥ 0, ϵ∈ (0,1)): initializes the data structure for undirected graph G with vertices V and edges E, current fractional matching in G, and target error ϵ. * (e ∈ E, ν∈ [0,1]): sets x_e ν under the promise that the resulting is a fractional matching in G.[Invoking (e,0) essentially deletes e and subsequently invoking (e,ν) for ν > 0 essentially adds e back. So, G might as well be the complete graph on V. However, we find the notation G = (V, E) convenient.] The algorithm must maintain a matching M in the support of , x:={e∈ E| x_e>0}, such that M is a (1 - ϵ)-approximation with respect to x:=∑_e x_e, i.e., M ⊆x , M is a matching , and |M| ≥ (1 - ϵ)·x . The combination of fast fractional algorithms with fast dynamic rounding algorithms plays a key role in state-of-the-art time versus approximation trade-offs for the dynamic matching problem against an adaptive adversary <cit.>, including the recent breakthroughs of <cit.>. Here, a randomized algorithm works against an adaptive adversary (or is adaptive, for short) if its guarantees hold even when future updates depend on the algorithm's previous output and its internal state. Slightly weaker are algorithms, that allow updates to depend only on the algorithms' output. Note that deterministic algorithms are automatically adaptive. A major motivation to study dynamic algorithms is their black-box use as subroutines within other algorithms. (See discussions in, e.g., <cit.>.) Despite significant effort and success in designing and applying dynamic rounding algorithms, the update time of current (1-)-approximate dynamic rounding approaches are slower than an unconditional recourse (changes per update) lower bound of Ω(ϵ^-1) (<Ref>) by large (^-1, log n) factors. Consequently, rounding is a computational bottleneck for the running time of many state-of-the-art dynamic matching algorithms <cit.> and decremental (only allowing deletions) matching algorithms <cit.>. The question thus arises, can one design () optimal dynamic rounding algorithms for fractional matching? We answer this question in the affirmative in a strong sense. §.§ Our Contributions Our main results are deterministic and randomized dynamic fractional matching rounding algorithms for bipartite graphs that match the aforementioned simple recourse lower bound of Ω(^-1) up to logarithmic factors in and (sub-)logarithmic factors in n:=|V|. These results are summarized by the following theorem. The dynamic bipartite matching rounding problem admits: * A deterministic algorithm with Õ(^-1log n) time. * An algorithm with Õ(^-1· (loglog n)^2) time that is correct w.h.p. * An algorithm with Õ(^-1) expected time. The (G,,) time of each of these algorithms is O(· |x|) times its time. In contrast, prior approaches have update time at least Ω(^-4) (see <Ref>). Moreover, all previous adaptive algorithms with high probability (w.h.p.) 
or deterministic guarantees all have at least (poly)logarithmic dependence on n, as opposed to our (sub-)logarithmic dependence on n. General Graphs. In general graphs, one cannot round all fractional matchings (as defined) to integrality while only incurring a (1-) factor loss in value.[Consider the triangle graph with fractional values x_e=1/2 on all three edges; this fractional matching has value 3/2, while any integral matching in a triangle has size at most one. While adding additional constraints <cit.> avoids this issue, no dynamic fractional algorithms for the matching polytope in general graphs are currently known.] Nonetheless, it is know how to round “structured” (1/2-)-approximate dynamic fractional matchings <cit.> (see <Ref>) to obtain an integral (1/2-)-approximate matching <cit.>, and even almost maximal matchings <cit.>, as defined in <cit.> and restated below. A matching M in G is an -almost maximal matching (-AMM) if M is maximal with respect to some subgraph G[V∖ U] obtained by removing at most |U|≤·μ(G) vertices from G, where μ(G) is the maximum matching size in G. Such -AMM's are (1/2 - )-approximate with respect to the maximum matching <cit.>. Moreover, (almost) maximality of -AMM makes their maintenance a useful algorithmic subroutine <cit.>. Extending our approach to rounding the aforementioned well-structured, “near maximal” dynamic fractional matchings in general graphs <cit.>, we obtain faster -AMM algorithms, as follows (see <Ref> for formal statement). There exist dynamic algorithms maintaining -AMM in general graphs in update time Õ(^-3)+O(t_f + u_f· t_r), where t_f and u_f are the update time and number of calls to of any “structured” dynamic fractional matching algorithm, and t_r is the time for “partial” rounding. Furthermore, there exist dynamic partial rounding algorithms with the same update times and adaptivity as those of <Ref>. §.§.§ Applications Applying our rounding algorithms to known fractional algorithms yields a number of new state-of-the-art dynamic matching results. For example, by a black-box application of <Ref>, we deterministically round known decremental (fractional) bipartite matching algorithms <cit.> with no asymptotic ovehead, yielding faster (1-)-approximate decremental bipartite matching algorithms. We also discuss how a variant of <Ref> together with the general-graph decremental algorithm of <cit.> leads to a conjecture regarding the first deterministic sub-polynomial update time (1-)-approximate decremental matching algorithm in general graphs. Our main application is obtained by applying our rounding algorithm for general graphs of <Ref> to the O(^-2)-time fractional matching algorithm of <cit.>, yielding the following. thmAMMresults For any >0, there exist dynamic -AMM algorithms that are: * Deterministic, using Õ(^-3·log n) time. * , using Õ(^-3· (loglog n)^2) time, correct w.h.p. * , using Õ(^-3) expected time. In contrast, all prior non-oblivious (1/2-)-approximate matching algorithms had at least quartic dependence on , which the above result improves to cubic. Moreover, this result yields the first deterministic O(log n)-time and adaptive o(log n)-time high-probability algorithms for this widely-studied approximation range and for near-maximal matchings. This nearly concludes a long line of work on deterministic/adaptive dynamic matching algorithms for the (1/2-) approximation regime <cit.>. §.§ Our Approach in a Nutshell Here we outline our approach, focusing on the key ideas behind <Ref>. 
To better contrast our techniques with those of prior work, we start by briefly overviewing the latter. Previous approaches. Prior dynamic rounding algorithms <cit.> all broadly work by partially rounding the fractional matching to obtain a matching sparsifier S (a sparse subgraph approximately preserving the fractional matching size compared to ). Then, they periodically compute a (1-)-approximate matching in this sparsifier S using a static Õ(|S|·^-1)-time algorithm (e.g., <cit.>) whenever x changes by ·x, i.e., every Ω(·x) updates. This period length guarantees that the matching computed remains a good approximation of the current fractional matching during the period, with as good an approximation ratio as the sparsifier S. Now, for sparsifier S to be O(1)-approximate, it must have size |S|=Ω(x), and so this approach results in an update time of at least Ω(^-2). Known dynamic partial rounding approaches all result in even larger sparsifiers, resulting in large (^-1, log n) update times. Direct to integrality. Our first rounding algorithm for bipartite graphs breaks from this framework and directly rounds to integrality. This avoids overhead of periodic recomputation of static near-maximum matching algorithms, necessary to avoid super-linear-in-^-1 update time (or n^o(1) factors, if we substitute the static approximate algorithms with the breakthrough near-linear-time max-flow algorithm of <cit.>). The key idea is that, by encoding each edge's weight in binary, we can round the fractional matching “bit-by-bit”, deciding for each edge whether to round a component of value 2^-i to a component of value 2^-i+1. This can be done statically in near-linear-time by variants of standard degree splitting algorithms, decreasing the degree of each node in a multigraph by a factor of two (see <Ref>). Letting L:=log((min_e:x_e≠ 0 x_e)^-1), we show that by buffering updates of total value at most O(·x/L) for each power of 2, we can efficiently dynamize this approach, obtaining a dynamic rounding algorithm with update time Õ(^-1· L^2). As we can assume that min_e:x_e≠ 0 x_e≥/n^2 (<Ref>), this gives our bipartite Õ(^-1·log^2n) time algorithm. Faster partial rounding. The second ingredient needed for <Ref> are a number of algorithms for “partially rounding” fractional matchings, increasing min_e:x_e≠ 0 x_e while approximately preserving the value of the fractional matching. (The output is not quite a fractional matching, but in a sense is close to one. See <Ref>.) Our partial rounding algorithms draw on a number of techniques, including fast algorithms for partitioning a fractional matching's support into multiple sparsifiers, as opposed to a single such sparsifier in prior work, and a new sampling data structure of possible independent interest (<Ref>).[From this we derive the first matching algorithm that is not also .]^,[Concurrently to our work, another such sampling algorithm was devised <cit.>. See discussion in <Ref>.] Combining these partial rounding algorithms with our simple algorithm underlies all our bipartite rounding results of <Ref>, as well as our general graph rounding results of <Ref> (with further work – see <Ref>). §.§ Related Work The dynamic matching literature is vast, and so we only briefly discuss it here. For a more detailed discussion, see, e.g., the recent papers <cit.>. The dynamic matching problem has been intensely studied since a seminal paper of Onak and Rubinfeld <cit.>, which showed how to maintain a constant-approximate matching in polylogarithmic time. 
Results followed in quick succession, including conditional polynomial update time lower bounds for exact maximum matching size <cit.>, and numerous algorithmic results, broadly characterized into two categories: polynomial time/approximation tradeoffs <cit.>, and 1/2- or (1/2-)-approximate algorithms with polylogarithmic or even constant update time <cit.>.[Other works studied approximation of the maximum matching size <cit.>.] We improve the state-of-the-art update times for all deterministic and adaptive algorithms in the intensely-studied second category. The (1-)-approximate matching problem has also been studied in partially dynamic settings. This includes a recent algorithm supporting vertex updates on opposite sides of a bipartite graph, though not edge updates <cit.> (see arXiv). For incremental (edge-insertion-only) settings several algorithms are known <cit.>, the fastest having (^-1) update time <cit.>. In decremental settings (edge-deletion-only), rounding-based algorithms with (^-1,log n) update time in bipartite graphs <cit.> and randomized exp(^-1) in general graphs <cit.> are known. We improve on these decremental results, speeding up bipartite matching, and giving the first deterministic logarithmic-time algorithm for general graphs. §.§ Paper Outline Following some preliminaries in <Ref>, we provide our first simple bipartite rounding algorithm in <Ref>. In <Ref> we introduce the notion of partial roundings that we study and show how such partial rounding algorithms (which we provide and analyze in <Ref>) can be combined with our simple algorithm to obtain the (bipartite) rounding algorithms of <Ref>. In <Ref> we analyze our rounding algorithms when applied to known fractional matchings in general graphs, from which we obtain <Ref>. We conclude with our decremental results in <Ref>. § PRELIMINARIES Assumptions and Model. Throughout, we assume that x≥ 1, as otherwise it is trivial to round x within a factor of 1-, by maintaining a pointer to any edge in x whenever the latter is not empty. In this paper we work in the word RAM model of computation with words of size w:=Θ(log n), allowing us to index any of 2^O(w)=(n) memory addresses, perform arithmetic on w-bit words, and draw w-bit random variables, all in constant time. We will perform all above operations on O(log(^-1· n))-bit words, which is still O(w) provided ^-1=(n). If is much smaller, all stated running times trivially increase by a factor of O(log(^-1)). Notation. For multisets S_1 and S_2, we denote by S_1⊎ S_2 the “union” multiset, in which each element has multiplicity that is the sum of its multiplicities in S_1 and S_2. A vector is λ-uniform if x_e = λ for all e ∈, and is uniform if it is λ-uniform for some λ. Given fractional matching , we call an integral matching M ⊆ that is (1-)-approximate, i.e., |M| ≥· (1 - ) an -approximate rounding of . Finally, we use the following notion of distance and its monotonicity. For vectors ,∈ℝ^E and ≥ 0, define d^ϵ_V(, ) := ∑_v ∈ V (|x(v) - y(v)| - ϵ)^+, for (z)^+:=max(0, z) the positive part of z∈ℝ. Then, we have d^_V(,)≤ d^'_V(,) for all ϵ≥ϵ'. Moreover, by the triangle inequality and the basic fact that (a+b)^+≤ a^+ + b^+ for all real a,b, we have d^_1+_2_V(,)≤ d^_1_V(,) + d^_2_V(,) for all _1,ϵ_2≥ 0 and vectors ,,∈ℝ^E. Support and Binary encoding. We denote the binary encoding of each edge e's fractional value by x_e ∑_i (x_e)_i · 2^-i. We further let _i(){e∈ E | (x_e)_i=1} denote the set of coordinates of whose i-th bit is a 1. So, ()=⋃_i _i(). 
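To make the encoding concrete, the small Python sketch below (ours; the helper name bit_sets is not from the paper) extracts the sets E_i(x) greedily from floating-point edge values, implicitly truncating each value to its L most significant bits, as permitted by the observation that follows.

def bit_sets(x, L):
    # E[i] collects the edges whose i-th bit is one, so that x_e is (approximately)
    # the sum of 2^{-i} over all i with e in E[i]; `x` maps edges to values in (0, 1].
    E = [set() for _ in range(L + 1)]
    for e, xe in x.items():
        r = xe
        for i in range(L + 1):
            if r >= 2.0 ** (-i):
                E[i].add(e)
                r -= 2.0 ** (-i)
    return E

# example: bit_sets({('u', 'v'): 0.625}, L=4) places the edge in E[1] and E[3],
# since 0.625 = 2^{-1} + 2^{-3}.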
Next, we let _minmin_e ∈x x_e. The following observation allows us to restrict our attention to a small number of bits when rounding bipartite fractional matchings . (In <Ref> we extend this observation to the structured fractional matchings in general graphs that interest us there.) For rounding bipartite fractional matching, by decreasing by a constant factor, it is without loss of generality that _min≥/n^2 and moreover if Δ≤_min and L:=1+⌈log(^-1Δ^-1)⌉, we may safely assume that (x_e)_i=0 for all i>L. Let '=/3. Consider the vector ' obtained by zeroing out all entries e of with x_e<'/n^2 and setting (x_e)_i=0 for all edges e and indices i>L. Clearly, x'⊆x and ' is a fractional matching, as '≤. The following shows that x' is not much smaller than x≥ 1. x'≥x - n 2'/n^2 - ∑_e∑_i>L 2^-i≥x - ' - ∑_e '·_min≥x· (1-') - ∑_e '· x_e = (1-2')·x. Therefore, a matching M⊆x'⊆x that is (1-')-approximate w.r.t. x' is (1-)-approximate w.r.t. , as |M|≥ (1-')·x'≥ (1-3')·x = (1-)·x. Recourse Lower Bound. We note that the number of changes to M per update (a.k.a. the rounding algorithm's recourse) is at least Ω(^-1) in the worst case. Any (1-)-approximate dynamic matching rounding algorithm must use Ω(^-1) amortized recourse, even in bipartite graphs. Consider a path graph G of odd length 4^-1+2 with values 1/2 assigned to each edge. A matching M⊆x of size |M|≥ (1-)·x must match all odd-indexed edges of the path. However, after invoking (· ,0) for the first and last edges in the path, for |M|≥ (1-)·x (for the new ), M must match all even-indexed edges. Therefore, repeatedly invoking (·, 0) and then (·, 1/2) for these two edges sufficiently many times implies that the matching M maintained by must change by an average of Ω(^-1) edges per update. §.§ The subroutine Throughout the paper, we us the following subroutine to partition a graph into two subgraphs of roughly equal sizes while roughly halving all vertices' degrees. Such subroutines obtained by e.g., computing maximal walks and partitioning them into odd/indexed edges, have appeared in the literature before in various places. For completeness, we provide this algorithm in <Ref>. propdsplitalgo There exists an algorithm , which on multigraph G=(V,E) with maximum edge multiplicity at most two (i.e., no edge has more than two copies) computes in O(|E|) time two (simple) edge-sets E_1 and E_2 of two disjoint sub-graphs of G, such that E_1,E_2 and the degrees d_G(v) and d_i(v) of v in G and H_i:=(V,E_i) satisfy the following. * |E_1|=⌈|E|/2⌉ and |E_2|= ⌊|E|/2⌋. * d_i(v)∈[ d_G(v)/2-1, d_G(v)/2+1] for each vertex v∈ V and i∈{1,2}. * d_i(v)∈[⌊d_G(v)/2⌋, ⌈d_G(v)/2⌉] for each vertex v∈ V and i∈{1,2} if G is bipartite. § SIMPLE ROUNDING FOR BIPARTITE MATCHINGS In this section we use the binary encoding of to approximately round fractional bipartite matchings in a “linear” manner, rounding from the least-significant to most-significant bit of the encoding. We first illustrate this approach in a static setting in <Ref>. This will serve as a warm-up for our first dynamic rounding algorithm provided in <Ref>, which is essentially a dynamic variant of the static algorithm (with its procedure being essentially the static algorithm). §.§ Warm-up: Static Bipartite Rounding In this section, we provide a simple static bipartite rounding algorithm for fractional matchings. Specifically, we prove the following <Ref>, analyzing our rounding algorithm, <Ref>. The algorithm simply considers for all i, E_i := _i(x), i.e., the edges whose i-th bit is set to one in . 
Starting from F_L=∅, for i=L,…,1, the algorithm applies to the multigraph G[F_i⊎ E_i] and sets F_i-1 to be the first edge-set output by (by induction, E_i,F_i are simple sets, and so G[F_i⊎ E_i] has maximum multiplicity two.) Overloading notation slightly, we denote this by F_i -1(G[F_i⊎ E_i]). The algorithm then outputs E_0∪ F_0. On fractional bipartite matching and error parameter ϵ∈ (0, 1), <Ref> outputs an integral matching M ⊆x with |M| ≥ (1 - ϵ) · in time O(|| ·log (^-1_min^-1)). By <Ref>, L=O(log(^-1· n)), and so <Ref> implies an O(|x|·log (^-1· n)) runtime for <Ref>. We prove this theorem in several steps. Key to our analysis is the following sequence of vectors (which we will soon show are fractional matchings if x is bipartite). Letting F_i(e):=1[e∈ F_i] and E_i(e):=1[e∈ E_i] = (x_e)_i, we define a sequence of vectors ^(i)∈ℝ^E_≥ 0 for i=0,1,…,L as follows. x^(i)_e := F_i(e)· 2^-i + ∑_j = 0^i E_j(e) · 2^-j. So, ^(L)≥ (1-)·, by definition and <Ref>. Moreover, each (copy of) edge e output/discarded by (G[E_i⊎ F_i]) corresponds to adding/subtracting 2^-i to/from x^(i)_e to obtain x^(i-1)_e. This allows us to prove the following lower bound on the size of the output. ^(i)≥ (1-)·x for all i∈{0,1,…,L}. By Property <ref> we have that |F_i-1|≥⌈1/2 (|F_i|+|E_i|)⌉ and so ^(i-1) = ^(i) + 2^1-i·∑_e F_i-1(e) - ∑_e 2^-i· (F_i(e)+E_i(e))≥^(i). Therefore, repeatedly invoking the above bound and appealing to <Ref>, we have that indeed, for all i∈{0,1,…,L}, ^(i) ≥^(i+1)≥…≥^(L)≥ (1-)·x. A simple proof by induction shows that if x is bipartite, then the above procedure preserves all vertices' fractional degree constraints, i.e., the vectors ^(i) are all fractional matchings. If x is a fractional bipartite matching then x^(i)(v) ≤ 1 for every vertex v∈ V and i∈{0,1,…,L}. By reverse induction on i≤ L. The base case holds since ^(L)≤ is a fractional matching. To prove the inductive step for i-1 assuming the inductive hypothesis x^(i)(v)≤ 1, let d_G_i(v)=∑_e∈ v(E_i(e)+F_i(e)) be the number of (possibly parallel) edges incident to v in G_i:=G[E_i⊎ F_i]. By Property <ref>, we have the following upper bound on v's fractional degree under ^(i-1). x^(i-1)(v) ≤ x^(i)(v) - d_G_i(v)· 2^-i + ⌈d_G_i(v)/2⌉· 2^-i+1. If d_G_i(v) is even, then we are done, by the inductive hypothesis giving x^(i-1)(v)≤ x^(i)(v) ≤ 1. Suppose therefore that d_G_i(v) is odd. By <Ref>, any value x^(i)_e is evenly divisible by 2^-i and therefore the same holds for x^(i)(v). By the same token, d_G_i(v) is odd if and only if x^(i)(v) is not evenly divisible by 2^-i+1. However, since x^(i)(v) is evenly divisible by 2^-i and it is at most one, this implies that x^(i)(v)≤ 1-2^-i. Combined with <Ref>, we obtain the desired inequality when d_G_i(v) is odd as well, since x^(i-1)(v) ≤ x^(i)(v) + 2^-i≤ 1-2^-i+2^-i = 1. Now, since the vector ^(0) is integral, the preceding lemmas imply that if is a bipartite fractional matching then M is a large integral matching. If is a fractional bipartite matching then M=E_0∪ F_0⊆x is an integral matching of cardinality at least |M|=|E_0|+|F_0|≥ (1-)·x. By <Ref>, the (binary) vector ^(0) (the characteristic vector of M) is a feasible fractional matching, and so M is indeed a matching. That M⊆x follows from outputting a sub(multi)set of the edges of its input, and therefore a simple proof by induction proves that () ⊇(^(L)) ⊇(^(L-1)) ⊇…⊇(^(0)) = M. The lower bound on |M|=x^(0) then follows from <Ref>. Finally, we bound the algorithm's running time. <Ref> takes time O(|x|· L) when run on vector ∈ℝ^E_≥ 0. 
To analyze the runtime of the algorithm, note that it runs in time O(L + ∑_i = 0^L (|F_i| + |E_i|)). Further, |F_L| = 0 and by Property <ref> we have that |F_i| ≤1/2 |F_i + 1| + 1/2|E_i + 1| + 1 for all i ∈{0,1,…,L-1}. Letting m |x| we know that |E_i| ≤ m for all i, and so by induction |F_i| ≤1/2|F_i+1|+1/2m + 1 ≤ m + 2 for all j ∈{0, 1, ... , L - 1}. Thus, the algorithm runs in the desired time of O(m L + L) = O(m L). <Ref> follows by combining the two preceding lemmas. We now turn to the dynamic counterpart of <Ref>. §.§ A Simple Dynamic Bipartite Rounding Algorithm In this section we dynamize the preceding warm-up static algorithm, obtaining the following result. <Ref> is a deterministic dynamic bipartite matching rounding algorithm. Under the promise that the dynamic input vector satisfies _min≥δ throughout, its amortized time is O(^-1·log^2(^-1δ^-1)) and its time on vector is O(|x|·log(^-1δ^-1)). Since δ≥/n^2 by <Ref>, <Ref> yields an Õ(^-1·log^2n) update time algorithm. Our dynamic algorithm follows the preceding static approach. For example, its initialization is precisely the static <Ref> (and so the time follows from <Ref>). In particular, the algorithm considers a sequence of graphs G_i:=G[E_i⊎ F_i] and fractional matchings ^(i) defined by G_i and the i most significant bits of x_e, as in <Ref>. However, to allow for low (amortized) update time we allow for a small number of unprocessed changed or deleted edges for each i, denoted by c_i. When such a number c_i becomes large, we rebuild the solution defined by F_i and _i(),…,_0() as in the static algorithm. Formally, our algorithm is given in <Ref>. Conventions and notation. Most of our lemmas concerning <Ref> hold for arbitrary non-negative vectors ∈ℝ^E_≥ 0, a fact that will prove useful in later sections. We state explicitly which lemmas hold if is a fractional bipartite matching. In the analysis of <Ref> we let ^(i) be as defined in <Ref>, but with E_i and F_i of the dynamic algorithm. Furthermore, we prove all structural properties of <Ref> for any time after and any number of operations, and so we avoid stating this in all these lemmas' statements for brevity. Next, we use the shorthand S_i:=_i(), and note that unlike in the static algorithm, due to deletions from E_i before the next (i), the containment E_i⊆ S_i may be strict. First, we prove that M is a matching if is a bipartite fractional matching. More generally, we prove that each ^(i), and in particular ^(0), is a fractional matching, implying the above. If is a fractional bipartite matching, then ^(i) is a fractional matching for all i∈{0,1,…, L}. Fix vertex v, and let F_i(v) and S_i(v) be the number of edges of v in F_i and S_i respectively, for all i∈{0,1,…, L}. To upper bound x^(i)(v), we start by upper bounding F_i(v), as follows. F_i(v) ≤⌈ 2^i·∑_j=i+1^L S_j(v)· 2^-j⌉ . We prove the above by induction on the number of operations and by reverse induction on i∈{0,1,…,L}, as follows. The base case i=L is trivial, as F_L(v)=0 throughout and the RHS is non-negative. Next, for i<L, consider the effect on F_i(v) of an update resulting in a call to (i+1) (e.g., after calling ), at which point E_i+1 S_i+1. F_i(v) ≤⌈1/2· (S_i+1(v) + F_i+1(v)) ⌉ Property <ref> ≤⌈1/2· S_i+1(v) + 1/2·⌈ 2^i+1·∑_j=i+2^L S_j(v)· 2^-j⌉⌉ Inductive hypothesis for i+1 ≤⌈1/2· S_i+1(v) + 1/2· 2^i+1·∑_j=i+2^L S_j(v)· 2^-j⌉ = ⌈ 2^i·∑_j=i+1^L S_j(v)· 2^-j⌉ , where the last inequality follows from the basic fact that for non-negative y,z with y an integer, ⌈1/2· y + 1/2⌈ z ⌉⌉≤⌈1/2(y+ z)⌉. 
Next, it remains to prove the inductive step for index i and a call to for which (i+1) is not called: but such an update only decreases the left-hand side of <Ref>, while it causes a decrease in the right-hand side (by one) only if an edge of v was updated in this call to , in which case we delete at least one edge incident to v in F_i, if any exist, and so the left-hand side also decreases by one (or is already zero). Finally, combining <Ref> and <Ref>, we obtain the desired inequality x^(i)(v)≤ 1. x^(i)(v) ≤ F_i(v)· 2^-i + ∑_j=0^iS_j(v)· 2^-j≤ 2^-i·⌈ 2^i·∑_j=i+1^L S_j(v)· 2^-j⌉ + ∑_j=0^iS_j(v)· 2^-j≤ 1. Above, the first inequality follows from <Ref> and E_i(v)≤ S_i(v) since E_i⊆ S_i. The second inequality follows from <Ref>. Finally, the final inequality relies on ∑_j S_j(v)· 2^-j = x(v) ≤ 1, together with 2^i·∑_j=i+1^L S_j(v)· 2^-j being fractional if and only if ∑_j=i+1^L S_j(v)· 2^-j is not evenly divisible by 2^-i, though it is evenly divisible by 2^-i-1, in which case x(v)≤ 1-2^-i. The same proof approach, using Property <ref> of (for possibly non-bipartite graphs) implies the global upper bound |F_i| ≤⌈ 2^i·∑_j=i+1^L |S_j|· 2^-j⌉≤ 1 + 2^i·x. Next, we prove the second property of a rounding algorithm, namely that M⊆x. (^(i))⊆x for all i∈{0,1…,L} and therefore M⊆x. We prove the stronger claim by induction on the number of operations of <Ref> and by reverse induction on i∈{0,1,…,L} that (^(i)) = E_i∪ F_i⊆(^(i+1))⊆x. That E_i⊆_i() throughout is immediate, since E_i_i() when (i) is called (and in particular after was called), and subsequently all edges e∈ E_i that are updated (and in particular each edge whose x_e value is set to zero) are removed from E_i. Therefore E_i⊆x throughout, and in particular (^(L)) = E_L ⊆x. Similarly, by the properties of and the inductive hypothesis, we have that after (i) is called, F_i⊆ F_i+1∪ E_i+1⊆(^(i+1))⊆x, and each edge e updated since is subsequently deleted from F_i (as are some additional edges). Therefore F_i⊆x throughout. We conclude that (^(i))⊆x for all i, as desired. We now argue that the unprocessed edges have a negligible effect on values of ^(i) compared to their counterparts obtained by running the static algorithm on the entire input . ^(i)≥ (1-2)·x for every i∈{0,1,…,L}. As in the proof of <Ref>, by Property <ref>, after or any (·,·) triggering a call to (i) we have that |F_i-1| = ⌈1/2 (|F_i|+|E_i|)⌉, and so ^(i-1)≥^(i). On the other hand, between calls to (i) there are at most 2^i-2·x/L calls to (e,ν), resulting in at most 2^i-1·x/L many edges being deleted from F_i-1, which in turn result in ^(i-1) decreasing by at most 2^-(i-1)· 2^i-1·x/L = x/L. In contrast, by <Ref>, any changes in E_j for j≠ i have no effect on ^(i-1)-^(i). On the other hand, until the next (i) is triggered, we have that E_i and F_i can only decrease (contributing to an increase in x^(i-1) - x^(i)); E_i can only decrease since edges are only added to E_i when (i) is called, and F_i only decreases until (i+1) is called, which triggers a call to (i). Therefore, x^(i-1) - x^(i) decreases by at most x/L during updates until the next call to (i), and so after and after every of <Ref>, we have that ^(i-1)≥^(i) - x/L. Invoking the above inequality L times, and using that ^(L)≥ (1-)·x by <Ref>, we obtain the desired inequality. |E_0∪ F_0| = ^(0) ≥^(1) - x/L≥…≥^(L) - L·x/L≥ (1-2)·x. The latter is nearly tight, as ^(i)≤ (1+ϵ)·x + 2^1-i for every i∈{0,1,…,L}. 
The proof follows that of <Ref>, with the following changes: By Property <ref>, after initialization or any update (·,·) triggering a call to (i) we have the upper bound |F_i-1| = ⌈1/2(|F_i|+|E_i|)⌉≤ 1+1/2(|F_i|+|E_i|), and so ^(i-1)≤^(i)+2^-i+1. On the other hand, the increase in x^(i-1) - x^(i) until such a (i) is at most ϵx/L (similarly to the decrease in the same). The proof then concludes similarly to that of <Ref>, also using that ^(L)≤x. Finally, we turn to analyzing the algorithm's update time. The (amortized) time per update of <Ref> is O(^-1· L^2). By <Ref>, |F_i|≤ 1 + 2^i·x = O(2^i·x) (recalling that without loss of generality x≥ 1). Similarly, trivially |S_i| = 2^i· 2^-i· |S_i|≤ 2^i·x. Therefore, by <Ref>, the calls to (G[E_i⊎ F_i]) in <Ref> (at which point E_i=S_i) take time O(2^i·x), and so the time for (i) is ∑_j=0^i O(2^j·x) = O(2^i·x). But since (i) is called after 2^i-2·x/L updates, its cost amortizes to O(^-1· L) time per update. Summing over all i∈{0,1,…,L}, we find that indeed, the amortized time per operation, which is O(L) (due to deleting O(1) edges from each E_i and F_i for each i) plus its contribution to periodic rebuild calls, is O(^-1· L^2). We are finally ready to prove <Ref>. <Ref> is a dynamic rounding algorithm for bipartite fractional matchings, since M is a matching contained in (_0)⊆(_1)⊆…⊆() if the latter is bipartite, by <Ref> and <Ref>, and moreover |M|=^(0)≥ (1-2)·x, by <Ref>. The algorithm's update and initialization times follow from <Ref> and <Ref>. To (nearly) conclude, this section provides a simple bipartite rounding algorithm with near-optimal -dependence. In the following section we show how partially rounding the fractional matching allows us to dynamically guarantee that _min be sub-polynomial in /n, thus allowing us to decrease L and obtain speedups (improved n-dependence) when combined with <Ref>. <Ref> in general graphs. Before continuing to the next section, we mention that the alluded-to notion of partial rounding will also be useful when rounding (well-structured) fractional matchings in general graphs as well (see <Ref>). With this in mind, we provide the following lemma, which is useful to analyze <Ref> when rounding general graph matchings. lemsequentialgeneral For d^c_V(, ) := ∑_v ∈ V (|x(v) - y(v)| - c)^+, the vectors ^(i) satisfy d^2^-i+1_V(,^(i))≤·x ∀ i∈{0,1,…,L}. First, we verify that the inequality holds (with some extra slack) right after (i) (and in particular right after initialization). Indeed, by Property <ref> of , during the invocation of which E_i=S_i, we have that F_i(v) ∈[1/2(E_i+1(v)+F_i+1(v)) - 1, 1/2(E_i+1(v)+F_i+1(v)) + 1]. Therefore, by <Ref>, for each vertex v, we have after (i) that |x^(i)(v)-x^(i+1)(v)|≤ 2^-i. On the other hand, before an update with current input there are at most 2^i-2·x/L update calls since the last call to (i), resulting in at most 3· 2^i-2·x/L many edges being added or deleted from E_i∪ F_i. Therefore, during the updates between calls to (i), the total variation distance between ^(i) and ^(i+1) changes by at most x/L, and so after initialization and after any update, ∑_v (|x^(i)(v) - x^(i+1)(v)|-2^-i)^+≤x/L. Now, using the basic fact that (a+b)^+ ≤ a^+ + b^+ for all real a,b, summing the above difference over all j∈{i,…,L-1}, and using that =^(L) by <Ref>, we obtain the desired inequality, as follows. ∑_v (|x^(i)(v) - x^(L)(v)|-2^-i+1)^+ ≤∑_j=i^L-1∑_v (|x^(j)(v) - x^(j+1)(v)|-2^-j)^+ ≤ L·x/L = x. § PARTIAL ROUNDING: A PATH TO SPEEDUPS So far, we have provided a rounding algorithm with near-optimal dependence on (by <Ref>) and polylogarithmic dependence on _min^-1=(^-1n) of the fractional matching .
To speed up our algorithm we thus wish to dynamically maintain a “coarser” fractional matching ' (i.e., with larger ('_min)^-1 than _min^-1) that approximately preserves the value of . The following definition captures this notion of coarser fractional matchings that we will use.[In what follows, we use the definition of d^ϵ_V from <Ref>.] DefsplitDef Vector '∈^E_≥ 0 is an (ϵ, δ)-coarsening of a vector ∈ℝ^E_≥ 0 if: * Containment: x'⊆x. * Global Slack: |x - x'| ≤· +. * Vertex Slack: d^ϵ_V(, ') ≤·x +. * Edge Values: x'_e ∈{0}∪ [δ, 2δ) if x_e < δ and x'_e = x_e otherwise. The coarsening ' is bounded if it also satisfies the following property: * Bounded: x'(v) ≤ x(v) + for all v ∈ V. We briefly motivate the above definition: As we shall see, properties <ref> and <ref> imply that ' (after mild post-processing) is a (1-)-approximation of , and so rounding '≤ results in a (1-)^2≥ (1-2)-approximation of . The less immediately intuitive Property <ref> will also prove useful when rounding in general graphs, in <Ref>. For now, we will use this property when combining coarsenings of disjoint parts of the support of . Property <ref> then allows us to round such coarsening ' efficiently, with only a polylogarithmic dependence on δ^-1, using <Ref> (by <Ref>). Finally, Property <ref> guarantees that '/(1+) is a fractional matching. A key ingredient for subsequent sections is thus a dynamic coarsening algorithm, as follows. A dynamic (,δ)-coarsening algorithm is a data structure supporting the following operations: * (G = (V,E), ∈^E_≥ 0): initializes the data structure for undirected graph G with vertices V and edges E, current vector . * (e ∈ E, ν∈ [0,1]): sets x_e ν. The algorithm must maintain an (,δ)-coarsening ' of (the current) . As we show in <Ref>, the internal state of <Ref> yields a dynamic coarsening algorithm. In this section we state bounds for a number of dynamic coarsening algorithms (analyzed in <Ref>), with the objective of using their output as the input of <Ref>, from which we obtain faster dynamic bipartite rounding algorithms than when using the latter algorithm in isolation. The following lemma, proved in <Ref>, captures the benefit of this approach. lemcoarsentoround (From coarsening to rounding). Let be a dynamic (ϵ,δ)-coarsening algorithm with time t^_U := t^_U(, δ, n) and time O(|| · t^_I). Let be a dynamic rounding algorithm for fractional matchings with _min≥δ, with time t^_U := t^_U(,δ,n) and time O(|x|· t^_I), for t^_I := t^_I(,δ,n). Then, there exists an O(ϵ + δ)-approximate dynamic rounding algorithm ^* with time O(t_U^ + t^_U + ^-1· t^_I) and time O(|| · (t^_I + t^_I)) which is deterministic//if both and are. In our invocations of <Ref> we will use <Ref> to play the role of Algorithm . In <Ref> we provide a number of coarsening algorithm, whose properties we state in this section, together with the obtained rounding algorithms' guarantees. A number of our coarsening algorithms will make use of subroutines for splitting (most of) the fractional matching's support into numerous disjoint coarsenings, as in the following. An (ϵ, δ)-split of fractional matching ∈ℝ_≥ 0^E with _max≤δ consists of (ϵ, δ)-coarsenings ^(1),…,^(k) with disjoint supports, together covering at least half of z, i.e., ∑_i |z^(i)|≥1/2· |z|. The following lemma combined with <Ref> motivates our interest in such splits. lemsplittocoarsen (From static splitting to dynamic coarsening). Let be a static (γ, δ)-split algorithm with running time |x| · t_s on uniform fractional matching , where t_s := t_s(n,γ,δ). 
Then there exists a dynamic algorithm which for any (possibly non-uniform) fractional matching maintains an (O( + γ·^-1· (log (γ^-1· n)), δ)-coarsening of with time O(^-1· t_s) and time O(|x| · t_s). Algorithm is deterministic//if is. Section outline. We prove <Ref> in <Ref>, respectively. Before that, we state bounds of a number of such partial rounding algorithms (presented and analyzed in <Ref>), together with the rounding algorithms we obtain from these, yielding <Ref>. §.§ Partial Rounding Algorithms, with Applications Here we state the properties of our coarsening and splitting algorithms presented in <Ref>, together with the implications to dynamic rounding, as stated in <Ref>. In <Ref>, we provide a deterministic static split algorithm as stated in the following lemma. lemdetsplit For any > 0, there exists a deterministic static (4 , )-split algorithm which on input uniform fractional matchings runs in time O(|x| ·log(ϵ^-1· n)). Combining the above lemma with <Ref> yields the first result of <Ref>. There exists a deterministic dynamic bipartite rounding algorithm with time O(^-1· t(,n)) and time O(|x|· t(,n)), for t(,n) = log n + log^2(^-1). By <Ref>, there exists a deterministic static (4 ^3/log n, ^3/log n)-split algorithm that on uniform fractional matching runs in O(|| ·log(^-1· n)) time. Plugging this algorithm into Lemma <ref> yields a deterministic dynamic (O(), ^3/log n )-coarsening algorithm with time t^_U = O(^-1· (log n)) and initialization time O(|| ·log n). Moreover, by <Ref> there exists a deterministic dynamic bipartite matching rounding algorithm for fractional bipartite matchings with _min=Ω((^-1·log n)) with time t^_U = O(^-1·log^2(^-1·log n)) and time O(|x| · t^_I), for t^_I= O(log(^-1·log n)). Plugging these algorithms into <Ref>, we obtain a deterministic algorithm which has time O(t^_U + t^_U + t^_I ·^-1) = O(^-1· (log n + log^2(^-1·log n))) = O(^-1· (log n + log^2(^-1)), and initialization time O(|| · (log n + log^2 (^-1)). The last equality holds for all ranges of n and , whether ^-1=O(log n) or ^-1=Ω(log n). Next, in <Ref> we provide a simple linear-time subsampling-based randomized split algorithm with the following properties. lemrandsplitwhp For any > 0, there exists a static randomized algorithm that on uniform fractional matchings computes an (, ^4/24log^2 n)-split in O(|x|)-time, and succeeds w.h.p. Combining the above lemma with <ref> yields the w.h.p. result of <Ref>. There exists an adaptive dynamic bipartite rounding algorithm that succeeds w.h.p., with time O(^-1· t(,n)) and time O(|x|· t(,n)), for t(,n)=log^2 log n+log^2(^-1). By Lemma <ref>, there exists a randomized static algorithm that computes a (^3/log(n), ^12/24 ·log^6(n))-split of any uniform fractional matching in O(||) time, succeeding w.h.p. Plugging this algorithm into Lemma <ref> we obtain a randomized (with high probability) dynamic (O(), ^12/24 ·log^6(n))-coarsening algorithm with time t^_U = O(^-1) and time O(||). On the other hand, by <Ref>, there exists a deterministic dynamic bipartite matching rounding algorithm for fractional matchings with _min=Ω((^-1·log n)) with time t^_U = O(^-1·log^2(log n ·^-1)) and time O(|x| · t^_I), for t^_I= O(log(log n ·^-1)). Plugging these algorithms into <Ref>, we obtain a randomized algorithm which works with high probability and has time O(t^_U + t^_U + t^_I ·^-1) = O(^-1·log^2(log n ·^-1)) = O(^-1·(log^2log n + log^2(^-1))), and time O(|| ·log^2(log n) + log^2 (^-1)). The last equality holds whether ^-1=O(log n) or ^-1=Ω(log n). 
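As a concrete illustration of the subsampling idea behind the randomized split invoked above (the algorithm itself is presented in detail in the later section on randomized static splitting), here is a minimal sketch; the parameter names and the dictionary representation of the λ-uniform input are ours, and the w.h.p. (ε,δ)-split guarantee is the lemma's, not something the code verifies.

```python
import math
import random

def random_split(edges, lam, delta):
    """Subsampling split of a lam-uniform fractional matching supported on `edges`:
    pick k, a power of two with delta/lam <= k < 2*delta/lam, throw each edge into one
    of k buckets uniformly at random, and give every bucket the uniform value lam*k,
    which lands in [delta, 2*delta). This only performs the construction; the
    concentration argument is in the lemma's proof."""
    if lam >= delta:
        return [{e: lam for e in edges}]              # already at least delta-coarse
    k = 1 << math.ceil(math.log2(delta / lam))
    new_val = lam * k
    buckets = [dict() for _ in range(k)]
    for e in edges:
        buckets[random.randrange(k)][e] = new_val
    return [b for b in buckets if b]                  # drop empty buckets for convenience

if __name__ == "__main__":
    edges = [(i, i + 1) for i in range(1000)]
    parts = random_split(edges, lam=0.001, delta=0.01)
    assert sum(len(p) for p in parts) == len(edges)   # the parts partition the support
    print(len(parts), max(sum(p.values()) for p in parts))
```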
Finally, in <Ref>, building on a dynamic set sampling algorithm which we provide in <Ref>, we give an coarsening algorithm with constant (and in particular independent of n) expected amortized time. lemrandsplitexp There exists an dynamic (O(), ^3)-coarsening algorithm for dynamic fractional matchings with expected time O(^-1) and expected time O(||). Finally, combining the above lemma with <Ref> yields the third result of <Ref>. There exists an dynamic bipartite rounding algorithm with expected time O(^-1· t()) and expected time O(|| · t()) for t() = log^2(^-1). By <Ref>, there exists a dynamic (,O(^3))-coarsening algorithm with expected time t^_U=O(1) and time O(||). On the other hand, by <Ref> there exists a deterministic (hence ) dynamic bipartite matching rounding algorithm for fractional matchings with _min=Ω(^3) with time t^_U = O(^-1·log^2(^-1)) and time O(|x| · t^_I), for t^_I= O(log(^-1)). Plugging these algorithms into <Ref>, we obtain an algorithm with expected time O(t^_U + t^_U + t^_I ·^-1) = O(^-1·log^2(^-1)) and expected time O(|| ·log^2(^-1)). §.§ Proof of Lemma <ref>: Reducing Rounding to Coarsening The following lemma allows us to efficiently convert coarsenings to bounded coarsenings. There exists a deterministic algorithm which given an (,δ)-coarsening ' of fractional matching , finds in O(|'|) time a bounded (3 ( + δ), δ)-coarsening ” of , with ”_e=0 only if x_e < δ. The algorithm: Initialize ”←'. For any vertex v ∈ V such that x”(v) > x(v) + ϵ + 2δ, remove arbitrary edges e incident on v in ” such that x_e ≤δ until x”(v) ≤ x(v) + ϵ + 2δ. Note that edges e with x_e≥δ have x'_e=x_e, and so the above process must terminate as once all edges with weight at most δ are removed for all vertices v ∈ V we have x_v”≤ x_v. Finally, return ”. Running time: Each edge of ” has its value decreased at most once, hence the running time of the algorithm is at most O(|'|). Correctness: By construction and by Property <ref> of coarsening ' of , we have that ”⊆'⊆, and so ” satisfies Property <ref> of a coarsening of . Next, since ' is an (,δ)-coarsening of ' and ” agrees with ' on x”, we find that ” also satisfies Property <ref>. Moreover, by definition, the algorithm ensures that x”(v) ≤ x(v) + ϵ + 2δ for any vertex v ∈ V, and so ' satisfies Property <ref> of bounded (3(+δ),δ)-coarsenings of . To prove the remaining properties we leverage some minor calculations, which we now turn to. Consider some vertex v that had one of its edges in ” deleted. Since before the deletion it must be that x”(v) > x(v) + ϵ + 2δ and the deleted edge had weight at most δ, we must have that after the deletion x”(v) ≥ x(v) + ϵ. This implies that the total weight of deleted edges is upper bounded by d^ϵ_V(, ') ≤· +, as ' is an (, δ)-coarsening of . Hence, |' - ”| ≤· +. Note that | - '| ≤· as ' is an (, δ)-coarsening of . Thus, ” satisfies Property <ref> of (3( + δ), δ)-coarsenings of : |” - | ≤ |' - | + |' - ”| ≤· 2 · + 2 . By Property <ref>, d_V^(, ') ≤· +. Therefore, by <Ref> and the above, we find that ” also satisfies Property <ref> of (3( + δ), δ)-coarsenings of . d_V^3( + δ)(, ”) ≤ d_V^(, ') + d_V^0(',”) ≤ d_V^(, ') + 2 · |” - '| ≤· 3 · + 3. With Claim <ref> established, we are now ready to prove <ref>. We will describe the algorithm ^*. The input dynamic fractional matching of ^* is denoted by . Recall that x_≤ 2δ and x_> 2δ refer to restricted to edges with weight at most 2δ and greater then 2δ respectively. Algorithm ^* uses to always maintain fractional matching ^A^* an (, 2δ)-coarsening of . 
Define scaling constant α = (1 + 3( + 2δ))^-1, and assume α≥ 1/2. In addition ^* completes the following operations: Initialization: ^* uses the algorithm of <Ref> to obtain a bounded (3·(+2δ), 2δ)-coarsening ^A^*_N of x from ^A^*. ^* sets _Small←^A^*_N restricted to edges of x_≤ 2δ and _Large←^A^*_N restricted to edges of (_>2δ). The algorithm initializes its output ' by providing = (_Small + _Large)·α as input to . We will maintain throughout the algorithm, that = (_Small + _Large)·α and that ' is the output of when given input fractional matching . The algorithm furthermore sets counter C ← 0. Let x̂_0 stand for the state of x̂ at initialization. Handling an edge update: For the sake of simplicity, we assume every update either changes the weight of some edge e=(u,v) from 0 to some positive value or vice versa, thus inserting or deleting an edge to/from . If x_e > 2δ, algorithm ^* removes e (if present) from (_Large) or adds e to (_Large) with weight x_e and then updates ' using . Note that by assumption α≥ 1/2, hence if e was inserted into then x̂ undergoes an edge update with weight at least 2δ·α≥δ and hence can handle this update correctly. If x_e ≤ 2 δ, the algorithm removes one arbitrary edge incident on u and v from (x_Small) (if there is any) and administer the changes this makes on x̂ and ' using . Furthermore, if e was deleted from , then remove e from (_Small) and update ' with . If e was inserted then its effect (apart from the updates to its endpoints) is ignored. Either way, set C ← C + 12 ·δ and re-initialize the datastrucrure if C > _0·. Running Time: First note that at initialization the algorithm first needs to initialize ^^* on _≤ 2δ which takes O(t_I^· ||) time. Afterwards ^* constructs fractional matching ^^*_N in O(||) time. Finally, the algorithm needs to run on the fractional matching which takes O(t_I^· ||) time. || = |(_Small)| + |(_Large)| where (_Large) = (_ > 2δ) and (_Small) ⊆(_≤ 2 δ). Hence, initialization takes time O(|| · (t_I^+ t_I^)). The fact that the algorithm maintains an integral matching M⊆ of expected size at least |M|≥· (1-O(ϵ+δ)) follows from Claim <ref> and the properties of ϵ-rounding algorithms. Observe that unless the algorithm re-initializes after an update, it only uses to handle O(1) updates on the support of , which takes update time O(t^_U). The algorithm has to run on the dynamic fractional matching at all times which adds an update time of t^_U. Furthermore, the algorithm sometimes re-initializes. Consider the cost of initialization at time 0, which took O(|_0| · t^_U) time. As has edge weights at least 2δ·α≥δ we know that |_0| = O(x̂_0 / δ). By the next initialization C>_0· hence at least Ω(_0· /δ) updates must have occurred. By Claim <ref> we have that = Θ(). Amortizing the cost of the initialization over these updates yields an additional expected update time cost of O(t^_U ·^-1). Hence, the total expected amortized update time of the algorithm is O(t^_U + t^_U + t^_I ·^-1). Adaptivity: Note that other than the inner operations of and , the algorithm's actions are deterministic, and hence it is deterministic//if both and are. At all times ≥· (1 - O(ϵ+δ)) and is a valid fractional matching. We will first argue that remains a valid fractional throughout the set of updates. By definition throughout the run of the algorithm = (_Small + _Large) ·α. Fix some vertex v ∈ V. As x_Large(v) = x_> 2δ(v) and is a valid fractional matching it is sufficient to argue that x_Small(v) ≤ x_≤ 2δ(v) + 3 · ( + 2δ) at all times. 
Note that this holds at initialization due to Property <ref> of bounded coarsenings. Assume an edge update occurs to _≤ 2δ to edge (u,v). As all edges of _≤ 2δ have weight at most 2δ this update may decrease x_Small(u) and x_Small(u) by at most 2δ. The algorithm compensates for this through deleting an edge incident on both u and v if any are present in (_Small) (which all have weight at least 2δ in _Small). Hence for any edge e in (_Small) we have that x_Small(e) ≥ 2δ after these deletions x_Small(u) ≤ x_≤ 2δ(u) + 3 · ( + 2δ) and x_Small(v) ≤ x_≤ 2δ(v) + 3 · ( + 2δ) hence x̂ remains a valid fractional matching. It remains to argue that ≥· (1 - O(ϵ+δ)). Observe, that at initialization the inequality holds (as _Small is a (3(ϵ + 2δ), δ)-coarsening of _≤ 2δ) and _> 2δ≤_Large / α. As edges of _Small are in [2δ, 4δ] we also must have that by definition (_Small + C) ·α≥_≤ 2δ throughout the run of the algorithm as C is increased by 4 δ whenever an edge update occurs to . As whenever C > _0· we re-initialize we must have that ≥· (1 - O(ϵ + δ)) at all times. This concludes the proof of <ref>. §.§ Proof of Lemma <ref>: Reducing Dynamic Coarsening to Static Splitting We start with a lemma concerning the “stability” of coarsenings under (few) updates. Let ^(0) and ^(t) be two fractional matchings with maximum edge weights δ which differ on at most ^(0)··δ^-1 many edges. Assume that ^(0)' is a (γ, δ)-coarsening of ^(0). Define ^(t)' to be ' restricted to edges of ^(t). Then ^(t)' is a (30 + 2γ, δ)-coarsening of ^(t). Property <ref> of coarsenings follows trivially as ^(t)' is restricted to ^(0) by definition. As ' is an (γ, δ)-coarsening of ^(0) we are guaranteed by Property <ref> that edges of ' take weight in [δ, 2δ), implying Property <ref>. First note that as ^(0) and ^(t) differ on at most ^(0)·δ^-1· edges. Each edge they differ on takes weight at most δ. This first implies that ^(0)≤^(t)(1+2). Furthermore, summing the difference over all vertices implies that d_V^0(^(0), ^(0)) ≤^(0)· 6 ·. Similarly, ^(0)' and ^(t)' may only differ on at most ^(0)·δ^-1· edges. On each of these edges they both take values in [δ, 2δ). Hence, d_V^0(^(0), ^(t)) ≤^(0)· 4. To conclude Property <ref> of coarsenings consider the following line of inequalities: |^(t) - ^(t)'| ≤ 2 · d_V^0(^(0), ^(t)) + | ^(0) - ^(0)'| + 2 · d_V^0(^(0)', ^(0)') ≤x^(0)· (γ + · (12 + 8)) + γ ≤x^(t)· (γ· 2 + · 30) + γ. Inequality <ref> follows from the definitions of the distance d^_V and of norms. Inequality <ref> follows from the observation that ^(0)≤^(t)(1+2). Property <ref> follows from a similar set of inequalities: d_V^30γ(^(t), ^(t)') ≤ d_V^0(^(0), ^(t)) + d_V^γ(^(0),^(0)') + d_V^0(^(0)', ^(0)') ≤x^(0)· (γ + · (6 + 4)) + γ ≤x^(t)· (2 γ + 30 ) + γ. The statement of <Ref> holds assuming |^(0)| ·γ and |^(t)| ·γ are uniform fractional matchings (of the same uniform value) and they differ on at most |^(0)| ·γ edges while ^(0)' and ^(t)' differ on at most |^(t)'| edges. Furthermore, ^(t)' satisfies slightly stronger slack properties, | - '| ≤x· (2γ + 30) + γ and d_V^γ(, ') ≤· (2γ + 30) + γ. To observe Corollary <ref> note that all the inequalities in the proof of Lemma <ref> hold in this slightly modified setting. In Claim <ref>, we show that if the input fractional matching is uniform, then we can maintain a coarsening of efficiently as it undergoes updates. Afterwards we show how to extend the argument to general fractional matchings. 
Let be a static (γ, δ)-split algorithm that on uniform fractional matching takes time O(|x| · t_s), for t_s = t_s(n,γ,δ). Then there exists a dynamic (ϵ+γ, δ)-coarsening algorithm _U for uniform fractional matchings, whose output (ϵ+γ, δ)-coarsening x' of x satisfies slightly stronger slack properties, namely: * Stronger Global Slack: | - '| ≤x· ( + γ) + γ. * Stronger Vertex Slack: d_V^γ/4(, ') ≤· ( + γ) + γ. Algorithm _U has time O(^-1· t_s) and time O(|x|· t_s), and is deterministic/adaptive if is deterministic/randomized. The algorithm _U works as follows. Initialization: _U calls to compute a (γ/4,δ)-split _1, _2, … of . Next, _U sets its output ' to be _1. Let ^(0) and ^(0)_i respectively denote the states of and ^i, at initialization. Note that this implies an time of O(|| · t_s). Handling an Update: If an edge e gets deleted from x, then _U removes e from ' as well (provided we had e ∈'). Once |^(0)| ·/64 updates have occurred to x since the last initialization, _U re-initializes. Further, once more than |'| ·/32 edges have been deleted from x', _U discards x' from memory and switches its output to be another coarsening of the split which satisfies that at most an /8 fraction of its support has been deleted so far. The effects of insertions are ignored, except that they contribute to the counter timing the next re-initialization. Update Time: Observe that the algorithm re-initializes every Ω(· |^(0)|) updates. Hence, the re-initializations have an amortized update time of O(t_s ·^-1). If at some point in time, for some coarsening ^i at least |^(0)_i| ·/32 edges have been deleted from ^(0)_i, then ^i can be discarded from memory as it will never again enter the output in future. Hence, _U adds/removes each edge of the initially computed split exactly once to/from the output. Accordingly, the total work of _U between two re-initializations is proportional to O(|∪_i ^(0)_i|), which is at most O(|x^(0)|) by definition. We similarly amortize this cost over Ω(· |x^(0)|) updates, which leads to an update time of O(^-1). Observe that at all times ' is a (γ/4, δ)-coarsening of -s state at the last re-initialization restricted to edges which have not been deleted since the last re-initialization. This implies both Properties <ref> and <ref>. Correctness: Let ^(t) denote for the state of at some time t after initialization (but before the next re-initialization). ^(t) and ^(0) may differ on at most |^(0)| ·/64 edges. Suppose that the coarsening ' is the output of the algorithm at time t, and let ^(0)' and ^(t)' respectively denote the states of the coarsening x' at initialization and at time t. Since the algorithm would have discarded ' from memory if more than |^(0)'| ·/32 edges of ' got deleted we know that ^(0)' and ^(t)' differ on at most |^(0)'| ·/32 edges. Based on this the correctness of the output of the algorithm follows from <Ref>. It now remains to argue that _U always maintains an output, i.e., the algorithm doesn't discard all coarsenings present in the initial split before re-initialization. By definition of splits, we know that |⋃_i ^(0)_i| ≥ |^(0)| / 2, and the supports of {^(0)_i} are disjoint. The algorithm re-initializes after |^(0)| · / 64 updates. Hence, less then a /32 fraction of ⋃_i ^(0)_i gets deleted between successive re-initializations. Thus, there is always a coarsening in the split that has less then /32 fraction deleted edges (due to the pigeonhole principle), and so _U always maintains a correct output throughout. 
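The bookkeeping in the proof above can be summarized by the following schematic sketch (ours); `split_fn` is a placeholder for any static split algorithm, the constants 1/64, 1/32 and 1/8 mirror the proof, and the per-deletion scan over parts is deliberately unoptimized, so the sketch illustrates the switching and re-initialization logic rather than the claimed update time.

```python
class UniformCoarsener:
    """Schematic sketch of the dynamic coarsening bookkeeping for uniform inputs.
    `split_fn(x)` stands in for any static split algorithm returning a list of
    disjoint coarsenings (dicts) of the uniform input x."""

    def __init__(self, x, eps, split_fn):
        self.eps, self.split_fn = eps, split_fn
        self._rebuild(dict(x))

    def _rebuild(self, x):
        self.x = x                                    # current (uniform) input vector
        self.size_at_init = max(len(x), 1)            # support size at (re-)initialization
        self.updates = 0
        self.parts = self.split_fn(x) or [{}]         # disjoint coarsenings of x
        self.deleted = [0] * len(self.parts)          # deletions seen inside each part
        self.cur = 0                                  # index of the part currently output
        self.output = dict(self.parts[self.cur])

    def update(self, e, val):
        """Set x_e <- val (val == 0 is a deletion); maintain the output coarsening."""
        self.updates += 1
        if val == 0:
            self.x.pop(e, None)
            for i, part in enumerate(self.parts):     # linear scan: a sketch, not optimized
                if e in part:
                    self.deleted[i] += 1
            self.output.pop(e, None)
        else:
            self.x[e] = val                           # insertions only advance the counter
        if self.updates > self.size_at_init * self.eps / 64:
            self._rebuild(self.x)                     # periodic re-initialization
        elif self.deleted[self.cur] > len(self.parts[self.cur]) * self.eps / 32:
            # switch to a part of which at most an eps/8 fraction has been deleted;
            # the proof's pigeonhole argument guarantees such a part exists here
            self.cur = next((i for i, p in enumerate(self.parts)
                             if p and self.deleted[i] <= len(p) * self.eps / 8), self.cur)
            self.output = {f: v for f, v in self.parts[self.cur].items() if f in self.x}

if __name__ == "__main__":
    # toy split: every edge becomes its own single-edge coarsening (valid but extreme)
    trivial_split = lambda x: [{e: v} for e, v in x.items()]
    c = UniformCoarsener({("a", "b"): 0.25, ("b", "c"): 0.25}, eps=0.5, split_fn=trivial_split)
    c.update(("a", "b"), 0)
    print(c.output)
```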
Proof of Lemma <ref>: We will now show how to extend Claim <ref> to non-uniform fractional matching and thus prove Lemma <ref>. Assume that x is a fractional matching. Let ϕ = γ/n^2. For the sake of convenience assume that δ = ϕ· (1+ϵ)^L for some integer L < 2 (log n + log(γ^-1))·^-1. Define x_i = x^≥ϕ· (1+)^L-1_<ϕ· (1+)^L. For i ∈ [L] define fractional matching x̂_i to be a uniform fractional matching with weight ϕ· (1+ϵ)^L-1 and support x_i. Define x̂ = ∑_i ∈ [L]x̂_i. First note that as the graph has at most n^2 edges x_<ϕ≤γ. Furthermore, for any edge e ∈x_≥ϕ we must have that x_e ≤x̂_e · (1 + ). Hence, |x̂ - x| ≤·x + γ. Algorithm will use algorithm _U from Claim <ref> to maintain ( + γ, δ)-coarsenings _i' of fractional matchings _i in parallel. The output of will be ' = ∑_i ∈ L_i' + _> δ. In order to conclude the amortized update time bound observe that an edge update to G may only effect a single fractional matching _i. Note that at all times ' is a subset of the support of one of the initially calculated coarsenings restricted to non-deleted edges, hence '⊆. Note that ' and agrees on edges e where (e) > δ. We now turn to proving the remaining properties of a coarsening: Property <ref> Global Slack: By Claim <ref>, we must have that |_i- _i'| ≤_i· ( + γ) + γ. x- x'| ≤x_<ϕ + ∑_i ∈ [L] |x̂_i- x'| + |x̂_i- x_i| ≤γ + ∑_i ∈ [L]x̂_i· ( + γ) + γ + x_i· ≤x· 4( + γ) + γ· 2L ≤x· 8( + γ) + γ· 4log(n/γ) ·^-1. Property <ref> Vertex Slack: d_V^0(x, x̂) ≤ 2 ·γ + 2·x as the two matching differ only on the edges of x_< ϕ or by just a factor of . d_V^ + γ(x, x') ≤ d_V^0(x, x̂) + d_V^ + γ(x̂, x') ≤ 2 ·γ + 2·x + ∑_i ∈ [L] d_V^ + γ(x̂_i, x_i') ≤ 2 ·γ + 2·x + ∑_i ∈ [L] 2·x̂_i· (γ + ) + γ ≤x· 4 · ( + γ) + γ·5 ·log(n/γ)/. Property <ref> Edge values: The property follows as the output is the sum of edge-wise disjoint ( + γ, δ)-coarsenings. § COARSENING AND SPLITTING ALGORITHMS So far, we have provided <Ref> which give a reduction from (faster) dynamic rounding to dynamic coarsening, and from dynamic coarsening to static splitting. We further stated a number of such dynamic coarsening and static splitting algorithms, as well as their corollaries for faster rounding algorithms. In this section we substantiate and analyze these stated coarsening and splitting algorithms. §.§ Deterministic Static Splitting In this section we prove <Ref>, restated below for ease of reference. * Assume is λ-uniform. First note that if λ > then we may return {} as the split trivially. On the other extreme end, if λ≤^2 · n^-2 then we may return a split consisting of || many -uniform fractional matchings each having the support of a single edge in (the properties of coarsening follow trivially). Hence, we may assume that ^2 · n^-2≤λ≤. Let ϵ' ∈ [ϵ,2ϵ] be some constant such that λ· 2^L = ϵ' for an integer L= O(^-1·log n ). The algorithm: Our algorithm inductively constructs sets of vectors _i for i ∈{0,1,…,L}. As our base case, we let _0 := {}. Next, for any vector ' ∈_i-1, using (See <Ref>) on ', we compute two edge sets E^1_', E^2_', and add to _i a λ· 2^i-uniform vector on each of these two edge sets. The algorithm outputs _L. Running time: By <Ref>, each edge in x belongs to the support of exactly one vector in each _i and so each _i is found in time O(|x|) based on _i-1. Therefore, the algorithm runs in the claimed O(|| · L) = O(|| ·log(^-1log n) time. Correctness: It remains to show that _L is a (4ϵ, ϵ)-split of . 
First, as noted above, each edge in x belongs to the support of precisely one vector in _L, and so these vectors individually satisfy Property <ref> of coarsenings, and together they satisfy the covering property of splits. We now show that every vector _L∈_L satisfies the remaining properties of (4ϵ,ϵ)-coarsening of . For i ∈{1,…, L-1}, inductively define ^i-1 to be the fractional matching in _i-1 that was split using to generate ^i (hence x_0 =). By Property <ref> of <Ref>: |(^i)| ∈[|(^i-1)|/2 - 1, |(^i-1)|/2 + 1] for i ∈{1,2,…,L}. Thus, as ^i and ^i-1 are respectively λ·2^i and λ· 2^i-1-uniform vectors, |^i - ^i-1| ≤λ· 2^i. Therefore, Property <ref> follows form the triangle inequality, as follows. | - ^L| = |^0 - ^L| ≤∑_i ∈ [L]|^i - ^i-1| ≤∑_i ∈ [L]λ· 2^i ≤ 2'≤ 4 ϵ. Similarly, this time by Property <ref> of <Ref>, we have that |x(v)-x^L(v)| ≤ 4 for each vertex v∈ V. Therefore, d^4_V(,^L)=∑_v(x(v)-x^L(v)-4)^+ = 0, and so ^L satisfies Property <ref> of a (4,)-coarsening of . Finally, ^L is an ϵ'-uniform fractional matching by definition, with '∈ [,2], and thus satisfies Property <ref> of (4,)-coarsenings of . We conclude that _L is a (4,)-split of . §.§ Randomized Static Splitting We now turn to proving <Ref>, restated below for ease of reference. * Assume is λ-uniform. First note that if λ > ^4/24 ·log^2 n then we may return {} as the split trivially. Note that if λ≤/n^2 then ≤. In this case, it is sufficient to return a split consisting of || coarsenings, where each coarsening is a single edge of and has weight ^4/log^2 n. The same split can also be returned if ≤^2/log n. Thus, from now on we assume that ^4/24 ·log^2 n≥λ≥/n^2 and ≥^2/log n. We start by defining the following two parameters. δ:=^4/24log^2 n and k:=2^⌈log_2δ/λ⌉. The algorithm: We (implicitly) initialize k zero vectors ^(1),…,^(k). Next, for each edge e∈, we roll a k-sided die i ∼([k]), and set x^(i)_e δ':=λ· k. Running time: The algorithm spends O(1) time per edge in x, and so it trivially takes O(|x|) time. Correctness: By construction, {(^(i))}_i=1^k is a partition of , and so each of these ^(i) satisfy Property <ref> of coarsenings. Moreover, since δ/λ≤ k≤ 2·δ/λ, we have that δ'=k·λ∈ [δ,2δ]. Hence, each of the vectors ^(i) satisfy Property <ref> of an (,δ)-coarsening. It remains to prove the two other properties of a coarsening. For each edge e∈ and i∈ [k], we have x^(i)_e ∼δ'·(1/k), and so [x^(i)_e] = δ'/k = λ. Thus, by linearity of expectation, we get: [^(i)]=]. Consequently, since ≥^2/log n, by standard Chernoff bound and our choices of δ=^4/24log^2 n and δ'≤ 2δ, we have that Property <ref> holds w.h.p. [| ^(i) - | ≥·] ≤ 2exp(-^2 ·/3δ') ≤ 2 n^-4. Similarly, [x^(i)(v) ] = x(v) for each vertex v, and so again a Chernoff bound implies that: [|x^(i)(v) - x(v)| ≥ ] ≤ 2exp(-^2 · 1/3δ') ≤ 2n^-4. Therefore, by taking a union bound over the (n+2) bad events, for each of the k ≤2δ/λ≤2^4/24log^2 n·n^2/≤ 2n^2 vectors (i.e., O(n^3) bad events), we have that d^_V(,^(i))=0. Therefore Property <ref> also holds w.h.p., and so by union bound {^(i)} satisfy all the properties of an (,δ)-split w.h.p. §.§ Output-Adaptive Dynamic Coarsening In this section we prove <Ref>, using the following kind of set sampling data structure. Defsetsampler A dynamic set sampler is a data structure supporting the following operations: * (n,p∈[0,1]^n): initialize the data structure for n-size set S and probability vector p. * (i∈[n],α∈[0,1]): set p_iα. * (): return T⊆^n containing each i∈ S independently with probability p_i. 
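Before turning to the data structure with the guarantees stated next, the following deliberately naive sketch (ours) makes the interface concrete; each sample() flips one coin per element and therefore runs in O(n) time, far from the O(1+|T|) expected bound discussed below.

```python
import random

class NaiveSetSampler:
    """A deliberately naive dynamic set sampler implementing only the interface above:
    init(n, p), update(i, a), and sample(). It is included solely to illustrate the
    interface; it does not achieve the bounds of the data structure discussed next."""

    def __init__(self, n, p):
        assert len(p) == n and all(0.0 <= q <= 1.0 for q in p)
        self.p = list(p)

    def update(self, i, a):
        """Set p_i <- a."""
        assert 0.0 <= a <= 1.0
        self.p[i] = a

    def sample(self):
        """Return T containing each i independently with probability p_i."""
        return {i for i, q in enumerate(self.p) if random.random() < q}

if __name__ == "__main__":
    s = NaiveSetSampler(5, [0.2, 0.0, 1.0, 0.5, 0.5])
    s.update(1, 0.9)
    print(s.sample())   # element 2 is always present; element 1 with probability 0.9
```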
<Ref> shows that there exists a dynamic set sampler with optimal properties. Concurrently to our work, another algorithm with similar guarantees was given by <cit.>. As that work did not address the adaptivity of their algorithm, we present our (somewhat simpler) algorithm and its analysis, in <Ref>. thmthmsetsampler <Ref> is a set sampler data structure using O(n) space that implements in O(n) time, in O(1) time, and T=() in expected O(1+|T|) time in a word RAM model with word size w=Ω(log(p^-1_min)), under the promise that p_i≥ p_min for all i∈ [n] throughout. These guarantees hold even if the input is chosen by an adversary. Equipped with <Ref>, we are ready to prove <Ref>. * Initialization: The algorithm maintains a set sampler as in <Ref>, using the algorithm of <Ref> over n 2 elements. Each element with non-zero probability corresponds to an edge e ∈(_≤^3), and receives a probability of x_e ·^-3 within the sampler. The algorithm initializes counter C ← 0. Afterwards the algorithm draws a sample E from its set sampler and defines its output ' to have support E ∪(_>^3) and take weight ^3 on edges of E and x(e) for edges e ∈(_>^3). The algorithm repeats this process until _≤^3' is an (100 ·,^3)-coarsening of _≤^3. Define ^(0) to stand for the state of at the last initialization. Handling updates: If an update occurs to _>^3 the algorithm simply updates ' accordingly. If an update occurs to edge e ∈_≤^3 then the algorithm first updates e-s weight within the set-sampler appropriately. If e was deleted from (or equivalently x_e was set to 0) it is deleted from '. If e was inserted into (or equivalently x_e was set to be some positive value from 0) it is ignored. The algorithm increases the counter C to C + ^3. If C reaches ^(0)· the algorithm re-initializes ' (through repeatedly sampling E from its sampler until _≤^3' is a (100, ^3)-coarsening of _≤^3) and resets C to 0. Correctness: First, we note that as ' = _≤^3' + _> ^3, then if _≤^3' is an (O(), ^3)-coarsening of _≤^3, then ' is an (O(), ^3)-coarsening of . Thus, we only need to argue that _≤^3' remains an (O(), ^3)-coarsening of _≤^3 throughout the run of the algorithm. This follows from Lemma <ref> as between two re-initializations at most ^(0)··^-3 updates occur. Update Time: Observe that apart from the cost of re-initialization steps, the algorithm takes O(1) time to update its output. During a re-initialization the algorithm has to repeatedly draw edge sets from its sampler. Note that _> ^3' remains unaffected during this process. Whenever the algorithm draws an edge sample |E| by definition we have that [|E| ·^3] = _≤^3. We will later show that the algorithm only needs to draw samples O(1) times in expectation. As re-initialization occurs after Ω(_≤^3·^-2) updates and each sample edge set is drawn in linear time with respect to its size we receive that the algorithm has O(^-1) expected amortized update time. It remains to show that at each re-initialization the algorithm makes O(1) calls to its sampler in expectation. We will show this by arguing that every time a sample is drawn _≤^3' is an (100, ^3)-coarsening of _≤^3 with constant probability. Note that the algorithm can triviarly check if this is indeed the case in O(|(_≤^3)|) time. First note that Property <ref> and Property <ref> (of coarsenings) follow trivially from the definition of '. We will first argue that Property <ref> holds with constant probability at each attempt. This will aslo imply that the algorithm has time O(||) in expectation. 
For any e ∈ let X_e stand for the indicator variable of event that e ∈ E (where E stands for the random edge sample drawn by the algorithm), and let X̅ = ∑ X_e. Note that X_e are independently distributed random variables and [X̅] = _≤^3·^-3 by definition. Furthermore, _≤^3' = X̅·^3. Using standard Chernoff bounds we receive the following: [|_≤^3 - _≤^3'| ≤ 100 ··_≤^3] = 1 - [|X̅ - [X̅]| ≥[X̅] · 100 ·] ≤ 1 - O(exp(-(100 ·)^2 ·[X̅]/3)). Given [X̅] ≥^-2/8 (or equivalently _≤^3≥/8), the probability in the last inequality is Ω(1). If, conversely, [X̅] ≤^-2/8 (or equivalently _≤^3≤/8), then by a simple Markov's inequality argument we get the following: [|_≤^3 - _≤^3'| ≥] ≤[_≤^3'≥7/8] ≤[_≤^3'] · 7/8≤1/2. Either way, with constant probability |_≤^3 - _≤^3'| ≤_≤^3· +, implying Property <ref> holds with constant probability. It remains to prove Property <ref>. Let random variable Y_v stand for d_{v}^100(_≤^3, _≤^3') and Y̅ stand for d_V^100(_≤^3, _≤^3') = ∑ Y_v. We will first show that [Y_v] ≤ 10 ·_≤^3(v). Summing over all vertices this yields that [Y̅] ≤_≤^3· 20 ·. By a simple Markov's role based argument we can conclude from there that [d_V^100(_≤^3, _≤^3') ≤ 100 ··_≤^3] = Ω(1). Consider any edge e ∈ E incident on v. Let y_e be the indicator variable of the event that e was sampled by the dynamic sampler on query. Hence, we have [y_e] = x_e ·^-3≤ 1, and y_e are independently distributed random variables. Let y_v = ∑_e : v ∈ e y_e. It follows that [y_v] = x_≤^3(v)/^3 and y_v = x_≤^3'(v)/^3. First, assume that ^i≤ x_≤^3(v) ≤^i-1 for some i ≥ 3. Then applying Chernoff bounds yields the following: [|x_≤^3(v) - x_≤^3'(v)| ≥· k] ≤[x_≤^3'(v) - x_≤^3(v) ≥· k] ≤[y_v ≥[y_v] · (1 + (^2-i-1) · (k-1/2))] ≤exp(-((^2-i-1) · (k-1/2))^2·[y_v]/1 + ^2-i) ≤10 ·/k^3. Next, if ^2 ≤ x_≤^3(v) and k ≤ 1/, then a similar application of Chernoff bound gives: [|x_≤^3(v) - x_≤^3'(v)| ≥· k] ≤[|E[y(v)] - y(v)| ≥[y(v)] ·k ·^4/x_≤^3(v)] ≤[|[y(v)] - y(v)| ≥[y(v)] · k ·^2 ] ≤ 2 ·exp(- (k ·^2)^2 ·[y(v)]/3) ≤ 2 ·exp(- k^2 ·^3/3) ≤10/k^3. Summing over all the possible values of k, we get: [d^100_V(_≤^3,_≤^3')] =[(|x_≤^3(v) - x_≤^3'(v)| - )^+] ≤∑_i = 1^1/[|x_≤^3(v) - x_≤^3'(v)| ≥ k] · k ≤∑_i = 1^1/10 ^2/k^2 ≤ 20. § GENERAL GRAPHS In this section, we extend our algorithms for rounding dynamic fractional matchings from bipartite to general graphs. Our main result in this section is a formalization of the informal <Ref>. Formalizing this theorem and its key subroutine requires some build up, which we present in <Ref>. For now, we restate the main application of this theorem, given by <Ref>. * The rest of this section is organized as follows. In <Ref>, we introduce some known tools from the dynamic matching literature, as well as one new lemma (<Ref>) which motivates us to coarsen known fractional matching algorithms. In <Ref> we provide our general-graph coarsening algorithms for structured fractional matchings. Finally, in the last two subsections, we provide applications of these dynamic coarsening algorithms: computing AMMs, in <Ref>, and rounding -restricted fractional matchings (see <Ref>), in <Ref>. §.§ Section Preliminaries At a high level, our approach for rounding fractional matchings in general graphs is very close to our approach for the same task in their bipartite counterparts. However, as mentioned in <Ref>, since the fractional matching relaxation studied in prior works is not integral in general graphs, we cannot hope to round arbitrary fractional matchings in general graphs. 
Therefore, in general graphs we focus our attention on a particular structured family of fractional matchings, introduced by <cit.> and maintained dynamically by <cit.>. A fractional matching ∈ℝ^E is (,δ)-almost-maximal ((,δ)-AMFM) if for each edge e∈ E, either x_e ≥δ or some v∈ e satisfies both x(v) ≥ 1- and max_f∈ v x_f ≤δ. For the sake of convenience, and similarly to <Ref>, we will assume that has minimum edge weight of _min≥δ/n. Observe that if all edges with weight below δ/n are decreased to zero, then vertex weights change by at most , resulting in a (2,δ)-AMFM, which for our needs is as good as an (,δ)-AMFM, as we soon illustrate. For similar reasons, we will assume throughout that is a power of two. By LP duality, an (,δ)-AMFM is a (1/2-O())-approximate fractional matching <cit.>. Moreover, prior work established that the support of an (,δ)-AMFM contains a (1/2-O())-approximate integral matching (even in general graphs), provided δ=O(^2/log n) <cit.>. Specifically, they show that the support of such a fractional matching contains the following kind of integral matching sparsifier, introduced in <cit.>. An (,d)-kernel K=(V,E_K) is a subgraph of G=(V,E) satisfying: * d_K(v)≤ d for every vertex v∈ V. * max_v∈ e d_K(v)≥ d· (1-) for every edge e∈ E∖ E_K. Any kernel of a graph G contain an (1/2-)-approximate matching with respect to G for d sufficiently large <cit.>. An alternative proof of this fact was given by <cit.>, who show how to efficiently compute an -AMM (almost maximal matching, itself a (1/2-)-approximate matching) of the host graph in a kernel. Specifically, they show the following. Given an (, d)-kernel of G = (V,E) with d≥^-1, one can compute an -AMM in G in deterministic update time O(d ·μ(G)·^-1·log (^-1)). The proof is given in <cit.>. The only missing detail is externalizing the dependence on of that lemma, which is that of the best static linear-time (1-)-approximate maximum weight matching algorithm, currently O(m·^-1·log (^-1)) for m-edge graphs <cit.>.[In the conference version of that paper, <cit.> do not state the requirement d≥^-1, but this requirement is necessary (and sufficient), as implied by the statement of <cit.>.] Being able to periodically compute -AMMs lends itself to dynamically maintaining -AMMs, due to these matchings' natural stability, as in the following lemma of <cit.> Let ∈ (0, 1/2). If M is an -AMM in G, then the non-deleted edges of M during any sequence of at most ·μ(G) updates constitute a 6-AMM in G. Previous work <cit.> show that a the support of any (,δ)-AMFM contains a kernel, for δ sufficiently small. We now prove the simple fact that the support of an (,δ)-AMFM is itself a kernel, provided _min≥δ. (When AMFMs are kernels). Let be an (,δ)-AMFM of G=(V,E) satisfying _min≥δ. Then, K=(V, x) is an (,δ^-1)-kernel of G. The degree upper bound follows from the condition _min≥δ together with the fractional matching constraint, implying that for each vertex v∈ V, d_K(v) = ∑_f∈ v1[f∈x] ≤∑_f∈ v x_f·δ^-1≤δ^-1. For the lower bound, fix an edge e∈ E∖x, which thus satisfies x_e = 0 < δ. Therefore, by the (,δ)-AMFM property, some vertex v∈ e has x(v)≥ 1- and max_f∈ v x_f ≤δ. But since _min≥δ, this implies that each edge f∋ v in x is assigned value exactly x_f = δ. Therefore, any edge e∈ E∖x has an endpoint v with high degree. d_K(v) = 1[f∈x] = ∑_f∈ v x_f·δ^-1≥δ^-1· (1-). We are now ready to state this section's main result (the formal version of Theorem <ref>), proved in <Ref>. 
The key idea behind this lemma is that coarsenings of AMFMs yield kernels of a slightly larger graph, obtained by adding O()·μ(G) many dummy vertices, and therefore allow us to periodically compute an O()-AMM M in this larger graph, which is then an O()-AMM in the current graph. thmgeneralformal Let ϵ∈ (0,1). Let be a dynamic (,/16)-AMFM algorithm with time t_f and output recourse u_f. Let be a dynamic (, /16)-coarsening algorithm with time t_c for vectors satisfying x_e≥/16 implies that (x_e)_i=0 for i>k+4, and whose output ' on satisfies x'_e∈{0,/16} if x_e</16. Then there exists a dynamic -AMM algorithm with time O(^-3·log(^-1)+t_f + u_f· t_c). Moreover, is deterministic//if both and are. While the above stipulations about and may seem restrictive, as we shall later see, these are satisfied by our coarsening algorithms, and by known fractional matchings in general graphs (up to minor modifications). To obtain <Ref>, we apply <Ref> to the O(^-2) time and output recourse (,/16)-AMFM algorithm of <cit.> (see <cit.>), and the coarsening algorithms stated in the following lemma, presented in the following section. For ϵ=2^-k and k≥ 0 an integer, there exist dynamic (,/16)-coarsening algorithms for inputs such that x_e≥/16 implies that (x_e)_i=0 for all i>k+4 that are: * Deterministic, with time O(^-1·(log n + log^2(^-1))) = Õ(^-1·log n). * , with time O(^-1· ((loglog n)^2 + log^2(^-1))), succeeding w.h.p. * , with expected time O(^-1·log^2(^-1)) = Õ(^-1). Moreover, these algorithms' output ' for input vector satisfies x'_e∈{0,/16} if x_e < /16. §.§ Proof of <Ref>: Coarsening in General Graphs In this section we show how to leverage our rounding and coarsening algorithms of previous sections to efficiently coarsen AMFMs, giving <Ref>. First, we show that (the internal state of) <Ref> yields a dynamic coarsening algorithm. For =2^-k and k≥ 0 an integer, <Ref> maintains a (2, )-coarsening ' of input dynamic vector ∈ℝ^E_≥ 0 satisfying (x_e)_i=0 for all i > k and edges e with x_e≥ϵ. Moreover, for every edge e∈ E, if x_e < ϵ, then x'_e ∈{0, ϵ}. Recall that <Ref> when run on vector maintains vectors ^(0), ^(1), …. We claim that ^(k) is a (2,)-coarsening of x at all times during the run of the algorithm. Properties <ref>, <ref> and <ref> of (2,)-coarsenings are shown explicitly by Lemmas <ref>, <ref> (and <Ref>) and <ref>, respectively. Finally, for every edge e with x_e <, we have that x^(k)_e∈{0,}, while for each edge e with x_e ≥, we have that x^(k)_e = x_e - ∑_i=k+1^L (x_e)_i· 2^-i = x_e (where the second equality follows from the lemma's hypothesis), which shows Property <ref> of (2,)-coarsenings. Unfortunately, <Ref> is too slow to yield speedups on the state-of-the-art via <Ref>. On the other hand, the coarsening algorithms of <Ref> are on their own also insufficient for that theorem's needs, as they only maintain ((^-1·log n),(^-1·log n))-coarsenings, and so to be relevant for <Ref>, they must be run with a much smaller error parameter ' = (/log n), and would again be too slow to yield any speedups. However, as we show, similar to our results for rounding bipartite fractional matchings, using the output coarsenings of the algorithms of <Ref> as the input of <Ref> yields fast and sufficiently coarse dynamic coarsenings. For this, we need the following simple lemma. (Coarsening composition). Let _1,_2∈ [0,1] and δ_1≤δ_2. If ^(1) is an (_1,δ_1)-coarsening of and ^(2) is an (_2,δ_2)-coarsening of ^(1), then ^(2) is an (_1 + 2_2,δ_2)-coarsening of .
Since ^(2) and ^(1) are coarsenings of ^(1) and , respectively, we have that (^(2)) ⊆(^(1)) ⊆, implying Property <ref>. To prove that ^(2) satisfies properties <ref> and <ref> of an (_1+2_2, δ_2)-coarsening of , we first note that by Property <ref>, x^(1)≤ (1+_1)·x. Consequently, Property <ref> follows by the triangle inequality as follows. |x-^(2)| ≤|-^(1)| + |^(1)-^(2)| ≤_1 · + _1 + _2 ·^(1) + _2 Property <ref> ≤_1 · + _1 + _2 · (1+_1) · + _2 ≤ (_1+2_2) ·x + _1+2_2. _1≤ 1 Property <ref> similarly follows from <Ref>. d_V^_1 + 2_2(,^(2)) ≤ d_V^_1 + _2(,^(2)) <Ref> ≤ d_V^_1(,^(1)) + d_V^_2(^(1),^(2)) <Ref> ≤_1 · + _1 + _2 ·^(1) + _2, Property <ref> where we already established that the latter term is at most (_1+2_2)·x + (+2_2). Finally, by Property <ref>, if x_e≥δ_2(≥δ_1), then x^(2)_e=x^(1)_e=x_e and otherwise x^(1)_e < δ_2 and hence x^(2)_e∈ [δ,2δ). That is, ^(2) satisfies Property <ref> of an (_1+2_2,δ_2)-coarsening of . Given <Ref>, an algorithm reminiscent of the rounding algorithm of <Ref> allows us to combine two dynamic coarsening algorithms, as in the following dynamic composition lemma: lemcoarsentoAMFM (Composing dynamic coarsenings). Let _1,_2∈ [0,1] and δ_1≤δ_2. For i=1,2, let _i be a dynamic (_i,δ_i)-coarsening algorithms with times t^_i_U := t^_i_U(_i, δ_i, n) and times O(|x|· t^_i_I), for t^_i_I := t^_i_I(_i,δ_i,n). Then, there exists a dynamic (O(_1+_2),δ_2)-coarsening algorithm with time O(t_U^_1 + t^_2_U + ^-1· t^_2_I) and time O(|x|· (t^_1_I+t^_2_I)), which is deterministic//if both _1 and _2 are. Moreover, it suffices for _2 to be a coarsening algorithm only for inputs that are a subset of the support of an output of _1. As the proof is near identical to that of <Ref>, we only outine one salient difference. The algorithm broadly follows the logic of <Ref>, with _1,_2,_1 and δ_1 playing the roles of ,, and δ in that lemma. The outline of the algorithm and the resulting running time and determinism/adaptivity are as in <Ref>. The only difference in correctness analysis is that by <Ref>, taking the non-deleted edges in (x^(1))∩{e| x_e<δ_1} periodically (i.e., after every _1·x/δ_1 updates and using them as input to (x^(1))∩{e| x_e<δ_1}) allows us to maintain an (O(_1),δ_1)-coarsening of as the (dynamic) input for _2 throughout. (Note that _2 is a dynamic coarsening algorithm for the resultant vector dominated by ^(1), by the lemma's hypothesis.) That the dynamic output of _2 is an (O(_1+_2),δ_2)-coarsening of then follows <Ref>. <Ref> then follows from <Ref> in much the same way that <Ref> follows from <Ref> in <Ref>, by taking the coarsening algorithms of <Ref> as _1 and <Ref> as the second coarsening algorithm _2 (here, relying on <Ref>). We omit the details to avoid repetition. §.§ Proof of Theorem <ref>: Maintaining AMMs In this section we show how to maintain O(ϵ)-AMMs, by dynamically coarsening AMFMs (motivated by <Ref>), and periodically using these coarsened AMFMs to compute such AMMs. Our first lemma provides a static algorithm substantiating the intuition that a coarsening of an AMFM allows to efficiently compute an AMM (under mild conditions, which we address below). Let 0≤δ≤ϵ≤ 1. If μ(G)≥ 2δ^-1ϵ^-1, then there exists a deterministic algorithm that, given an (,δ)-AMFM of G and an (ϵ,δ)-coarsening ' of satisfying x'_e∈{0,δ} if x_e<δ, outputs an O(+δ)-AMM of G in time O(μ(G)·ϵ^-1·δ^-1·log(^-1)). 
We modify ' within the desired time bounds to obtain an (ϵ”,δ”)-AMFM ” with ”_min≥δ” (for ”=Θ(ϵ+δ) and δ”=Θ(δ) to be determined) in a graph G”=(V∪ V”,E∪ E”) obtained from G by adding some |V”| = O(ϵ”)·μ(G) many dummy vertices V” as well as some dummy edges E” to G. Therefore, by <Ref>, the support of ” is an (ϵ”,(δ”)^-1)-kernel of G”. Hence, by <Ref> we can compute deterministically an ”-AMM M” of G” in the claimed running time. Now, by definition of an ϵ”-AMM, M” is maximal in a subgraph obtained from G” after removing a set of vertices U⊆ V∪ V” of cardinality at most |U|≤ϵ”·μ(G”)≤ϵ”· (1+O(”))·μ(G)≤ O(ϵ”)·μ(G). Consequently, M:=M”∩ E is maximal in a subgraph obtained from G after removing at most |U|+|V”|≤ O(ϵ”)·μ(G)=O(ϵ+δ)·μ(G) many vertices, including the at most O(”)·μ(G) many vertices of V matched by M” to dummy vertices in V”.

It remains to construct G” and ”. First, invoking <Ref>, we deterministically compute a bounded (ϵ',δ)-coarsening of , for ϵ':=3(ϵ+δ), in time O(|(')|) ≤ O(δ^-1·x) ≤ O(δ^-1·μ(G)). Here, the first inequality follows from '_min≥δ, while the second follows from the integrality gap of the degree-bounded fractional matching polytope in general graphs implying that x≤3/2·μ(G). By Property <ref> of bounded coarsenings, y(v)≤ x(v)+ϵ'≤ 1+ϵ'. We therefore consider :=/(1+ϵ'), which by the above is a fractional matching with _min≥δ':=δ/(1+ϵ'). Moreover, since z = y/(1+')≥y· (1-ϵ') and hence ∑_v |y(v)-z(v)|≤ 2ϵ'y, we have that

d^15'_V(,) ≤ d^'_V(,) + d^14'_V(,) (<Ref>)
≤ d^'_V(,) + d^0_V(,) (<Ref>)
≤ d^'_V(,) + 2ϵ'y
≤ '·+ϵ' + 2ϵ'· ((1+ϵ')·x + ϵ') + 2ϵ' (Properties <ref> and <ref>)
≤ 15ϵ'·x+15ϵ'. (ϵ'=3(ϵ+δ)≤ 6)

That is, is a fractional matching that satisfies Property <ref> of a (15ϵ',δ')-coarsening of .

Now, we say a node v is useful if x(v)≥ 1-ϵ and max_f∋ vx_f≤δ (noting their relevance in <Ref>). By Property <ref>, and again using the integrality gap, we have that ∑_v useful(|x(v)-z(v)|-15ϵ')^+≤ d^15ϵ'_V(x,z)≤ (15ϵ'·x+15ϵ') ≤ 30ϵ'·x≤ 45 ϵ'·μ(G). We now create a set V” of 90·ϵ'·μ(G) many dummy vertices, and create a fractional matching ” from , letting x”_e=z_e for each edge e∈ E and adding dummy edges e∈ E” with ”_e = δ”:=δ'= δ/(1+)≥δ/2. Specifically, for each useful vertex v, we add some ⌈ (x(v)-z(v)-15')^+· (δ”)^-1⌉≤ 2x(v)· (δ”)^-1≤ 4δ^-1 dummy edges to distinct dummy vertices (this is where we use that ·μ(G)≥δ^-1), chosen in round-robin fashion. This guarantees that each useful vertex v has x”(v)≥ x(v)-15'. That ” is a fractional matching, and in particular dummy nodes u have x”(u)≤ 1, follows from <Ref>, as the amount of total ”-value on dummy edges is at most 90'·μ(G)≤ |V”|, distributed in round-robin fashion.

Finally, we note that any edge e with x”_e<δ' has z_e<δ' and so z_e=0, by Property <ref>, and so y_e=0, implying in turn that x_e<δ, again by Property <ref>, and so there exists a useful v∈ e, by <Ref>. Therefore, x(v)≥ 1-ϵ, and so x”(v)≥ x(v)-15ϵ'≥ 1-ϵ-15ϵ'. Moreover, all edges f∋ v have x_f≤δ, and so x'_f∈{0,δ} by Property <ref> and the lemma's hypothesis. Consequently, y_f∈{0,δ}, and so x”_f∈{0,δ'}. That is, letting ”:=ϵ+15ϵ' and δ”:=δ', we conclude that ” is an (ϵ”,δ”)-AMFM with ”_min≥δ” in the graph G” obtained by adding at most 90ϵ'·μ(G)=O(ϵ+δ)·μ(G) many dummy vertices to G. The lemma follows.

The above lemma allows us to compute an AMM quickly when μ(G) is large. The following simple complementary lemma allows us to compute a maximal matching quickly when μ(G) is small.

There exists a static deterministic algorithm that, given an O(1)-approximate vertex cover U of a graph G=(V,E), computes a maximal matching in G in time O(μ(G)^2).

First, the algorithm computes a maximal matching M in G[U], taking O(|U|^2)=O(μ(G)^2) time, since an O(1)-approximate vertex cover has size O(μ(G)).[The endpoints of any maximum matching form a vertex cover of size 2μ(G).] Then, we extend M to be a maximal matching in all of G as follows: for each unmatched vertex u∈ U, we scan the neighbors of u until we find an unmatched neighbor v (in which case we add (u,v) to M) or until we run out of neighbors of u. As |M|≤μ(G) at all times by definition of μ(G), we scan at most 2μ(G)+1 neighbors per node u∈ U until we match u (or determine that all of its neighbors are matched), for a total running time of O(|U|·μ(G))=O(μ(G)^2). Since each edge in G has an endpoint in the vertex cover U, this results in a maximal matching.
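The scan-based procedure from this proof is short enough to sketch directly. The following Python sketch uses names of our own choosing, and for clarity it scans all neighbors rather than capping each scan at 2μ(G)+1 as in the running-time argument; it first matches greedily inside G[U] and then extends over the cover.

def maximal_matching_from_cover(adj, cover):
    # adj: dict mapping every vertex to an iterable of its neighbors.
    # cover: an O(1)-approximate vertex cover of the graph.
    matched = {}                               # vertex -> matched partner
    cover_set = set(cover)
    # Step 1: greedy maximal matching inside G[cover].
    for u in cover:
        if u in matched:
            continue
        for v in adj[u]:
            if v != u and v in cover_set and v not in matched:
                matched[u] = v
                matched[v] = u
                break
    # Step 2: extend to all of G by scanning neighbors of unmatched cover
    # vertices; since every edge has an endpoint in the cover, the result
    # is a maximal matching of G.
    for u in cover:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched:
                matched[u] = v
                matched[v] = u
                break
    return {frozenset((u, v)) for u, v in matched.items()}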
We are finally ready to prove Theorem <ref>, restated for ease of reference. *

Our algorithm dynamically maintains data structures, using which it periodically computes an ϵ-AMM, the non-deleted edges of which serve as its matching during the subsequent period. A period at whose start the algorithm computes an ϵ-AMM M consists of ⌊ϵ· |M|⌋≤ϵ·μ(G) updates, and so by <Ref>, the undeleted edges of M are a 6ϵ-AMM throughout the period. We turn to describing the necessary data structures and analyzing their update time and the (amortized) update time of the periodic AMM computations. Throughout, we assume without loss of generality that =2^-k for some integer k≥0.

The data structures. First, we maintain an (ϵ,ϵ/16)-AMFM using update time t_f, and update recourse u_f. For each edge e with x_e≥/16, we set x_e=/16, noting that this does not affect the salient properties of an (,/16)-AMFM. This second assumption implies that is a coarsening algorithm for the dynamic AMFM we will wish to coarsen. This implies that the coarsening algorithm can be run on to maintain an (ϵ,δ)-coarsening ' of , where each change to incurs update time t_c, for a total update time of O(t_f+u_f· t_c). In addition, using <cit.>, we deterministically maintain an O(1)-approximate vertex cover U⊆ V using O(1) update time,[Generally, <cit.> maintain a (2+ϵ)-approximate vertex cover in O(ϵ^-2) update time for any ϵ∈ (0,1).] which is dominated by the above update time.

The end of a period. When a period ends (and a new one begins), both the current matching M and the new matching M' that we compute for the next period are O(ϵ)-AMMs, and so |M|=Θ(μ(G)) and |M'|=Θ(μ(G)). We compute M' as follows. If |M|≤ 2ϵ^-2, we run the algorithm of <Ref>, using the O(1)-approximate vertex cover U, in time O(μ(G)^2)=O(|M'|^2), amortized over ϵ· |M'| updates, for an amortized update time of O(|M'|^2/(ϵ· |M'|))=O(ϵ^-1· |M'|)=O(ϵ^-3). Otherwise, we have that (μ(G)≥)|M|>2ϵ^-2, and we use and ' and <Ref> to compute M' in time O(μ(G)·ϵ^-2·log(ϵ^-1)) = O(|M'|·ϵ^-2·log(ϵ^-1)), for an amortized update time of O(|M|·ϵ^-2·log(ϵ^-1))/(ϵ· |M'|)=O(ϵ^-3·log(ϵ^-1)).

To summarize, the above algorithm maintains an O(ϵ)-AMM within the claimed running time. As this algorithm consists of deterministic steps (including those of the vertex-cover algorithm of <cit.>) and runs algorithms and , the combined algorithm is deterministic//if the latter two algorithms are.
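The period-based bookkeeping used in this proof is a recurring pattern in this paper. The following schematic Python sketch (with the static AMM computation abstracted behind an assumed callback, and names of our own choosing) recomputes a matching only every ⌊ϵ·|M|⌋ updates and, in between, merely discards deleted edges.

class PeriodicMatchingMaintainer:
    # Schematic sketch only: "recompute_amm" stands in for the static
    # computation described above and is an assumption of this sketch.
    def __init__(self, eps, recompute_amm):
        self.eps = eps
        self.recompute = recompute_amm       # callable returning a set of edges
        self.matching = set()
        self.updates_left = 0

    def _start_period(self):
        self.matching = set(self.recompute())
        self.updates_left = max(1, int(self.eps * len(self.matching)))

    def handle_update(self, deleted_edge=None):
        # Every update (insertion or deletion) consumes one unit of the
        # period; only deletions touch the stored matching.
        if self.updates_left == 0:
            self._start_period()
        if deleted_edge is not None:
            self.matching.discard(deleted_edge)
        self.updates_left -= 1
        return self.matching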
§.§ Restricted Fractional Matchings

To conclude this section, we briefly note that the same approach underlying <Ref> also allows us to round other known structured fractional matchings in general graphs. In particular, we can also round the following fractional matchings, introduced by <cit.>.

Fractional matching ∈ℝ^E_≥ 0 is -restricted if x_e∈ [0,]∪{1} for each edge e∈ E.

The interest in restricted fractional matchings (in general graphs) is that their integrality gap is low, in the following sense (see, e.g., <cit.>).

Let be an α-approximate fractional matching in G (i.e., x≥α·μ(G)) that is also -restricted. Then, x contains an α· (1-)-approximate integral matching.

thmgeneralrestricted Let =2^-k for k≥ 11 an integer. Let be a dynamic α-approximate -restricted fractional matching algorithm with update time t_f and output recourse u_f. Let be a dynamic (16, )-coarsening algorithm with time t_c for vectors satisfying x_e≥/16 implies that (x_e)_i=0 for i>k+4. Then there exists a dynamic α(1-O())-approximate matching algorithm with update time O(^-3+t_f + u_f· t_c). Moreover, is deterministic//if both and are.

The proof outline again mirrors that of <Ref> (and <Ref> before it), maintaining essentially the same data structures, and so we only discuss the key difference. The maintained (O(),)-coarsening ' of the dynamic -restricted fractional matching is a 2-restricted fractional matching, by Property <ref>. Moreover, by Property <ref> and x≥ 1, we have that '≥ (1-32)x. Therefore, by <Ref>, (x') contains an integral matching M^* of cardinality |M^*|≥'· (1-32)≥x· (1-32)^2 ≥α(1-1024)·μ(G). Now, instead of computing an AMM periodically, as in <Ref>, we compute an -approximate matching M in time O(^-1|x|)=O(^-1x/) = O(^-2·μ(G)).[For example, we can use the -approximate maximum-weight matching algorithm of <cit.>, which runs in time O(^-1· m) in m-edge unweighted graphs.] We do so every ·μ(G) updates, thus maintaining a (1-3)-approximation of |M^*| throughout by “stability” of the matching problem (see <cit.>). This then results in an α(1-1024)·(1-3)≥α(1-O())-approximation, with essentially the same update time as <Ref>, other than the O(^-3·log(^-1)) term in the update time, which is improved to O(^-3).

By combining <Ref> with <Ref>, we obtain efficient dynamic rounding algorithms for -restricted fractional matchings, with update times and determinism/adaptivity as stated in <Ref>.

§ DECREMENTAL MATCHING

In this section we discuss applications of our rounding algorithms to speeding up decremental matching algorithms. Prior results are obtained by a number of fractional algorithms <cit.>, combined with known rounding algorithms (or variants thereof <cit.>). We show how our (partial) rounding algorithms yield speed-ups on known decremental bipartite matching algorithms, and bring us within touching distance of similar deterministic results in general graphs.

Robust fractional matchings. The approach underlying <cit.> is to repeatedly compute a robust fractional matching , in the sense that the value of restricted to non-deleted edges remains (1-)-approximate with respect to the current maximum (integral) matching in the subgraph induced by non-deleted edges, unless the latter decreases by a (1-O()) factor. For example, in a complete graph, a uniform fractional matching is robust, while an integral matching is not, as deleting its edges would yield a 0-approximation, while the maximum matching itself would be essentially unaffected by such deletions. Formally, the above works implement the following.
An decremental fractional matching algorithm partitions the sequence of deletions into P phases of consecutive deletions (determined during the algorithm's run), and computes a fractional matching ^i at the start of phase i, where ^i restricted to non-deleted edges is (1-)-approximate throughout phase i (i.e., until ^i+1 is computed).

As for dynamic algorithms, we may consider decremental fractional matching algorithms that are either deterministic or adaptive. All currently known such algorithms are adaptive, and even deterministic. In what follows, we show how our rounding algorithms can be used to round known decremental fractional matching algorithms to obtain new state-of-the-art decremental (integral) matching algorithms, starting with bipartite graphs.

§.§ Bipartite Graphs

Applying the rounding algorithms of <Ref>, one immediately obtains a framework for rounding such decremental fractional bipartite matching algorithms.

Let be an decremental fractional bipartite matching algorithm, using total time t^ and at most p^ phases. Let be an -approximate bipartite rounding algorithm with time t^_U and time t^_I(). Then, there exists a (1-2)-approximate decremental bipartite matching algorithm which on a graph starting with m edges takes total time O(t^ + ∑_i=1^p^ t^_I(^i) + m· t^_U). Algorithm is deterministic//if both and are.

The combined algorithm is direct. Whenever computes a new fractional matching ^i, we run (G,^i,) in algorithm . Between computations of fractional matchings ^i and ^i+1, we call (e,0) for each deleted edge e. The running time is trivially O(t^ + ∑_i=1^p^ t^_I(^i) + m· t^_U). As for the approximation ratio, the fractional matching throughout a phase (i.e., the ^i value of undeleted edges between computation of ^i and ^i+1) is (1-)-approximate with respect to the current maximum matching μ(G), and the rounding algorithm maintains a matching M⊆x with the desired approximation ratio, |M| ≥ (1-)·x≥ (1-)^2·μ(G)≥ (1-2)·μ(G). Finally, as Algorithm only uses algorithms and , the former is deterministic//if the latter two are.
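The combined algorithm in the proof above is a thin wrapper around the two subroutines. The following sketch makes the control flow explicit; the interface names (delete_edge returning a fresh phase matching or None, init, update, current_matching) are assumptions of this illustration rather than the paper's notation.

class DecrementalRounding:
    # Sketch only: "fractional_alg" is a decremental fractional matching
    # algorithm and "rounder" is a dynamic rounding algorithm, both with the
    # assumed interfaces named below.
    def __init__(self, graph, fractional_alg, rounder, eps):
        self.G = graph
        self.frac = fractional_alg
        self.rounder = rounder
        self.eps = eps

    def delete_edge(self, e):
        x_new = self.frac.delete_edge(e)    # may start a new phase
        if x_new is not None:               # new phase: fresh fractional matching x^i
            self.rounder.init(self.G, x_new, self.eps)
        else:                               # within a phase: zero out the deleted edge
            self.rounder.update(e, 0.0)
        return self.rounder.current_matching()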
<Ref> in conjunction with <Ref> yield a number of improvements for the decremental bipartite matching problem (see <Ref>).[The rounding algorithm of <cit.> only states its n-dependence, which is log^4n, but a careful study shows that their -dependence is ^-6.] In particular, these theorems yield decremental (integral) bipartite matching algorithms with the same update time as the current best fractional algorithms for the same problems <cit.>. Our improvements compared to prior work follow both from our faster update times (previously of the order Ω(^-4), and some with high polylog dependencies), and from our rounding algorithms' initialization times, which avoid paying the update time for each of the potentially p^· m many edges in the supports of ^1,^2,…,x^p^. We discuss the fractional algorithms of <cit.>, substantiating the results in the subsequent table, in <Ref>.

§.§ Potential Future Applications: General Graphs

In <cit.>, Assadi, Bernstein and Dudeja follow the same robust fractional matching approach of <cit.>, and show how to implement it in general graphs. By assuming that μ(G)≥· n throughout, which only incurs a (randomized) (log n) slowdown by known vertex sparsification results <cit.>, they show how to maintain an fractional matching that is -restricted (<Ref>) in total time m· 2^O(^-1). By adapting the rounding scheme of <cit.>, they then round this fractional matching randomly (but adaptively).

By appealing to <Ref> and <Ref>, we can similarly round -restricted fractional matchings deterministically, with a log n·(^-1) update time. Combining this with deterministic vertex sparsification with an n^o(1) slowdown <cit.>, we obtain almost all ingredients necessary for the first sub-polynomial-time deterministic (1-)-approximate decremental matching algorithm in general graphs. Unfortunately, the fractional algorithm of <cit.> relies on randomization in one additional crucial step to compute an fractional matching, namely the subroutine M-or-E^*(), which outputs either a large fractional matching respecting some edge capacity constraints, or a minimum cut (whose capacities are then increased). For bipartite graphs, <cit.> gave a deterministic implementation of this subroutine, though for general graphs this seems more challenging, and so <cit.> resorted to randomization to implement it. Our work therefore leaves the de-randomization of this subroutine as a concrete last step to resolving in the affirmative the following conjecture.

There exists a deterministic decremental (1-)-approximate matching algorithm with total update time m· n^o(1), for any constant >0.

§ APPENDIX

§ A ALGORITHM

In this section we provide an implementation and analysis of the algorithm guaranteed by <Ref>, restated below. *

First, suppose G is a simple graph (i.e., it contains no parallel edges). We later show how to easily extend this to a multigraph with maximum multiplicity two. Recall that a walk in a graph is a sequence of edges (e_1,…,e_k), with e_i={v_i,v_i+1}. We call v_1 and v_k+1 the extreme vertices of the walk. We call odd-indexed and even-indexed edges e_i odd or even for short. The walk is a cycle if v_1=v_k+1, and the walk is maximal if it cannot be extended, i.e., if there exists no edge e={v_1,v} or e={v_k+1,v} outside of the walk.

While E≠∅, the algorithm repeatedly computes a maximal walk, removes its edges from E, and adds all odd edges (resp., even edges) of the walk to the smaller (resp., larger) of E_1 and E_2, breaking ties in favor of E_1. This algorithm clearly runs in linear time, as the O(||) time to compute a walk decreases |E| by ||. Next, since each edge is added to one of E_1 or E_2, we have that |E|=|E_1|+|E_2|. Moreover, since the numbers of odd and even edges differ by at most one for each walk (the former being no less plentiful), we have that |E_1|≥ |E_2| and ||E_1|- |E_2| |≤ 1 throughout. Property <ref> follows.

Finally, fix a vertex v and i∈{1,2}. By maximality of the walks computed, v is an extreme vertex of at most one walk. Thus, all but at most two edges of v are paired into successive odd/even edges in some walks computed. Consequently, the number of odd and even edges of v can differ by 0, in which case d_i(v)=d_G(v)/2; they may differ by 1, in which case d_i(v) ∈{⌊d_G(v)/2⌋, ⌈d_G(v)/2⌉}; or they may differ by 2, in which case v has two more odd edges than even edges, the last walk v belongs to is an odd-length cycle (and hence G is not bipartite), and we have that d_i(v)∈{d_G(v)/2-1, d_G(v)/2+1}. Properties <ref> and <ref> follow.

Now, to address the (slightly) more general case where some edges have two parallel copies, we let G' be the simple graph obtained after removing all parallel edges from G, and let E'_1 and E'_2 be the edge sets computed by running the above algorithm on G'. Then, for each pair of parallel edges (e,e') in G, we arbitrarily add one of the pair to E_1 and add the other to E_2. Finally, we let E_1 ← E_1∪ E'_1 and E_2 ← E_2∪ E'_2. It is easy to see that the sets E_1 and E_2 are simple (by construction), and that these sets then satisfy all three desired properties with respect to G, since the edge sets E'_1 and E'_2 satisfy these properties with respect to G', and every vertex v has all of its (parallel) edges in E∖ E' evenly divided between E_1 and E_2. The linear running time is trivial given the linear time to compute E'_1,E'_2.
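A minimal self-contained sketch of this walk-splitting procedure for simple graphs is given below (function and variable names are ours, and the sketch mutates its input adjacency structure). To keep each peeled walk maximal, it extends the walk from both of its ends before splitting the edges between E_1 and E_2; it is an illustration of the argument above rather than the paper's pseudocode.

def split_edges(adj):
    # adj: dict mapping each vertex to a set of its neighbors (simple graph).
    E1, E2 = set(), set()

    def take_edge(v):
        u = adj[v].pop()
        adj[u].discard(v)
        return u

    for start in list(adj):
        while adj[start]:
            # Build a maximal walk through `start`: extend forward from
            # `start` until stuck, then extend backward from `start` as well,
            # so neither extreme vertex has a remaining incident edge.
            forward, backward = [], []
            v = start
            while adj[v]:
                u = take_edge(v)
                forward.append(frozenset((v, u)))
                v = u
            v = start
            while adj[v]:
                u = take_edge(v)
                backward.append(frozenset((v, u)))
                v = u
            walk = backward[::-1] + forward
            small, large = (E1, E2) if len(E1) <= len(E2) else (E2, E1)
            small.update(walk[0::2])    # odd (1st, 3rd, ...) edges of the walk
            large.update(walk[1::2])    # even edges of the walk
    return E1, E2

The parallel-edge post-processing described above can then be layered on top by distributing each parallel pair across the two returned sets.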
§ DYNAMIC SET SAMPLING

In this section we provide an data structure for the dynamic set sampling problem (restated below). Recall that this is the basic problem of maintaining a dynamic subset of [n] where every element i is included in the subset independently with probability p_i, under dynamic changes to p_i and re-sampling. This basic problem was studied by <cit.>.

* Our main result of this section is that we can implement each operation in total time linear in the number of operations, n, and the size of the outputs T.

* Comparison with the concurrent work of <cit.>. Our solution is somewhat simpler than that of the concurrent set sampler of <cit.>. For our purposes, the only important difference is that our algorithm is provably .

Our Algorithm <ref> and associated proof of Theorem <ref> stem from a simple insight about computing when an element i∈[n] will be in the output of (). Note that if there are no operations of the form (i,α), then the probability that i is in any individual output of () is p_i. Consequently, the probability that i is not in the output of () for the next t calls to () is (1-p_i)^t. Therefore, the number of calls to () it takes for i to be in the output sample follows the geometric distribution with parameter p_i, i.e., (p_i)!

Leveraging this simple insight leads to an efficient set sampler data structure. Naively, implementing set sampling takes O(n) time per call to (), used to determine for each element i whether or not it should be in the output. However, we could instead simply sample from (p_i) (in expected O(1) time, using <cit.>) whenever (i,α) is called or i is in the output of (), in order to determine the next call to () which will result in i being in the output. Provided we can efficiently keep track of this information for each i, this would yield the desired bounds in Theorem <ref>. Unfortunately, when sampling from (p_i), the output could be arbitrarily large (albeit with small probability). Further, maintaining the data structure for knowing when i is scheduled to be in the output of () would naively involve maintaining a heap on arbitrarily large numbers, incurring logarithmic factors.

There are many potential data structures and techniques to solve this problem. In our Algorithm <ref> we provide one simple, straightforward solution. Every n calls to (), we “rebuild” our data structure and, rather than sampling from (p_i) to determine the next call to () that will output i, we instead simply sample to determine the next call to () before the rebuild that will output i (if there is one). Algorithm <ref> simply does this, resampling this time for i whenever (i,α) is called. Algorithm <ref> is written without an efficient determination of the next output time for each element, so that it is clear that this algorithm is a set sampler.
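To make the scheduling idea concrete, the following simplified Python sketch (with class and method names of our own choosing) schedules, for every element, the index of the next () call that will output it, and re-draws all schedules once per block of n calls. It deliberately ignores the word-RAM constant-time sampling and the adaptivity considerations addressed in the analysis below.

import math, random

class SetSampler:
    # Simplified sketch of the rebuild-and-schedule idea described above.
    def __init__(self, probs):
        self.p = list(probs)                 # p[i] = inclusion probability of i
        self.n = len(self.p)
        self._rebuild()

    def _gap(self, p):
        # Calls to wait until the next inclusion, distributed Geometric(p).
        if p <= 0.0:
            return None
        if p >= 1.0:
            return 0
        return int(math.log(1.0 - random.random()) / math.log(1.0 - p))

    def _schedule(self, i):
        g = self._gap(self.p[i])
        j = None if g is None or self.t + g >= self.n else self.t + g
        self.next_out[i] = j                 # call index before the rebuild, or None
        if j is not None:
            self.buckets.setdefault(j, set()).add(i)

    def _rebuild(self):
        self.t = 0                           # index of the next sample() call
        self.buckets = {}                    # call index -> scheduled elements
        self.next_out = [None] * self.n
        for i in range(self.n):
            self._schedule(i)

    def update(self, i, alpha):
        if self.next_out[i] is not None:     # cancel the stale schedule for i
            self.buckets[self.next_out[i]].discard(i)
        self.p[i] = alpha
        self._schedule(i)

    def sample(self):
        out = sorted(self.buckets.pop(self.t, set()))
        self.t += 1
        if self.t == self.n:
            self._rebuild()
        else:
            for i in out:                    # re-draw for elements just output
                self._schedule(i)
        return out

Here update(i, alpha) cancels any stale schedule for i and re-draws it under the new probability, mirroring the resampling step described above.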
In the following Lemma <ref> we show how to perform this efficient determination or, more precisely, how to implement the for-loop in Lines <ref>-<ref>. We then use this lemma to prove Theorem <ref>.

The for-loop in Lines <ref>-<ref> of Algorithm <ref> can be implemented in expected O(1) time in the word RAM model.

The loop executes the return statement with a value of j∈[t,n] with probability p^j-t(1-p). Consequently, if we let ℓ≥0 be sampled from the geometric distribution with parameter p, i.e., [ℓ=v]=p^v(1-p) for all v∈ℤ_≥ 0, and if ℓ∈{0,...,n-t} we simply execute the return statement with j=t+ℓ and otherwise set τ_i=n+1, then this is equivalent to the lines of the for-loop. Since sampling from a geometric distribution (p) in this manner can be implemented in expected O(log(1/p)/w)=O(1) time in the word RAM model <cit.>, the result follows.

Algorithm <ref> maintains that after each operation (, or ), each i∈[n] is a member of at most one T_j. Further, if i∈ T_j, then τ≤ j and τ_i=j; moreover, τ_i=n+1 if and only if i∉ T_j for all j∈[n]. Further, the algorithm is designed (as discussed) so that T_τ is a valid output of at time τ (for any updates of an adversary that is unaware of T_j for j≥τ). Since the algorithm also ensures that τ≤ n, the algorithm has the desired output. Further, from these properties it is clear that the algorithm can be implemented in O(n) space. It only remains to bound the running time for implementing the algorithm.

To analyze the running time of the algorithm, first note that by Lemma <ref>, each operation can be implemented in expected O(1) time. Consequently, can be implemented in expected O(n) time. Further, since T=() simply calls for elements in its output or for all n elements after every n times it is called, it has the desired expected runtime O(1+|T|) as well.

§ KNOWN DECREMENTAL FRACTIONAL BIPARTITE MATCHING ALGORITHMS

For completeness, we briefly discuss the parameters of the decremental bipartite algorithms of <cit.>, which together with <Ref> and <Ref> yield the results of <Ref>.

There exists an decremental fractional bipartite matching algorithm with total time O(m·^-4·log^3n) and O(^-1·log n) phases.

The algorithm of <cit.> consists of O(^-1·log n) phases, where a phase starts when the maximum matching size of G decreases by a factor of (1-). The bottleneck of each phase is the subroutine Robust-Matching, which computes a fractional matching that does not change during the phase (aside from nullification of value on deleted edges), and is guaranteed to remain (1-)-approximate during the phase <cit.>. This subroutine's running time is O(m·^-3·log^2n). Thus, the algorithm's total running time is O(m·^-4·log^3n).

There exist decremental fractional bipartite matching algorithms that are:
* Deterministic, with total time O(m·^-3·log^5 n) and O(ϵ^-2·log^2 n) phases <cit.>.
* , with total time O(m·^-2· n^o(1)) and O(ϵ^-2·log^2 n) phases <cit.>.

§ ACKNOWLEDGEMENTS

Thank you to Arun Jambulapati for helpful conversations which were foundational for the static bipartite rounding algorithm in this paper and subsequent developments.